In this post, I give a brief introduction to recent semantic Gaussian Splatting (GS) SLAM papers. The first was accepted at ECCV 2024, and the other two at ACM MM 2024.

SGS-SLAM: Semantic Gaussian Splatting For Neural Dense SLAM

Method

  1. The multi-channel Gaussian representation augments each Gaussian with a semantic color (3 channels, as in LangSplat); see the parameter sketch after this list.
  2. Tracking loss:
    1. The camera pose is initialized with a constant-velocity assumption: $E_{t+1} = E_t + (E_t - E_{t-1})$
    2. The pose is then optimized with a photometric loss + depth loss + semantic loss (sketched after this list)
  3. Keyframe selection and weighting:
    1. Geometric-based selection: Randomly sample pixels of the current frame and take their Gaussians. These Gaussians are projected onto the camera views of the keyframes; keyframes onto which many of them project (i.e., with high overlap) receive lower weights and are removed (see the overlap sketch after this list).
    2. Semantic-based selection: Remove the keyframes with high mIoU
  4. Mapping loss: depth loss and SSIM loss for the rendered color and semantic images (sketched after this list)
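
To make the multi-channel representation concrete, here is a minimal sketch of the per-Gaussian parameters with an extra 3-channel semantic color next to the RGB color. The class and field names are my own placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SemanticGaussians(nn.Module):
    """Minimal multi-channel 3D Gaussian container (illustrative, not the paper's code)."""

    def __init__(self, num_gaussians: int):
        super().__init__()
        n = num_gaussians
        identity_quat = torch.zeros(n, 4)
        identity_quat[:, 0] = 1.0                            # w = 1 -> identity rotation
        self.means = nn.Parameter(torch.zeros(n, 3))         # 3D centers
        self.scales = nn.Parameter(torch.ones(n, 3))         # per-axis scales
        self.rotations = nn.Parameter(identity_quat)         # quaternions
        self.opacities = nn.Parameter(torch.zeros(n, 1))     # opacity logits
        self.rgb = nn.Parameter(torch.rand(n, 3))            # appearance color
        self.semantic_rgb = nn.Parameter(torch.rand(n, 3))   # 3-channel semantic color (as in LangSplat)
```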
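The tracking step can be sketched as follows, under my assumptions: initialize the pose with the constant-velocity model $E_{t+1} = E_t + (E_t - E_{t-1})$, then optimize it against a weighted sum of photometric, depth, and semantic rendering errors. The `render` callable and the loss weights are hypothetical placeholders.

```python
import torch

def constant_velocity_init(E_t: torch.Tensor, E_t_minus_1: torch.Tensor) -> torch.Tensor:
    """E_{t+1} = E_t + (E_t - E_{t-1}), applied to a pose parameter vector
    (e.g., translation + quaternion), not to a full SE(3) matrix."""
    return E_t + (E_t - E_t_minus_1)

def tracking_loss(render, gaussians, pose, gt_rgb, gt_depth, gt_sem,
                  w_rgb: float = 1.0, w_depth: float = 1.0, w_sem: float = 0.5):
    """Photometric + depth + semantic L1 losses against the rendered frame.
    `render(gaussians, pose)` is assumed to return (rgb, depth, semantic) images."""
    rgb, depth, sem = render(gaussians, pose)
    return (w_rgb * (rgb - gt_rgb).abs().mean()
            + w_depth * (depth - gt_depth).abs().mean()
            + w_sem * (sem - gt_sem).abs().mean())
```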
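The geometric keyframe-selection heuristic, as I read it, can be sketched like this: project the Gaussians of randomly sampled current-frame pixels into each keyframe's view and down-weight keyframes in which many of them land (high overlap). The frustum check and the `1 - overlap` weighting rule are assumptions for illustration.

```python
import torch

def inside_frustum(points_cam: torch.Tensor, K: torch.Tensor, H: int, W: int) -> torch.Tensor:
    """Boolean mask of 3D points (camera frame) whose projections fall inside the image."""
    z = points_cam[:, 2].clamp(min=1e-6)
    u = K[0, 0] * points_cam[:, 0] / z + K[0, 2]
    v = K[1, 1] * points_cam[:, 1] / z + K[1, 2]
    return (points_cam[:, 2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)

def keyframe_overlap_weights(sampled_means: torch.Tensor, keyframe_poses, K, H, W):
    """Higher overlap (more sampled Gaussians visible in a keyframe) -> lower weight."""
    weights = []
    for T_wc in keyframe_poses:                      # world-to-camera transforms, 4x4
        pts_h = torch.cat([sampled_means, torch.ones_like(sampled_means[:, :1])], dim=1)
        pts_cam = (T_wc @ pts_h.T).T[:, :3]
        overlap = inside_frustum(pts_cam, K, H, W).float().mean()
        weights.append(1.0 - overlap)                # placeholder weighting rule
    return torch.stack(weights)
```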
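Finally, a minimal sketch of the mapping loss: a depth term plus SSIM-based terms on the rendered color and semantic images. The relative weights and the `ssim_fn` argument (any standard SSIM implementation) are assumptions, not the paper's exact formulation.

```python
import torch

def mapping_loss(render, gaussians, pose, gt_rgb, gt_depth, gt_sem, ssim_fn,
                 w_depth: float = 1.0, w_ssim: float = 0.2):
    """Depth L1 plus SSIM losses on the rendered color and semantic images.
    `ssim_fn(a, b)` is assumed to return a similarity score in [0, 1]."""
    rgb, depth, sem = render(gaussians, pose)
    depth_l1 = (depth - gt_depth).abs().mean()
    ssim_rgb = 1.0 - ssim_fn(rgb, gt_rgb)            # structural loss on color render
    ssim_sem = 1.0 - ssim_fn(sem, gt_sem)            # structural loss on semantic render
    return w_depth * depth_l1 + w_ssim * (ssim_rgb + ssim_sem)
```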


SemGauss-SLAM: Dense Semantic Gaussian Splatting SLAM