We present an inverse image-formation module that enhances the robustness of existing visual SLAM pipelines in casually captured scenarios. Casual video captures often suffer from motion blur and varying appearance, which degrade the quality of the final coherent 3D visual representation. We propose integrating the physical image formation process into the SLAM system, which accumulates measurements in linear HDR radiance maps. Specifically, each frame aggregates images rendered from multiple poses along the camera trajectory, explaining the motion blur prevalent in hand-held videos. Additionally, we accommodate per-frame appearance variation by dedicating explicit variables to the image formation steps, namely white balance, exposure time, and the camera response function. By jointly optimizing these additional variables, the SLAM pipeline produces high-quality images along with more accurate trajectories. Extensive experiments demonstrate that our approach can be incorporated into recent visual SLAM pipelines using various scene representations, such as neural radiance fields or Gaussian splatting.
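As a rough illustration of the per-frame appearance variables described above, the minimal PyTorch sketch below maps linear HDR radiance to an LDR observation through learnable exposure, white balance, and a camera response function. The class name ImageFormation, the gamma-style CRF, and all parameter names are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn

class ImageFormation(nn.Module):
    """Differentiable LDR image formation from linear HDR radiance.

    Hypothetical sketch: per-frame exposure and white balance are learned
    in log space, and the camera response function (CRF) is approximated
    by a single learnable gamma curve. The paper's exact parameterization
    may differ.
    """

    def __init__(self, num_frames: int):
        super().__init__()
        # Per-frame log-exposure time (positive after exp()).
        self.log_exposure = nn.Parameter(torch.zeros(num_frames))
        # Per-frame, per-channel white-balance gains in log space.
        self.log_wb = nn.Parameter(torch.zeros(num_frames, 3))
        # Shared CRF parameter (assumed gamma-like response).
        self.log_gamma = nn.Parameter(torch.zeros(1))

    def forward(self, hdr: torch.Tensor, frame_idx: int) -> torch.Tensor:
        # hdr: (..., 3) linear radiance rendered from the scene representation.
        irradiance = hdr * self.log_exposure[frame_idx].exp()
        balanced = irradiance * self.log_wb[frame_idx].exp()
        # Apply the differentiable CRF and clamp to the valid LDR range.
        ldr = balanced.clamp(min=1e-8).pow(self.log_gamma.exp())
        return ldr.clamp(0.0, 1.0)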
We reconstruct a sharp HDR radiance map. Motion blur is simulated by integrating sharp images rendered from virtual camera poses sampled along the trajectory during the exposure time. The blurry LDR image is then obtained by applying a differentiable tone-mapping module. The SLAM pipeline simultaneously performs tracking and mapping from the degraded input images to reconstruct a sharp HDR map.
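The blur model in the caption above could be sketched as averaging sharp HDR renders at virtual poses within the exposure window, as below. Here render_fn stands in for whatever renderer the underlying backend uses (NeRF- or Gaussian-splatting-based), and the linear pose blend is a simplifying assumption; a faithful version would interpolate rotations on SE(3).

import torch

def simulate_blur(render_fn, pose_start, pose_end, n_samples: int = 5):
    """Approximate motion blur by averaging sharp HDR renders along the
    camera trajectory during the exposure interval.

    Hypothetical sketch: poses are blended linearly here for brevity,
    whereas rotations should properly be interpolated on SE(3)/SO(3).
    """
    hdr_accum = None
    for t in torch.linspace(0.0, 1.0, n_samples):
        # Virtual camera pose at normalized exposure time t (assumed linear blend).
        pose_t = (1.0 - t) * pose_start + t * pose_end
        sharp = render_fn(pose_t)  # sharp linear HDR image at this pose
        hdr_accum = sharp if hdr_accum is None else hdr_accum + sharp
    # Averaging over samples approximates the integral over the exposure window.
    return hdr_accum / n_samples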
I2-SLAM is a generic module that improves the quality of existing visual SLAM approaches by inverting the image formation process for casually captured videos.
@InProceedings{I2-SLAM_2024,
    author    = {Bae, Gwangtak and Choi, Changwoon and Heo, Hyeongjun and Kim, Sang Min and Kim, Young Min},
    title     = {I2-SLAM: Inverting Imaging Process for Robust Photorealistic Dense SLAM},
    booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
    year      = {2024},
}