Yifu Tao, Yash Bhalgat, Lanke Frank Tarimo Fu, Matias Mattamala, Nived Chebrolu, Maurice Fallon

IEEE International Conference on Robotics and Automation (ICRA) 2024

arXiv
YouTube
bilibili
GitHub

News: The dataset used in this work can be generated from Oxford Spires.

We are working on a second version of SiLVR, and the code will be released and updated on this page. Stay tuned!

Abstract: We present a neural-field-based large-scale reconstruction system that fuses lidar and vision data to generate high-quality reconstructions that are geometrically accurate and capture photo-realistic textures. This system adapts the state-of-the-art neural radiance field (NeRF) representation to also incorporate lidar data, which adds strong geometric constraints on the depth and surface normals. We exploit the trajectory from a real-time lidar SLAM system to bootstrap a Structure-from-Motion (SfM) procedure, both to significantly reduce the computation time and to provide metric scale, which is crucial for the lidar depth loss. We use submapping to scale the system to large-scale environments captured over long trajectories. We demonstrate the reconstruction system with data from a multi-camera, lidar sensor suite onboard a legged robot, hand-held while scanning building scenes for 600 metres, and onboard an aerial robot surveying a multi-storey mock disaster site building.
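
To make the loss structure described in the abstract concrete, here is a minimal sketch (not the authors' released implementation) of how lidar depth and surface-normal terms might be combined with the standard NeRF photometric loss. All tensor names, the validity mask, and the weights w_depth and w_normal are illustrative assumptions, not values from the paper.

import torch
import torch.nn.functional as F

def lidar_visual_loss(pred_rgb, gt_rgb,
                      pred_depth, lidar_depth, depth_valid,
                      pred_normal, lidar_normal,
                      w_depth=0.1, w_normal=0.05):
    """Combine photometric, lidar-depth, and surface-normal losses.

    pred_* come from volume rendering along each ray; lidar_* are
    supervision derived from lidar returns projected into the camera.
    depth_valid (float mask, shape [N]) zeroes out rays without a
    lidar return. Weights are hypothetical, for illustration only.
    """
    # Standard NeRF photometric loss on rendered pixel colours.
    rgb_loss = F.mse_loss(pred_rgb, gt_rgb)

    # L1 depth loss on rays with a valid lidar measurement; metric
    # scale is assumed to come from the SLAM-bootstrapped SfM step.
    n_valid = depth_valid.sum().clamp(min=1)
    depth_loss = (depth_valid * (pred_depth - lidar_depth).abs()).sum() / n_valid

    # Cosine loss aligning rendered normals with lidar-derived normals.
    normal_err = 1.0 - F.cosine_similarity(pred_normal, lidar_normal, dim=-1)
    normal_loss = (depth_valid * normal_err).sum() / n_valid

    return rgb_loss + w_depth * depth_loss + w_normal * normal_loss

The key design point this sketch illustrates is that the depth and normal terms only act on rays with a lidar return, so vision alone supervises the remaining rays.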

Citation

@inproceedings{tao2024silvr,
  title={SiLVR: Scalable Lidar-Visual Reconstruction with Neural Radiance Fields for Robotic Inspection},
  author={Tao, Yifu and Bhalgat, Yash and Fu, Lanke Frank Tarimo and Mattamala, Matias and Chebrolu, Nived and Fallon, Maurice},
  booktitle={IEEE International Conference on Robotics and Automation (ICRA)}, 
  year={2024},
}

Acknowledgement: The authors would like to thank Ren Komatsu for his help with software development, Tobit Flatscher for deploying the Spot robot, Rowan Border for drone data collection, and Sundara Tejaswi Digumarti for helpful discussions.