
We present The Newer College Dataset: a variety of mobile mapping sensors hand-carried at typical walking speed through New College, Oxford, covering nearly 6.7 km. The dataset was collected with two different devices built from commercially available sensors. It contains challenging sequences featuring fast motion, aggressive shaking, rapid lighting changes, and textureless surfaces.

Stereo Vision Lidar IMU dataset (Original, March 2020):

  • Intel Realsense D435i - a stereoscopic-inertial camera
  • Ouster OS-1 (Gen 1) 64 - a 64 multi-beam 3D LiDAR also with an IMU

Multicam Vision Lidar IMU dataset (Extension, December 2021):

  • Sevensense Alphasense Core - a 4-camera visual inertial camera
  • Ouster OS-0 128 - a 128 multi-beam 3D LiDAR also with an IMU

Both datasets are paired with precise, centimetre-accurate ground truth for the motion of the sensor rig.

Click here to access the dataset


We used a tripod-mounted, survey-grade BLK360 LiDAR scanner to capture a detailed, millimetre-accurate 3D map of the test location (containing 290 million points). Using this map, we inferred centimetre-accurate 6 Degree of Freedom (DoF) ground truth for the pose of the device at each LiDAR scan. This enables better evaluation of LiDAR and visual localisation, mapping, and reconstruction systems.

This ground truth is the particular novel contribution of this dataset, and we believe it will enable the systematic evaluation that many similar datasets have lacked. The dataset combines built environments, open spaces, and vegetated areas to test localisation and mapping systems such as vision-based navigation, visual and LiDAR SLAM, 3D LiDAR reconstruction, and appearance-based place recognition.
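Ground-truth trajectories for datasets like this are commonly distributed as plain-text pose lists, and a standard sanity check when evaluating a SLAM or localisation system against them is the absolute trajectory error (ATE). The sketch below assumes the widely used TUM trajectory convention (`timestamp tx ty tz qx qy qz qw` per line); the format and function names are illustrative assumptions, not the dataset's documented API.

```python
# Illustrative sketch only: the TUM-style pose format assumed here
# ("timestamp tx ty tz qx qy qz qw" per line) is a common convention,
# not necessarily the exact layout shipped with this dataset.
import numpy as np

def parse_tum_trajectory(lines):
    """Parse TUM-format pose lines into (timestamps, positions).

    Lines starting with '#' are comments; each data line holds
    'timestamp tx ty tz qx qy qz qw'. Orientations are ignored here.
    """
    stamps, positions = [], []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        vals = [float(v) for v in line.split()]
        stamps.append(vals[0])
        positions.append(vals[1:4])  # keep translation only
    return np.asarray(stamps), np.asarray(positions)

def ate_rmse(gt_positions, est_positions):
    """Translational absolute trajectory error (RMSE) for
    already time-associated pose pairs."""
    diff = np.asarray(gt_positions) - np.asarray(est_positions)
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))
```

In practice the estimated and ground-truth trajectories must first be associated by timestamp and rigidly aligned (e.g. Umeyama alignment) before computing the error; evaluation tools such as `evo` automate both steps.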

Citation

For the stereo vision LiDAR IMU dataset, please cite:
The Newer College Dataset: Handheld LiDAR, Inertial and Vision with Ground Truth
Milad Ramezani, Yiduo Wang, Marco Camurri, David Wisth, Matias Mattamala and Maurice Fallon [Preprint] [PDF] [Video]

@INPROCEEDINGS{ramezani2020newer,
  author={Ramezani, Milad and Wang, Yiduo and Camurri, Marco and Wisth, David and Mattamala, Matias and Fallon, Maurice},
  booktitle={2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, 
  title={The Newer College Dataset: Handheld LiDAR, Inertial and Vision with Ground Truth}, 
  year={2020},
  volume={},
  number={},
  pages={4353-4360},
  doi={10.1109/IROS45743.2020.9340849}}

For the multi-camera vision LiDAR IMU dataset, please cite:
Multi-Camera LiDAR Inertial Extension to the Newer College Dataset
Lintong Zhang, Marco Camurri, David Wisth, Maurice Fallon
[Preprint] [PDF] [Video]

@misc{zhang2021multicamera,
      title={Multi-Camera LiDAR Inertial Extension to the Newer College Dataset},
      author={Lintong Zhang and Marco Camurri and David Wisth and Maurice Fallon},
      year={2021},
      eprint={2112.08854},
      archivePrefix={arXiv},
      primaryClass={cs.RO}
}

Contact Us

If you have any feedback about the dataset, please email us at newercollegedataset@robots.ox.ac.uk.

Authors

The following members of the Dynamic Robot Systems Group at the Oxford Robotics Institute contributed to this dataset: Milad Ramezani, Lintong Zhang, Yiduo Wang, Marco Camurri, David Wisth, Matias Mattamala and Maurice Fallon.

Updates

2020 Mar: Collected original dataset with the stereo configuration.

2020 Jun: Added 4 new experiments with aggressive motion. Ground truth poses to follow.

2020 Jul: Improved ground truth poses (for the long and short experiments) added to data drive.

2021 Dec: Introduced multi-camera lidar inertial dataset and example usage.