Jianeng Wang¹, Matias Mattamala¹, Christina Kassab¹, Guillaume Burger², Fabio Elnecave², Lintong Zhang¹, Marine Petriaux², Maurice Fallon¹

¹ Dynamic Robot Systems Group, Oxford Robotics Institute, University of Oxford
² Wandercraft SAS

Accepted to IEEE Robotics and Automation Letters (RA-L) 2025

arXiv
YouTube



Abstract: Self-balancing exoskeletons are a key enabling technology for individuals with mobility impairments. While current research challenges focus on human-compliant hardware and control, unlocking their use for daily activities requires a scene perception system. In this work, we present Exosense, a vision-centric scene understanding system for self-balancing exoskeletons. We introduce a multi-sensor visual-inertial mapping device, as well as a navigation stack for state estimation, terrain mapping, and long-term operation. We tested Exosense attached to both a human leg and Wandercraft’s Personal Exoskeleton in real-world indoor scenarios. This allowed us to evaluate the system during typical periodic walking gaits, as well as to assess future use in multi-story environments. We demonstrate that Exosense can achieve an odometry drift of about 4 cm per meter traveled and construct terrain maps with under 1 cm average reconstruction error. It can also operate in a visual localization mode within a previously mapped environment, providing a step towards long-term operation of exoskeletons.
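
To make the two headline metrics concrete, below is a minimal sketch (Python, assuming NumPy and SciPy) of how such quantities are commonly computed from logged trajectories and maps. This is an illustration under our own assumptions, not the paper's evaluation code; the function names and the use of time-aligned ground truth are hypothetical.

# Hedged sketch: how an odometry-drift rate and a mean terrain
# reconstruction error could be computed. Not the authors' code.
import numpy as np
from scipy.spatial import cKDTree

def drift_per_meter(est_xyz: np.ndarray, gt_xyz: np.ndarray) -> float:
    """Final-position error divided by total path length.

    est_xyz, gt_xyz: (N, 3) time-aligned positions in meters.
    Returns drift in meters per meter traveled (0.04 ~= 4 cm/m).
    """
    end_error = np.linalg.norm(est_xyz[-1] - gt_xyz[-1])
    # Path length: sum of step-to-step distances along the ground truth.
    path_length = np.sum(np.linalg.norm(np.diff(gt_xyz, axis=0), axis=1))
    return float(end_error / path_length)

def mean_reconstruction_error(map_pts: np.ndarray, ref_pts: np.ndarray) -> float:
    """Average nearest-neighbor distance (meters) from the built terrain
    map points to a reference point cloud, e.g. a survey-grade scan."""
    dists, _ = cKDTree(ref_pts).query(map_pts)
    return float(np.mean(dists))

With these definitions, a reported drift of about 4 cm per meter corresponds to drift_per_meter returning roughly 0.04, and the terrain-mapping result to mean_reconstruction_error returning under 0.01.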

Citation

@misc{wang2024exosense,
  title={Exosense: A Vision-Based Scene Understanding System For Exoskeletons},
  author={Jianeng Wang and Matias Mattamala and Christina Kassab and Guillaume Burger and Fabio Elnecave and Lintong Zhang and Marine Petriaux and Maurice Fallon},
  year={2024},
  eprint={2403.14320},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2403.14320},
}

Acknowledgements: This work was supported by a Royal Society University Research Fellowship (Fallon, Kassab), the Horizon Europe project DigiForest (101070405) (Wang), and EPSRC C2C Grant EP/Z531212/1 (Mattamala). We thank Wayne Tubby and Matthew Graham for hardware design support.