This package implements a tightly-coupled multi-modal fusion framework. It currently supports fusing LiDAR (geometric and photometric), radar, any external odometry, and IMU measurements to provide robust state estimation in challenging environments. The framework is designed to be modular and easily extensible to new sensors.
mimosa maintains a sliding-window factor graph to fuse factors generated from multiple sensors. On arrival of a new measurement (a pointcloud from the LiDAR or radar, or an odometry message from an external odometry source such as VIO), a new state is "declared" in the graph and connected to the previous state with a preintegrated IMU factor. The measurement is then processed by the corresponding sensor manager to generate one or more new factors, which are added to the graph before the graph is optimized. The optimized state is published on the mimosa_node/graph/odometry topic as a nav_msgs/Odometry message.
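Once the system is running, the published estimate can be inspected directly (the exact namespace prefix may differ depending on how the node is launched):

```bash
# Print the latest fused state estimate (nav_msgs/Odometry)
rostopic echo /mimosa_node/graph/odometry
```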
The IMU factor is based on GTSAM's preintegrated IMU factor, modified so that gravity is also a state in the preintegration. This is because we take the initial orientation of the IMU as the world frame, whereas GTSAM assumes the world frame to be gravity aligned.
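For intuition, here is a sketch of the standard on-manifold preintegration relations between consecutive states $i$ and $j$; the only conceptual change in this package is that the gravity vector $\mathbf{g}_W$ is an optimized variable rather than a fixed constant:

$$
\mathbf{R}_j = \mathbf{R}_i\,\Delta\tilde{\mathbf{R}}_{ij},\qquad
\mathbf{v}_j = \mathbf{v}_i + \mathbf{g}_W\,\Delta t_{ij} + \mathbf{R}_i\,\Delta\tilde{\mathbf{v}}_{ij},\qquad
\mathbf{p}_j = \mathbf{p}_i + \mathbf{v}_i\,\Delta t_{ij} + \tfrac{1}{2}\,\mathbf{g}_W\,\Delta t_{ij}^2 + \mathbf{R}_i\,\Delta\tilde{\mathbf{p}}_{ij}
$$

where $\Delta\tilde{\mathbf{R}}_{ij}$, $\Delta\tilde{\mathbf{v}}_{ij}$, $\Delta\tilde{\mathbf{p}}_{ij}$ are the preintegrated IMU deltas.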
There are two types of LiDAR factors implemented:
- Geometric Factor: This factor uses point-to-plane scan-to-map ICP residuals to constrain the LiDAR pose. The correspondences are found using a k-d tree and the residuals are computed using the point-to-plane distance (a sketch of both residual types follows this list).
- Photometric Factor (Implemented only for Ouster LiDARs): This factor uses photometric error of patches in the intensity image to constrain the LiDAR pose.
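As a rough sketch of the two residual types (our notation, not necessarily the code's): for a LiDAR point $\mathbf{p}$ with map correspondence $\mathbf{q}$ and plane normal $\mathbf{n}$, and for an image patch location $\mathbf{u}$ with intensity function $I$,

$$
r_{\text{geom}} = \mathbf{n}^\top\!\left(\mathbf{T}_{WL}\,\mathbf{p} - \mathbf{q}\right),\qquad
r_{\text{photo}} = I\!\left(\pi\!\left(\mathbf{T}_{LW}\,\mathbf{q}\right)\right) - I_{\text{ref}}\!\left(\mathbf{u}\right)
$$

where $\mathbf{T}_{WL}$ is the estimated LiDAR pose and $\pi(\cdot)$ projects a point into the intensity image.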
The radar factor provides a single factor per pointcloud that utilizes the radial speed residuals from the radar measurements.
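For a static scene, the radial speed $v_r$ reported for a return with unit direction $\mathbf{d}$ (in the radar frame) is, up to sign convention, the projection of the radar's own velocity onto that direction, so each return contributes a residual of roughly the form

$$
r = v_r + \mathbf{d}^\top \mathbf{v}_R
$$

where $\mathbf{v}_R$ is the radar's translational velocity expressed in the radar frame (derived from the estimated body velocity and the radar extrinsics).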
Consecutive odometry measurements (e.g., from a VIO system) are used to create relative pose factors (Between factors) between the corresponding states in the graph.
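Each such factor penalizes the discrepancy between the measured relative pose $\Delta\tilde{\mathbf{T}}_{ij}$ (composed from two consecutive odometry messages) and the relative pose of the corresponding graph states, roughly

$$
r = \mathrm{Log}\!\left(\Delta\tilde{\mathbf{T}}_{ij}^{-1}\,\mathbf{T}_i^{-1}\,\mathbf{T}_j\right)
$$

which is the standard between-factor error.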
These instructions assume that ros-noetic-desktop-full is installed on your Ubuntu 20.04 system.
# dependencies
sudo apt install python3-catkin-tools \
libgoogle-glog-dev \
libspdlog-dev
mkdir -p catkin_ws/src && cd catkin_ws/src
git clone git@github.com:ntnu-arl/config_utilities.git -b dev/mimosa
git clone git@github.com:ntnu-arl/gtsam.git -b feature/imu_factor_with_gravity
git clone git@github.com:ntnu-arl/gtsam_points.git -b minimal_updated
# get this code
git clone git@github.com:ntnu-arl/mimosa.git
# build it
cd ..
catkin config -DCMAKE_BUILD_TYPE=Release -DGTSAM_POSE3_EXPMAP=ON -DGTSAM_ROT3_EXPMAP=ON -DGTSAM_USE_QUATERNIONS=ON -DGTSAM_USE_SYSTEM_EIGEN=ON -DGTSAM_BUILD_WITH_MARCH_NATIVE=OFF -DGTSAM_BUILD_EXAMPLES_ALWAYS=OFF -DGTSAM_WITH_TBB=OFF
catkin build mimosa

This package can be run either online (mimosa_node) or offline from a rosbag (mimosa_rosbag). The online version is for deployment on a robot, while the offline version is for testing and debugging. Both versions are functionally identical since they use the same callbacks; the offline version simply reads a rosbag and processes the callbacks directly instead of receiving messages over the network.
Download any rosbag from https://huggingface.co/datasets/ntnu-arl/unified_autonomy_stack_datasets. There are two robot variants in this dataset: one with an Ouster LiDAR and one with a Unipilot module (with a Robosense Airy LiDAR). Depending on the rosbag you downloaded, you will need to build either the ouster_ros or the rslidar_sdk package to convert the raw LiDAR packets to pointclouds; please follow the instructions in the respective repositories to build the drivers. Note that for rslidar_sdk you will need to modify the config file to set lidar_type to RSAIRY, common/msg_source to 2 (packets come from a ROS/ROS 2 message), send_packet_ros to false, and send_point_cloud_ros to true. You can find a working config file here.
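For orientation, the relevant fields in the rslidar_sdk config sit roughly as below (a sketch only; the layout and topic names may differ between rslidar_sdk versions, so prefer the linked working config):

```yaml
common:
  msg_source: 2              # 2: packets come from a ROS message
  send_packet_ros: false
  send_point_cloud_ros: true
lidar:
  - driver:
      lidar_type: RSAIRY
    ros:
      ros_recv_packet_topic: /rslidar_packets
      ros_send_point_cloud_topic: /rslidar_points
```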
The bags should be replayed with the --clock option and with the global /use_sim_time parameter set to true. A simple way to do this is to create a launch file called simcore.launch with the following content:
<launch>
<param name="/use_sim_time" value="true"/>
</launch>

Then you can launch the system as:
# Terminal 1
roslaunch simcore.launch
# Terminal 2
roslaunch ouster_ros replay.launch
# or
roslaunch rslidar_sdk start.launch
# Terminal 3
roslaunch mimosa hornbill.launch
# or
roslaunch mimosa magpie.launch
# Terminal 4
rosbag play --clock /path/to/your/rosbag.bag

Download the dataset from https://projects.asl.ethz.ch/datasets/enwide. After downloading any of the sequences in the dataset, you can run the following command to launch the example:
roslaunch mimosa enwide_rosbag.launch bag:="/path/to/your/rosbag.bag" viz:="true"

Download the dataset from https://ori-drs.github.io/newer-college-dataset/download/. After downloading any of the sequences in the dataset, you can run the following command to launch the example:
roslaunch mimosa newer_college_rosbag.launch bag:="/path/to/your/rosbag.bag" viz:="true"

There is also a script dataset_evaluation.py that can be used to run the complete dataset evaluation. This script runs the mimosa_rosbag node on all the sequences in the dataset and saves the results in a folder. Before running this script, make sure to set the correct dataset_path and results_directory. It is assumed that the dataset is laid out as follows:
dataset_path/
├── sequence_1
│   ├── xxxxxsequence_1.bag
│   ├── gt-sequence_1.csv
├── sequence_2
│   ├── xxxxxsequence_2.bag
│   ├── gt-sequence_2.csv
├── sequence_3
│   ├── xxxxxsequence_3.bag
│   ├── gt-sequence_3.csv
├── sequence_4
│   ├── xxxxxsequence_4.bag
│   ├── gt-sequence_4.csv

python dataset_evaluation.py
To run on your own data, you need to set up the following:

- Take the most relevant launch file
- Modify the remapping of the input topics to your sensor topics in the launch file (a hypothetical remapping sketch follows this list)
- Modify parameters in the corresponding config file
  - e.g. for Velodyne, the `point_skip_factor` must be 1
- Set your transforms for `T_B_S` for each of the sensors you are using. This is the transform from the Sensor frame to the Body frame (i.e. the pose of the Sensor in the Body frame).
- Launch your launch file
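A hypothetical remapping snippet for the second step, assuming your sensors publish on /my_lidar/points and /my_imu (the node executable name, internal input topic names, and topics shown here are illustrative, not taken from the package; check the launch file you copied for the actual names):

```xml
<launch>
  <node pkg="mimosa" type="mimosa_node" name="mimosa_node" output="screen">
    <!-- Illustrative only: map the node's expected input topics to your sensor topics -->
    <remap from="lidar/points" to="/my_lidar/points"/>
    <remap from="imu" to="/my_imu"/>
  </node>
</launch>
```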
This project is licensed under the BSD-3-Clause License - see the LICENSE file for details.
If you use this work in your research, please cite the relevant publications:
@misc{perception_pglio,
title = {{PG}-{LIO}: {Photometric}-{Geometric} fusion for {Robust} {LiDAR}-{Inertial} {Odometry}},
shorttitle = {{PG}-{LIO}},
url = {http://arxiv.org/abs/2506.18583},
doi = {10.48550/arXiv.2506.18583},
publisher = {arXiv},
author = {Khedekar, Nikhil and Alexis, Kostas},
month = jun,
year = {2025},
note = {arXiv:2506.18583 [cs]},
keywords = {Computer Science - Robotics},
}
@inproceedings{perception_dlrio,
title = {Degradation {Resilient} {LiDAR}-{Radar}-{Inertial} {Odometry}},
url = {https://ieeexplore.ieee.org/document/10611444},
doi = {10.1109/ICRA57147.2024.10611444},
booktitle = {2024 {IEEE} {International} {Conference} on {Robotics} and {Automation} ({ICRA})},
author = {Nissov, Morten and Khedekar, Nikhil and Alexis, Kostas},
month = may,
year = {2024},
keywords = {Degradation, Estimation, Laser radar, Odometry, Prevention and mitigation, Robot sensing systems, Sensors},
pages = {8587--8594},
}
@ARTICLE{perception_jplRadar,
author={Nissov, Morten and Edlund, Jeffrey A. and Spieler, Patrick and Padgett, Curtis and Alexis, Kostas and Khattak, Shehryar},
journal={IEEE Robotics and Automation Letters},
title={Robust High-Speed State Estimation for Off-Road Navigation Using Radar Velocity Factors},
year={2024},
volume={9},
number={12},
pages={11146-11153},
keywords={Radar;Sensors;Velocity measurement;Radar measurements;Laser radar;Odometry;State estimation;Robustness;Robot sensing systems;Field robots;localization;sensor fusion},
doi={10.1109/lra.2024.3486189}
}

You can open an issue or contact us for any questions: