Wheel Odometry Integration
Note
Wheel odometry integration requires visual-inertial-kinematics (VIK) calibration, which is available as part of commercial projects. Please contact support@slamcore.com for more information before capturing sequences as described below.
Wheeled ground robots often have sensors on their motors that can be used to obtain odometry estimates. Combining wheel odometry with visual-inertial SLAM can provide highly robust, accurate positioning. This tutorial describes the process to integrate wheel odometry into Slamcore’s Position sensor fusion algorithms within a ROS 2 environment.
Wheel odometry integration allows your system to run SLAM with accurate positioning based on the visual feed, the IMU and the wheel odometry obtained from your robotic platform.
To synchronise the sensor data from the visual, inertial and odometry sources, Slamcore provides calibration customised to your robotic setup. Two stages of calibration are carried out:
- visual-inertial (VI) calibration, to estimate the intrinsic and extrinsic parameters of your IMU and camera, and
- visual-inertial-kinematics (VIK) calibration, to estimate the parameters for wheel odometry.
Requirements
- Wheel odometry nav_msgs/Odometry messages published in ROS 2 Foxy, Galactic or Humble, containing pose data (x and y position and yaw orientation). Measurement covariance and twist data are not required.
- Wheel odometry reported in its own frame of reference (O), which is different from the visual-inertial frame of reference of the robotic system (S).
- A constant odometry measurement frequency of at least 10 Hz.
Note
For an example of how to publish odometry information over ROS 2, see https://navigation.ros.org/setup_guides/odom/setup_odom.html. You may obtain the pose directly from your wheel encoders instead of using tf.
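As a rough illustration, the sketch below publishes such messages with rclpy. The topic name odom, the frame IDs and the placeholder pose values are assumptions; adapt them to your encoder pipeline.

# Minimal ROS 2 odometry publisher sketch (rclpy). The pose values are
# placeholders standing in for your encoder-derived estimates.
import math

import rclpy
from rclpy.node import Node
from nav_msgs.msg import Odometry


class WheelOdomPublisher(Node):
    def __init__(self):
        super().__init__('wheel_odom_publisher')
        self.pub = self.create_publisher(Odometry, 'odom', 10)
        # Publish at a constant rate of at least 10 Hz, as required above.
        self.timer = self.create_timer(0.05, self.publish_odom)  # 20 Hz
        self.x = self.y = self.yaw = 0.0  # integrate your encoders into these

    def publish_odom(self):
        msg = Odometry()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = 'odom'
        msg.child_frame_id = 'base_link'
        # Only x, y and yaw are needed; covariance and twist may be left empty.
        msg.pose.pose.position.x = self.x
        msg.pose.pose.position.y = self.y
        msg.pose.pose.orientation.z = math.sin(self.yaw / 2.0)
        msg.pose.pose.orientation.w = math.cos(self.yaw / 2.0)
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(WheelOdomPublisher())


if __name__ == '__main__':
    main()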
VI Calibration
Please refer to RealSense Camera Calibration for instructions on calibrating your Intel RealSense D435i or D455 camera.
If using the Luxonis OAK-D S2 Camera, the Set up an OAK-D S2 Camera process will involve calibrating your camera.
VIK Calibration
During VIK calibration, the following parameters are estimated from the visual-inertial-kinematics datasets:
- Time offset between the camera and wheel odometry measurements,
- Parameters accounting for the wheel odometry’s systematic and non-systematic errors,
- Wheel odometry to IMU extrinsic transformation: T_SO.
To perform the VIK calibration customised to your camera and wheeled robotic platform, we require multiple VIK calibration datasets recorded from that specific camera and platform, with visual, inertial and wheel odometry data. When recording the calibration datasets, ensure that your camera is mounted securely on your robotic platform in the position and orientation required for deployment.
Additionally, we require an initial estimate of the camera-odometry frame of reference transformation, and an annotated photo showing the camera sensor and wheeled platform setup in order to verify this transform.
VIK Recording Setup
Note
On NVIDIA Jetson kits flashed with the default release of an L4T image, you may need superuser permissions to run the Slamcore ROS dataset recorder. If so, run sudo su before running the launch files.
With wheel odometry input enabled on your robotic platform, you may record a dataset with visual, inertial and odometry data in ROS 2 by running:
$ source /opt/ros/foxy/setup.bash
# initialise your odometry provider and robot controller
# ...
$ ros2 launch slamcore_slam dataset_recorder.launch.py \
> output_dir:=<output-dir> \
> odom_reading_topic:=<odometry-provider-topic> \
> override_realsense_depth:=true \
> realsense_depth_override_value:=false
See more parameter options and defaults under ROS 2 API.
Recording VIK Calibration Sequences
Note
Please take into account the following while recording the calibration sequences.
Start and finish recording with the camera in the same place, pointing in the same direction with the same orientation in all your sequences.
Ensure the camera motion is smooth but not too slow.
Record in a space with enough visual texture in the scene within 2 metres of the camera.
Before starting to record each one of the VIK calibration sequences, stop and restart the odometry-publishing node, or at least make sure that the odometry measurements start from zero translation and rotation.
Ensure that the published odometry data follows the ROS coordinate frame convention: when the robot moves forward, the X component of the odometry translation should increase, and when it moves laterally, its Y component should change. Rotations should also follow this convention; for in-plane 2D motion, only the Z and W quaternion components should change while the robot is operating. One way to check this is sketched below.
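A minimal rclpy sketch of such a check follows; the topic name /odom and the tolerances are assumptions, and this is not part of Slamcore’s tooling.

# Sketch: subscribe to your odometry topic and warn if the data violates
# the conventions above. Adapt the topic name and tolerances to your robot.
import rclpy
from rclpy.node import Node
from nav_msgs.msg import Odometry


class OdomConventionCheck(Node):
    def __init__(self):
        super().__init__('odom_convention_check')
        self.first = True
        self.create_subscription(Odometry, '/odom', self.callback, 10)

    def callback(self, msg):
        p, q = msg.pose.pose.position, msg.pose.pose.orientation
        if self.first:
            self.first = False
            # Measurements should start from (roughly) zero translation.
            if abs(p.x) > 0.01 or abs(p.y) > 0.01:
                self.get_logger().warn('odometry does not start near zero')
        # For planar motion only the Z and W quaternion components should change.
        if abs(q.x) > 0.01 or abs(q.y) > 0.01:
            self.get_logger().warn('non-planar rotation (X/Y quaternion components)')


def main():
    rclpy.init()
    rclpy.spin(OdomConventionCheck())


if __name__ == '__main__':
    main()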
A number of datasets or sequences are required for the VIK calibration:
5 calibration sequences with good conditions for the visual-inertial SLAM system. For each sequence, perform 2 consecutive square loops, either clockwise or counterclockwise, along the perimeter of a 5×5 m square on flat ground (see the sketch after this list for one way to automate this).
While 5 calibration sequences would be ideal, 1-2 will suffice if there are time-constraints.
While a 5×5 m square area or larger is ideal, 1×1, 1.5×1.5, 2×2, 3×3 or 4×4 will suffice if there are space constraints.
The starting point for each sequence can be different, but each ending point and orientation should roughly match the corresponding starting point.
1 long sequence in which the robot drives over different surfaces, such as carpets, hardwood, tatami or rugs. Ideally these should be the surfaces the robot will interact with most during the intended deployment. The goal of this sequence is to estimate the non-systematic errors (i.e., the noise model of the odometry measurements).
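For the square-loop sequences, a hypothetical open-loop sketch is shown below, commanding the pattern over /cmd_vel; the topic name and speeds are assumptions, and driving the loops manually via teleop works just as well.

# Hypothetical sketch: drive two consecutive square loops open-loop by
# publishing geometry_msgs/Twist. Adapt topic, speeds and side length.
import math
import time

import rclpy
from geometry_msgs.msg import Twist

SIDE_M = 5.0      # square side length in metres
LIN_SPEED = 0.3   # m/s, assumed; adapt to your platform
ANG_SPEED = 0.3   # rad/s, assumed


def drive(pub, linear, angular, seconds):
    """Publish a constant Twist for the given duration."""
    msg = Twist()
    msg.linear.x = linear
    msg.angular.z = angular
    end = time.time() + seconds
    while time.time() < end:
        pub.publish(msg)
        time.sleep(0.05)


def main():
    rclpy.init()
    node = rclpy.create_node('square_loop_driver')
    pub = node.create_publisher(Twist, '/cmd_vel', 10)
    for _loop in range(2):      # two consecutive loops per sequence
        for _side in range(4):  # four sides of the square
            drive(pub, LIN_SPEED, 0.0, SIDE_M / LIN_SPEED)
            drive(pub, 0.0, ANG_SPEED, (math.pi / 2) / ANG_SPEED)  # 90 deg turn
    pub.publish(Twist())        # stop the robot
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()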
Change the dataset folder names to end up with the following dataset structure:
VIK_calibration_datasets/
├── VIK_calib0/
│ ├── capture_info.json
│ ├── imu0/
│ ├── ir0/
│ ├── ir1/
│ └── odometry0/
├── VIK_calib1/
├── VIK_calib2/
├── VIK_calib3/
├── VIK_calib4/
└── VIK_calib_long/
Obtain the Camera-Odometry Frame of Reference Transformation
Based on your robotic platform’s setup, roughly measure the physical transformation between the wheel odometry frame of reference and the visual-inertial sensor’s frame of reference as a 4×4 homogeneous matrix. Save this transformation in a text file, e.g. T_SO.txt.
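As a minimal sketch, the rough transform could be assembled and saved with NumPy; the numeric values below are placeholders echoing the example later in this tutorial, not measurements for your platform.

# Sketch: assemble the rough 4x4 homogeneous transform and save it as text.
import numpy as np

T_SO = np.eye(4)
T_SO[:3, :3] = [[0, -1, 0],
                [0, 0, -1],
                [1, 0, 0]]               # rough rotation of frame O in frame S
T_SO[:3, 3] = [0.0154, 0.2358, -0.0874]  # rough translation in metres

np.savetxt('T_SO.txt', T_SO, fmt='%.4f')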

Fig. 63 Example robot and camera-odometry frames of reference
We can work with either one of the following transformations:
- T_SO: the transformation of the wheel odometry frame (O) with respect to the IMU sensor’s frame of reference (S), or
- T_CO: the transformation of the wheel odometry frame (O) with respect to the left IR sensor’s frame of reference (C).
Note
The rough estimate will be overwritten during the calibration process.

Fig. 64 T_SO transformation matrix
Example
As an example, for the robot shown in this tutorial, the T_SO would be:
[ 0, -1, 0, 0.0154;
0, 0, -1, 0.2358;
1, 0, 0, -0.0874;
0, 0, 0, 1.0000 ]

Fig. 65 Approximate distance between frames of reference
The wheel odometry frame of reference is approximately 1.5 cm to the right, 23.6 cm below and 8.7 cm behind the camera’s IMU sensor frame of reference. The wheel odometry frame of reference is also rotated 90 degrees in X and -90 degrees in Z relative to the visual-inertial frame of reference; this rotation is given by the rotation matrix shown above. You may find this online tool useful to convert between different representations of a 3D rotation.
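As an alternative to the online tool, the small SciPy sketch below (assuming scipy is installed) converts the example rotation matrix into a quaternion, which should come out close to the refined values shown further below.

# Sketch: cross-check the example rotation by converting it to a quaternion.
import numpy as np
from scipy.spatial.transform import Rotation

R_SO = np.array([[0.0, -1.0, 0.0],
                 [0.0, 0.0, -1.0],
                 [1.0, 0.0, 0.0]])

x, y, z, w = Rotation.from_matrix(R_SO).as_quat()  # SciPy uses xyzw order
# Approximately +/-(0.5, 0.5, -0.5, 0.5) in wxyz order; a quaternion and
# its negation encode the same rotation.
print('wxyz:', w, x, y, z)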
Detailed look at the visual-inertial frame of reference (camera’s coordinate system)
The Intel RealSense camera’s coordinate system (used for the visual-inertial frame of reference) is shown in the figure below in detail (as illustrated on RealSense’s tutorial page).

Fig. 66 RealSense camera coordinate system (Visual-inertial frame of reference)
The rough estimate provided above will be overwritten during the calibration process. The refined T_SO will be given in the VIK configuration file in the following form (with the rotation expressed in wxyz quaternion form):
"T_SO": {
"R": [
0.4988096468110019,
0.5038227837046615,
-0.5036569976989878,
0.4936407271864948
],
"T": [
0.015391235851986106,
0.23577251157419954,
-0.08736454907300172
]
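As a sketch of how these refined values could be used, the snippet below rebuilds the 4×4 homogeneous T_SO from the quaternion and translation above, e.g. after loading the VIK configuration file with json.load; note the wxyz-to-xyzw reordering SciPy expects.

# Sketch: rebuild the homogeneous T_SO from the refined calibration values.
import numpy as np
from scipy.spatial.transform import Rotation

R_wxyz = [0.4988096468110019, 0.5038227837046615,
          -0.5036569976989878, 0.4936407271864948]
t = [0.015391235851986106, 0.23577251157419954, -0.08736454907300172]

T_SO = np.eye(4)
# The config stores the quaternion in wxyz order; SciPy expects xyzw.
T_SO[:3, :3] = Rotation.from_quat(R_wxyz[1:] + R_wxyz[:1]).as_matrix()
T_SO[:3, 3] = t
print(np.round(T_SO, 4))  # close to the rough estimate measured earlier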
Annotate an image of the robotic platform setup

Fig. 67 An example of the annotated image
For Slamcore to verify the transform, please annotate image(s) of the camera sensor and wheeled platform setup with:
- The camera sensor frame of reference,
- The wheel odometry frame of reference,
- The distance (in metres) between the two frames of reference.
Send Files to Slamcore for Calibration
Dataset Folder Structure
You should end up with the following folder structure:
T_SO.txt
VIK_calibration_datasets/
├── VIK_calib0/
│ ├── capture_info.json
│ ├── imu0/
│ ├── ir0/
│ ├── ir1/
│ └── odometry0/
├── VIK_calib1/
├── VIK_calib2/
├── VIK_calib3/
├── VIK_calib4/
└── VIK_calib_long/
Dataset Validation
Before sending the datasets to Slamcore, ensure that the recorded sequences are suitable for calibration by running visual-inertial SLAM on them using the Slamcore Visualiser. Use the --no-k flag to disable the kinematics stream:
$ slamcore_visualiser dataset -u <path/to/dataset> --no-k
For all the datasets, ensure that the trajectories look reasonable with no large jumps and that loop closures are present (if possible).
Note
The VIK calibration is an iterative process, during which we might request additional datasets or a different recording setup to improve calibration results. Validating the datasets prior to calibration will improve the efficiency of the calibration process.
Compress the files
Compress each dataset into its own archive for ease of file sharing. For example, run the following in your terminal for each dataset:
# rename the dataset directory, then compress
$ mv <path/to/dataset> <path/to/VIK_calib0>
# enter the parent directory of the dataset e.g. /home/<user>/Documents
$ cd <path/to/parent/directory>
# compress the dataset
$ tar -czvf VIK_calib0.tar.gz VIK_calib0/
Files required
Collate these files to send to Slamcore:
- The folder containing the compressed VIK calibration datasets,
- A text file containing the rough estimate of T_SO or T_CO, specifying which transform has been provided: to the IMU or to the left IR sensor,
- Annotated image(s) of the platform setup.
The resulting folder structure:
annotated_setup.png
T_SO.txt
VIK_calibration_datasets/
├── VIK_calib0.tar.gz
├── VIK_calib1.tar.gz
├── VIK_calib2.tar.gz
├── VIK_calib3.tar.gz
├── VIK_calib4.tar.gz
└── VIK_calib_long.tar.gz
Upload the sequences
Upload the files to Slamcore’s MASV portal below or at https://slamcore.portal.massive.app in a single package, filling in the following details:
- Your email
- Package name: <company_name>_<D435i/D455>_<serial_number>_VIK
- Message: any details or description of the data provided
Finally, click SEND. A page indicating “Transfer Complete” will be shown if the upload is successful. You can email us at support@slamcore.com about the data upload.

Fig. 68 The process to upload to Slamcore’s MASV portal
The MASV portal is embedded below and can be used to upload and share the files.
Run the system in VIK Mode
Once the calibration is complete, we will provide you with a VIK configuration file in JSON format, which can be used to run VIK on your setup.
Warning
The configuration file format for running SLAM with customised parameters changed in v23.01. This affects all VIK calibration files previously provided to you. Please see JSON configuration file migration for more information.
Warning
The calibration configuration only applies to the camera and robotic platform setup you have recorded the calibration sequences on. Please do not alter the camera position on the robot, or use a different camera/robot with the VIK configuration file. You might need to calibrate again if the camera moves from the calibrated position or orientation due to bumping.
Run VIK SLAM on live camera (ROS)
To run VIK SLAM live on your robotic platform with our ROS wrapper, ensure you can provide visual, inertial and odometry data and run:
$ source /opt/ros/foxy/setup.bash
# initialise your odometry provider and robot controller
# ...
$ ros2 launch slamcore_slam slam_publisher.launch.py \
> config_file:=<path/to/vik-configuration-file> \
> odom_reading_topic:=<odometry-provider-topic>
Run in Localisation mode with VIK (ROS)
To run in localisation mode, you must have a previously created session map. Load the session map by providing the full path to the session file when launching the SLAM wrapper ROS node:
$ source /opt/ros/foxy/setup.bash
# initialise your odometry provider and robot controller
# ...
$ ros2 launch slamcore_slam slam_publisher.launch.py \
> config_file:=<path/to/vik-configuration-file> \
> session_file:=<path/to/session/file> \
> odom_reading_topic:=<odometry-provider-topic>
Run VIK SLAM with a dataset
You may run VIK SLAM on any dataset that has been recorded with odometry data by passing a VIK configuration file. If a VIK configuration file is not provided, the Slamcore tools default to running in visual-inertial SLAM mode on any type of dataset. To run in the Visualiser:
$ slamcore_visualiser dataset \
> -u <path/to/dataset> \
> -c <path/to/vik-configuration-file>
You may verify that you are running VIK SLAM in the Visualiser by checking the Mode displayed on the left sidebar. The same files can be passed to the dataset processor:
$ slamcore_dataset_processor dataset \
> -u <path/to/dataset> \
> -c <path/to/vik-configuration-file> \
> -o <output/directory>
To play the dataset back through the ROS 2 wrapper instead:
$ source /opt/ros/foxy/setup.bash
$ ros2 launch slamcore_slam slam_publisher.launch.py \
> dataset_path:=<path/to/dataset> \
> config_file:=<path/to/vik-configuration-file>