RealSense Camera Calibration¶
This tutorial provides a guide to capture sequences that can be used to enhance the calibration of your Intel RealSense D435i or D455. During the calibration, the following parameters are estimated from the visual-inertial datasets:
Camera intrinsics and distortion parameters
Camera to IMU (Sensor) extrinsics
Time offset between the camera and IMU measurements
Noise densities for the accelerometer and gyro IMU biases
Accelerometer and gyro IMU bias priors
Once the calibration is applied to your account, you may see improvements in the accuracy and robustness of SLAM when working with the SLAMcore SDK.
This tutorial assumes that you have the following:
An Intel RealSense D435i or D455 installed and plugged into a portable local machine via USB
The latest version of our SLAMcore SDK installed on the local machine
A RealSense box (or substitute object) to use as a visual reference point in the scene
To perform the VI calibration customised to your camera, we require two sequences recorded from that specific camera, containing both visual and inertial data. Guidelines for the ideal recording environment:
An office-sized recording space,
Walls with abundant visual texture and clutter, and
If blank/texture-less walls, ceilings or floors are present, avoid pointing the camera at them.
If only a large warehouse-sized space is available, limit your movements to one corner that contains visual textures and clutter in the scene, and avoid pointing the camera far into the distance.
The goal is to record sequences in which visual-inertial SLAM can easily perform well.
Capturing a Sequence¶
Record two sequences, each about 2 minutes long, with the camera performing slow, steady motion around the RealSense box (or substitute object).
Before beginning the capture, you may want to read through the full instructions below and take a look at our demonstration of a calibration sequence capture:
Ensure your camera and laptop setup is portable. You may find it easier to hold the camera with its tripod attached.
Place the camera and the RealSense box on a convenient surface, 1-2 m apart.
Mark this “starting position” of the camera as accurately as possible, e.g. with duct tape. You will have to return the camera to this spot at the end of the recording.
Launch the Dataset Recorder GUI or CLI tool, with the depth stream turned off.
$ slamcore_dataset_recorder --no-depth
(More details at SLAMcore Dataset Recorder)
$ slamcore_dataset_recorder_cli --no-depth -o <output-directory>
(More details at Dataset Recorder CLI)
$ source /opt/ros/melodic/setup.bash
$ roslaunch slamcore_slam run_dataset_recorder.launch \
>   override_realsense_depth:=true \
>   realsense_depth_override_value:=false \
>   output_dir:=<output_directory>
[OPTIONAL] To enable ROS1 visualisation in RViz, run the setup_monitoring.launch file in another terminal window or machine:
$ roslaunch slamcore_viz setup_monitoring.launch
Please install the slamcore-viz Debian package to use RViz as described in this section.
(More details at Dataset Recorder)
$ source /opt/ros/foxy/setup.bash
$ ros2 launch slamcore_slam run_dataset_recorder.launch.py \
>   override_realsense_depth:=true \
>   realsense_depth_override_value:=false \
>   output_dir:=<output_directory>
(More details at Dataset Recorder)
Imagine a spherical surface in the recording space, with the RealSense box in the centre of the sphere.
Begin recording. Pick up the camera from the start marker, slowly and smoothly move it along the spherical surface with the camera facing the RealSense box at all times. Ensure that the trajectory does not contain sudden jerks or rotations.
Perform arcs around the RealSense box, arcing as far as possible while avoiding blank walls in the camera’s field of view at all times.
The sequence should be around 2 minutes long. You may perform more arcs from a different spot to fill up the time.
Return the camera to the start position marked previously, and stop the recording.
Repeat the steps above to record a second dataset. You may use a different starting position or room that meets the requirements.
When you’re done, you should have two datasets in the directory you specified earlier.
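As a quick sanity check, you can list the recorded dataset folders. This is a sketch that assumes both sequences were saved under a common VI_calibration_datasets directory; adjust the path to match your output directory:

```shell
# Sketch: list the dataset folders two levels deep.
# Assumes both sequences live under VI_calibration_datasets/.
if [ -d VI_calibration_datasets ]; then
    find VI_calibration_datasets -maxdepth 2 -type d | sort
fi
```

You should see one folder per sequence, each containing the sensor stream subdirectories.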
Send Files to SLAMcore for Calibration¶
Dataset Folder Structure¶
You should end up with the following dataset structure:
VI_calibration_datasets/
├── VI_calib0/
│   ├── capture_info.json
│   ├── imu0/
│   ├── ir0/
│   └── ir1/
└── VI_calib1/
Before sending the datasets to SLAMcore, ensure that the sequences recorded are suitable for calibration by running SLAM on them.
Launch the SLAMcore Visualiser:
$ slamcore_visualiser dataset -u <path/to/dataset>
For all the datasets, ensure that the trajectories look reasonable with no large jumps and that loop closures are present (if possible).
Compress the files¶
Compress each sequence into its own archive for ease of file sharing. For example, run in your terminal window:
# rename the dataset directory, then compress
$ mv <path/to/dataset> <path/to/VI_calib0>
$ tar -czvf VI_calib0.tar.gz <path/to/VI_calib0>
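Alternatively, once the datasets have been renamed to VI_calib0 and VI_calib1, both archives can be created in one loop. This is a sketch that assumes the renamed directories are in the current working directory:

```shell
# Sketch: archive each renamed dataset directory, if present.
for d in VI_calib0 VI_calib1; do
    if [ -d "$d" ]; then
        tar -czvf "${d}.tar.gz" "$d"
    fi
done
```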
The folder structure of the compressed datasets should roughly be:
SLAMcore_<company_name>_calibration_datasets/
└── VI_calibration_datasets/
    ├── VI_calib0.tar.gz
    └── VI_calib1.tar.gz
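The upload layout above can be assembled with a couple of commands. This is a sketch in which ACME stands in for your company name; replace it with your own:

```shell
# Sketch: build the upload folder and move the archives into it.
# ACME is a placeholder company name; replace it with your own.
mkdir -p SLAMcore_ACME_calibration_datasets/VI_calibration_datasets
for f in VI_calib0.tar.gz VI_calib1.tar.gz; do
    if [ -f "$f" ]; then
        mv "$f" SLAMcore_ACME_calibration_datasets/VI_calibration_datasets/
    fi
done
```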
Upload the sequences¶
Upload the compressed files to https://slamcore.portal.massive.app in one single package. Name your package accordingly (e.g. SLAMcore_<company_name>_calibration_datasets) and click SEND. A page indicating “Transfer Complete” will be shown if the upload succeeds.
Email us at email@example.com to notify us of the data upload.
Once received, it may take us 1-2 working days to generate the calibration and results. We will email you a confirmation and apply the calibration permanently to your camera on your SLAMcore account via https://portal.slamcore.com. As all changes are applied in the back end, you will not be able to view the calibration directly.
The calibration will take effect the next time you run the SLAMcore software.
Please email us if you wish to revert to the default calibration due to poor results from the new calibration.