
Tutorial Overview

This section provides various tutorials to get you started with SLAMcore Tools.

1. Capturing Datasets

Pre-recording datasets allows you to process them multiple times at a later date, so you can evaluate the performance of different localisation/mapping parameters in your use environment. Follow the tutorial Capturing Datasets to capture a master dataset for building a point-cloud of your entire test space, and to capture evaluation datasets for assessing position estimation in the different positioning modes explained below.

2. Processing Datasets

Once you have captured the aforementioned datasets, you can use the SLAMcore Visualiser tool to process them. Three different positioning modes are available. The one you choose will depend on a variety of factors, including your use case, the computational resources available, and your operational and environmental requirements.

For an accurate test of the system, we recommend first running in single-session SLAM mode (mode 2) to create a point-cloud map, and then testing real-time localisation using that pre-built point-cloud map (mode 3).

Visual-Inertial Odometry (VIO)

(OPTIONAL) VIO does not store any history of the locations of natural features or of the robot's past position estimates. The core output of this mode is a position estimate that is smooth but subject to drift over time.

Follow this tutorial: Visual-Inertial Odometry Positioning.
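To see why an odometry-only estimate drifts, note that each new pose is just the previous pose plus a slightly noisy frame-to-frame motion estimate, so the error accumulates without bound because nothing ever corrects it. A minimal numerical sketch of this effect (plain NumPy, not SLAMcore code):

    import numpy as np

    rng = np.random.default_rng(0)

    true_step = np.array([0.05, 0.0])   # robot moves 5 cm forward each frame
    noise_std = 0.002                   # ~2 mm of error per frame-to-frame estimate

    est_pose = np.zeros(2)
    true_pose = np.zeros(2)
    for frame in range(2000):
        true_pose += true_step
        # odometry integrates each noisy increment and never revisits past estimates
        est_pose += true_step + rng.normal(0.0, noise_std, size=2)

    drift = np.linalg.norm(est_pose - true_pose)
    print(f"accumulated drift after 2000 frames: {drift:.3f} m")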

Single-session SLAM mode

This mode tracks and stores the location of natural features to create a live point-cloud, which is used to calculate the real-time position of the robot. In this mode it is possible to detect returning to locations that have been visited before and to correct for any drift that may have accumulated (loop closure). The main outputs of this system are the live pose and an optimised trajectory at the end of the sequence. This positioning mode operates only as a single session, so when it is reset, all data/history is lost. There is the option to save the point-cloud that was created for use by the “Localisation using a pre-built point cloud” mode.

Follow this tutorial: Single Session SLAM Positioning Mode.
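Loop closure is what removes the accumulated drift: recognising a previously visited place reveals how far the estimate has drifted, and the correction can then be redistributed along the trajectory. A toy illustration of the idea, using a simple linear redistribution rather than the pose-graph optimisation a real SLAM system performs:

    import numpy as np

    # A drifted trajectory estimate: for this toy, the robot drove a closed loop,
    # so the final pose should coincide with the first one, but drift left a gap.
    rng = np.random.default_rng(1)
    steps = rng.normal(0.0, 0.01, size=(500, 2))
    trajectory = np.cumsum(steps, axis=0)

    # Detecting the revisit gives the end-to-start error directly.
    closure_error = trajectory[-1] - trajectory[0]

    # Spread the correction linearly along the path. Real SLAM systems solve a
    # pose-graph optimisation instead, but the effect on the trajectory is similar.
    weights = np.linspace(0.0, 1.0, len(trajectory))[:, None]
    optimised = trajectory - weights * closure_error

    print("gap before correction:", np.linalg.norm(closure_error))
    print("gap after correction: ", np.linalg.norm(optimised[-1] - optimised[0]))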

Localisation using a pre-built point cloud

This mode provides the most accurate and robust performance. It operates against an offline point-cloud map of the area, previously recorded in single-session SLAM mode. When operating in this mode, the system matches the live view from the sensor against the offline map, providing an accurate real-time position in the offline map's reference frame.

Follow this tutorial: Localisation Mode.
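Conceptually, localising against a pre-built map means matching the features observed in the live frame against the landmarks stored in the offline map, then using the matched 3D points to constrain the camera pose in the map's reference frame. The sketch below shows only the matching idea, on synthetic data; the descriptor format, matching strategy and pose solver that SLAMcore uses internally are not exposed here.

    import numpy as np

    rng = np.random.default_rng(2)

    # Offline map: descriptors of landmarks stored during the single-session run,
    # each associated with a 3D position in the map reference frame.
    map_descriptors = rng.normal(size=(1000, 32))
    map_points = rng.uniform(-10, 10, size=(1000, 3))

    # Live frame: descriptors of features seen by the camera right now
    # (here, noisy copies of the first 50 map descriptors).
    live_descriptors = map_descriptors[:50] + rng.normal(0.0, 0.05, size=(50, 32))

    # Nearest-neighbour matching: for each live feature, find the closest stored
    # descriptor. The matched 3D points then constrain the camera pose in the
    # map frame (the pose itself would come from a separate PnP/optimisation step).
    dists = np.linalg.norm(live_descriptors[:, None, :] - map_descriptors[None, :, :], axis=-1)
    matches = dists.argmin(axis=1)
    print("correct matches:", int((matches == np.arange(50)).sum()), "of 50")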

3. Multi-Agent Localisation

(OPTIONAL) This tutorial covers an experimental use case of the localisation mode: sharing a pre-built point-cloud map between multiple machines that run SLAMcore software on compatible hardware and operate simultaneously in the same environment.

Follow this tutorial: Multi-Agent Localisation.

4. Wheel Odometry Integration

(OPTIONAL) Wheel odometry integration requires calibration, which is available as part of commercial projects. This tutorial covers the odometry data requirements, how to record calibration datasets, and how to use the resulting calibration file to run SLAMcore software in visual-inertial-kinematics SLAM mode.

Follow this tutorial: Wheel Odometry Integration.
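As an illustration only: in a ROS setup, wheel odometry is typically available as nav_msgs/Odometry messages published at a fixed rate. The topic name, frame conventions, units and rate that SLAMcore actually requires are specified in the tutorial; the snippet below just shows what such a publisher looks like, with placeholder values.

    #!/usr/bin/env python
    # Illustrative wheel-odometry publisher (ROS1). Real values would come from
    # the robot's wheel encoders; topic and frame names follow the tutorial.
    import rospy
    from nav_msgs.msg import Odometry

    rospy.init_node("wheel_odometry_publisher")
    pub = rospy.Publisher("/odom", Odometry, queue_size=10)

    rate = rospy.Rate(50)   # wheel odometry is usually published at a fixed rate
    x = 0.0
    while not rospy.is_shutdown():
        msg = Odometry()
        msg.header.stamp = rospy.Time.now()
        msg.header.frame_id = "odom"
        msg.child_frame_id = "base_link"
        x += 0.001                       # placeholder forward motion per cycle
        msg.pose.pose.position.x = x
        msg.pose.pose.orientation.w = 1.0
        msg.twist.twist.linear.x = 0.05  # placeholder forward velocity (m/s)
        pub.publish(msg)
        rate.sleep()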

5. 2D Occupancy Mapping

SLAMcore uses depth data to generate a 2.5D height map of the ground plane area, which can be converted into a 2D occupancy map. This tutorial guides you from recording a dataset suitable for 2D occupancy mapping through to preparing the map for navigation use in ROS.

Follow this tutorial: 2D Occupancy Mapping.
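Conceptually, converting a 2.5D height map into a 2D occupancy map is a thresholding step: observed cells whose height rises sufficiently above the ground plane become occupied, observed cells near ground level become free, and unobserved cells remain unknown. A simplified sketch of this idea (the threshold value and grid conventions are illustrative, not SLAMcore's actual implementation):

    import numpy as np

    UNKNOWN, FREE, OCCUPIED = -1, 0, 100   # values used by ROS occupancy grids

    def height_map_to_occupancy(heights, observed, obstacle_height=0.10):
        """Threshold a 2.5D height map (metres above the ground plane) into a 2D grid."""
        grid = np.full(heights.shape, UNKNOWN, dtype=np.int8)
        grid[observed & (heights < obstacle_height)] = FREE
        grid[observed & (heights >= obstacle_height)] = OCCUPIED
        return grid

    # Example: a 5x5 patch with one unobserved cell and one 0.4 m obstacle.
    heights = np.zeros((5, 5))
    heights[2, 3] = 0.4
    observed = np.ones((5, 5), dtype=bool)
    observed[0, 0] = False
    print(height_map_to_occupancy(heights, observed))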

6. ROS1 Melodic Navigation Stack Integration

(OPTIONAL) This tutorial demonstrates how the SLAMcore SDK can be used as the main source of accurate positioning and environment mapping for an autonomous robot navigation stack in ROS1 Melodic. For this demonstration, we use a Kobuki robotic platform, a D435i camera and an NVIDIA Jetson NX as the hardware components, together with open-source path planner and costmap generator packages to complete the navigation stack.

Follow this tutorial: ROS1 Navigation Stack Integration.
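Once SLAMcore supplies the pose and occupancy map, the rest of the ROS1 stack is driven in the usual way, for example by sending a navigation goal to move_base. A minimal sketch using the standard move_base action interface; the node, topic and frame names here are the ROS defaults and may differ in your launch configuration:

    #!/usr/bin/env python
    # Send a single navigation goal to move_base (ROS1).
    import rospy
    import actionlib
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    rospy.init_node("send_nav_goal")
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"   # frame published by the localisation source
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 1.0     # 1 m ahead in the map frame
    goal.target_pose.pose.orientation.w = 1.0

    client.send_goal(goal)
    client.wait_for_result()
    print(client.get_state())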

7. ROS2 Foxy Nav2 Integration

(OPTIONAL) This tutorial demonstrates how the SLAMcore SDK can be used as the main source of accurate positioning and environment mapping for an autonomous robot navigation stack in ROS2 Foxy. For this demonstration, we use a Kobuki robotic platform, a D435i camera and an NVIDIA Jetson NX as the hardware components, together with open-source path planner and costmap generator packages to complete the navigation stack.

Follow this tutorial: Nav2 Integration.
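The equivalent step in ROS2 Foxy is to send a goal to Nav2's NavigateToPose action once SLAMcore is providing the pose and map. A minimal rclpy sketch; the action and frame names follow the default Nav2 setup and may differ in your configuration:

    # Send a single navigation goal to Nav2 (ROS2).
    import rclpy
    from rclpy.action import ActionClient
    from rclpy.node import Node
    from nav2_msgs.action import NavigateToPose

    rclpy.init()
    node = Node("send_nav2_goal")
    client = ActionClient(node, NavigateToPose, "navigate_to_pose")
    client.wait_for_server()

    goal = NavigateToPose.Goal()
    goal.pose.header.frame_id = "map"          # frame published by the localisation source
    goal.pose.pose.position.x = 1.0            # 1 m ahead in the map frame
    goal.pose.pose.orientation.w = 1.0

    future = client.send_goal_async(goal)
    rclpy.spin_until_future_complete(node, future)
    node.get_logger().info("goal accepted: %s" % future.result().accepted)
    rclpy.shutdown()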