Nav2 Integration Overview
This page introduces users to a number of example implementations provided in the slamcore-ros2-examples repository, which integrate the Slamcore SLAM algorithms into Nav2 (the ROS 2 Navigation Stack) on ROS 2 Foxy, Galactic or Humble, using Slamcore as the core component to map the environment as well as to provide accurate positioning of the robotic platform.
Note
This page covers an introduction to the Slamcore ↔︎ Nav2 Integration, the supported robot platforms and how to set them up, as well as any changes that are required to use Slamcore.
For the actual demo instructions, see the following page, Nav2 Integration Guide.
Goal
In these examples the Slamcore SDK is used as the main source of positioning during navigation, as well as for mapping the environment before or during navigation. In the Nav2 documentation examples, localisation and mapping are done using SLAM Toolbox, an open-source 2D graph-based SLAM library which uses 2D laser scans for these tasks. AMCL, which also uses 2D laser scans, is also suggested as a localisation alternative.
Instead, we’ll be using the 2D Occupancy Mapping capabilities of our SDK to generate an occupancy grid map and our visual-inertial SLAM positioning to localise in that map. Additionally, we will integrate wheel odometry into our SLAM system to increase localisation robustness.
Warning
To use wheel odometry, you will require a unique VIK calibration. Further details are provided in the Visual-Inertial-Kinematic Calibration section of the Nav2 Integration Guide.

Fig. 70 Using the Slamcore ROS 2 Wrapper for navigation
Hardware Setup
The slamcore-ros2-examples repository provides example launch and configuration files to use the Slamcore ROS 2 Wrapper with an Intel RealSense D435i camera, a compute module such as the NVIDIA Jetson Xavier NX or Raspberry Pi 4B, and one of the following robotic platforms:
iRobot Create 3
Clearpath Robotics TurtleBot 4 Standard or Lite
Kobuki
We will also use a laptop as a visualisation machine. Images of each setup along with mounting instructions can be found in the tabs below.

iRobot Create 3
Supported Software
ROS 2 Galactic
Slamcore SDK v23.01
Create 3 Firmware G4.1
Mounting Hardware
In our example, we mounted the RealSense D435i camera using iRobot’s 3D printable camera mount, which can be downloaded from the Create 3 docs Sensor Mounts Page. Mounts for the Raspberry Pi 4B or Jetson Xavier NX can also be found in their Compute Boards Page.
Mounting Instructions & Hookup Guide
For this example, the camera mount was placed right behind the Create 3 buttons and secured using 4x M3 self-tapping screws under the Create 3 plate, which screwed into the 3D printed mount from underneath. The compute board was mounted with another 3D printed mount in the cargo bay and connected to the Create 3 as explained in their Hookup Guides (Jetson Xavier NX, Raspberry Pi).
Make sure to also configure the software on your compute board to communicate with the Create 3 over USB correctly as explained in their Software Config pages (Jetson Xavier NX, Raspberry Pi).
Lastly, if you’d like to use wheel odometry, make sure you set up Chrony on your compute board following the Network Time Protocol page from the Create 3 docs, to ensure the Create 3 and compute board are on the same clock.
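For reference, a chrony configuration on the compute board might contain directives along these lines. This is an illustrative fragment only: the 192.168.186.0/24 subnet is the Create 3's default USB-C network, but you should follow the exact values and steps given in the Create 3 Network Time Protocol page.

```
# /etc/chrony/chrony.conf on the compute board (illustrative excerpt)
allow 192.168.186.0/24   # serve time to the Create 3 over the USB network
local stratum 10         # keep serving local time even without upstream sync
```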

TurtleBot 4
Supported Software
ROS 2 Galactic
Slamcore SDK v23.01
Create 3 Firmware G4.1
Mounting Hardware
The TurtleBot 4 models come pre-assembled and do not require any additional mounting hardware unless you want to replace the Raspberry Pi 4B with a different compute board.
Mounting Instructions & Hookup Guide
Simply replace the Oak-D camera with a RealSense D435i.

Kobuki
Supported Software
ROS 2 Foxy
Slamcore SDK v23.01
Mounting Hardware
We designed a custom mount for the Kobuki to hold the RealSense D435i and Jetson Xavier NX board. Download the STL file for 3D printing and SVG for laser cutting from the following links:
Additional components required for mounting are listed in the assembly guide linked below.
Mounting Instructions & Hookup Guide
Follow the Kobuki Hardware Assembly Guide.
Nav2 - Slamcore Integration
Traditionally Nav2 requires the following components to be in place:
An occupancy grid map of the environment, either generated ahead of time, or live.
A global planner and a controller (also known as local planner in ROS1) which guide your robot from the start to the end location. The default choice for these are NavFn for the global planner and DWB for the controller. Other available options are discussed in the Selecting the Algorithm Plugins section of the Nav2 docs.
A global and a local costmap, which assign traversal costs to cells of the aforementioned grid map so that the planner prefers or avoids certain routes in the map.
A localisation module, such as SLAM Toolbox or AMCL.
As discussed earlier, we’ll be using Slamcore software to generate a map of the environment as well as to localise the robot within it. On top of that, we’ll use the NavFn global planner and DWB controller for navigation. Lastly, we will be using the local point cloud published by our software for obstacle avoidance.
Positioning information is transmitted to Nav2 using TF, so we will need to make sure the correct transforms are set up and being broadcast for Nav2 to function correctly. A short introduction to the required transforms is provided in the Nav2 Setting Up Transformations tutorial page. As explained in the drop-down below, the Slamcore ROS Wrapper abides by REP-105 and will, by default, publish both the map \(\rightarrow\) odom and odom \(\rightarrow\) base_link transforms required for navigation.
Abiding by REP-105 - the map and odom frames
Note
In ROS, REP-105 specifies the following coordinate frame conventions:
base_link - Frame rigidly attached to the mobile robot base.
odom - World-fixed frame in which the pose of a mobile platform can drift over time.
map - World-fixed frame, with Z axis pointing upwards, in which the pose of a mobile platform should not significantly drift over time. This frame is not continuous and may contain pose jumps.
When using the Slamcore ROS Wrappers, the map and odom frames are equivalent to the Slamcore World frame, and the base_link frame is equivalent to the Slamcore Body frame. Therefore, the reported pose will be that of the Body frame, attached to the camera.
Many popular ROS frameworks, like the navigation stack, abide by this ROS coordinate frames convention, and thus expect two transformations from which they can get pose information:
map \(\rightarrow\) base_link
odom \(\rightarrow\) base_link
where base_link is the main frame of reference of the robot platform in use.
This way, a ROS node that’s interested in the pose of the robot can query either map \(\rightarrow\) base_link or odom \(\rightarrow\) base_link. If it queries the former, it will get the most accurate estimation of the robot pose; however, that may include discontinuities or jumps due to, for example, loop closures (e.g. Slamcore’s SLAM mode). On the other hand, the odom \(\rightarrow\) base_link transform is guaranteed to change smoothly but may accumulate drift over time (e.g. Slamcore’s VIO mode).
In a traditional robot setup, the localisation module, for example slam_toolbox, will ingest lidar sensor data and the latest odom \(\rightarrow\) base_link transform, as published by an odometry node (e.g. dead reckoning via wheel odometry), to calculate map \(\rightarrow\) base_link and eventually publish the map \(\rightarrow\) odom transform to abide by REP-105.
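Conceptually, a consumer obtains map \(\rightarrow\) base_link by chaining the two published transforms. The sketch below shows this composition for 2D poses in plain Python, with made-up illustrative numbers and no ROS dependencies; TF performs the equivalent chaining in 3D:

```python
import math

def compose(t_ab, t_bc):
    """Compose two 2D poses (x, y, yaw): frame A -> B with frame B -> C, giving A -> C."""
    xab, yab, thab = t_ab
    xbc, ybc, thbc = t_bc
    # Rotate the child translation into the parent frame, then add the offsets.
    x = xab + math.cos(thab) * xbc - math.sin(thab) * ybc
    y = yab + math.sin(thab) * xbc + math.cos(thab) * ybc
    return (x, y, thab + thbc)

# map -> odom: a small correction, e.g. applied after a loop closure
map_T_odom = (0.20, -0.10, math.radians(5.0))
# odom -> base_link: the smooth (but drifting) odometry estimate
odom_T_base = (3.00, 1.50, math.radians(30.0))

# map -> base_link: the most accurate pose, which may jump when map -> odom is corrected
map_T_base = compose(map_T_odom, odom_T_base)
print(tuple(round(v, 3) for v in map_T_base))  # → (3.058, 1.656, 0.611)
```

A jump in the map \(\rightarrow\) odom correction changes map \(\rightarrow\) base_link instantly, while odom \(\rightarrow\) base_link keeps evolving smoothly, which is exactly the trade-off described above.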
To abide by this standard and also increase the overall accuracy of these transforms, the Slamcore ROS Wrappers incorporate the latest odometry information and publish both the map \(\rightarrow\) odom and odom \(\rightarrow\) base_link transforms. This way the user can access the map \(\rightarrow\) odom transform, as well as a smooth odom \(\rightarrow\) base_link transform that uses information from the visual and inertial sensors and wheel odometry (if a VIK calibration has been generated). This is preferred over just using the odom \(\rightarrow\) base_link transform provided by a wheel odometry node, which would be less accurate and robust.
For more on Slamcore’s frames of reference convention, see Coordinate Frames Convention.
As seen above, Nav2 has similar requirements to the ROS1 Navigation Stack - you need a map, some sort of positioning (TF) and some sensor streams for obstacle avoidance, and these are all provided by our software. The main difference with ROS1 is that Nav2 no longer uses the move_base finite state machine and instead uses Behaviour Trees to call modular servers to complete an action (e.g. compute a path, navigate…). This allows the user to easily configure the navigation behaviour using plugins in a behaviour tree XML file. A detailed comparison with the ROS1 Navigation Stack can be found in the ROS to ROS 2 Navigation Nav2 docs page.
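To give a flavour of this, the fragment below is a simplified behaviour tree, loosely modelled on Nav2's default navigate-to-pose tree. Node names follow the Nav2 BT node library, but this is an illustrative sketch only (recovery branches are omitted); refer to the Nav2 documentation for the full default trees.

```xml
<root main_tree_to_execute="MainTree">
  <BehaviorTree ID="MainTree">
    <PipelineSequence name="NavigateWithReplanning">
      <!-- Re-plan the global path at 1 Hz -->
      <RateController hz="1.0">
        <ComputePathToPose goal="{goal}" path="{path}" planner_id="GridBased"/>
      </RateController>
      <!-- Track the latest path with the controller -->
      <FollowPath path="{path}" controller_id="FollowPath"/>
    </PipelineSequence>
  </BehaviorTree>
</root>
```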

Fig. 71 Slamcore integration into Nav2
Nav2’s parameters and plugins, which can be configured for your unique use case, have been included and can be easily modified in the nav2-demo-params yaml file for each robot in our repository. These parameters are based on the default configuration provided in the nav2_params.yaml file from the Nav2 repository. Specific details about Nav2 configuration and obstacle avoidance parameters when using Slamcore can be found in the Nav2 Configuration section of the Nav2 Integration Guide page.
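As a point of reference, the upstream Nav2 defaults select the NavFn planner and DWB controller plugins along the following lines. This is an illustrative excerpt of the standard nav2_params.yaml structure, not the exact contents of our files; see the nav2-demo-params file for each robot in the repository for the actual values used.

```yaml
# Illustrative excerpt, based on the upstream nav2_params.yaml defaults
planner_server:
  ros__parameters:
    planner_plugins: ["GridBased"]
    GridBased:
      plugin: "nav2_navfn_planner/NavfnPlanner"

controller_server:
  ros__parameters:
    controller_plugins: ["FollowPath"]
    FollowPath:
      plugin: "dwb_core::DWBLocalPlanner"
```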
How to Set Up and Run the Examples
To set up the examples and learn more about any changes required to your robot setup, visit the next page, Nav2 Integration Guide.