Nav2 Integration Guide
This page guides users through the example implementations provided in the slamcore-ros2-examples repository, which integrate the Slamcore SLAM algorithms into Nav2 (the ROS 2 Navigation Stack), using Slamcore as the core component to map the environment as well as to provide accurate positioning of the robotic platform, in ROS 2 Foxy, Galactic or Humble.
The tutorial details how to run the examples natively on Ubuntu 20.04 or using a Docker container for systems which do not support Ubuntu 20.04. Instructions are provided for the iRobot Create 3, Clearpath Robotics TurtleBot 4 and Yujin Robot Kobuki robots; however, using the example launch and configuration files along with these instructions should allow you to integrate Slamcore with Nav2 easily on any robotic platform.
Note
This page only covers how to run the examples; to learn more about the Slamcore ↔︎ Nav2 Integration and supported hardware, first see Nav2 Integration Overview.
Outline
Following is the list of steps we will cover in this tutorial.

Fig. 72 Outline of the demo
We’ll delve into each one of these steps in more detail in the next sections.
[OPTIONAL] Visual-Inertial-Kinematic Calibration, to improve the overall performance.
Mapping and Live Navigation while teleoperating your robot. For detailed steps on map creation and navigation, see Navigation in Single Session SLAM Mode or Navigation in Localisation Mode Using a Pre-recorded Map.
Interact with the Navigation Demo, set waypoints and navigation goals using our dedicated RViz2 launch file.
Robot-specific Features and Changes When Using Slamcore
This section outlines which features from each robot we will take advantage of and which changes are required when running these demos, compared to the default setup that is provided when first using one of the selected robot platforms.
We currently only support the Create 3 robot using ROS 2 Galactic.
Minimal changes are required to use Slamcore and Nav2 with the Create 3 robot. The robot must be using firmware G4.1 or newer. We will use the wheel odometry from the robot by subscribing to the Create's /odom topic to improve our localisation robustness; however, we will disable the odom TF publishing of the robot, as our SLAM will take care of publishing the base_link ↔ odom transform used for navigation.
The Create 3 implements a number of safety features and reflexes; for example, the robot will reverse slightly when it bumps into something or detects a cliff. These settings can be configured through the ROS 2 parameter server. We will keep all the default settings enabled; the only setting we will change for these examples is the safety_override parameter, which is exposed in the Create 3 webserver. We will change this setting from none to backup_only to allow the robot to drive backwards without triggering the cliff safety backup limit. You may keep this parameter in the default setting if you prefer.
We will also change the default RMW_IMPLEMENTATION being used from rmw_cyclonedds_cpp to rmw_fastrtps_cpp, as we have found this Data Distribution Service (DDS) implementation to be more reliable when using corporate Wi-Fi networks. Again, you may keep this parameter in the default setting if you prefer.
Lastly, instead of publishing velocity commands directly to the Create's /cmd_vel topic, we will implement a cmd_vel_mux node which listens to incoming Twist messages from different topics and chooses the highest priority one to republish to the /cmd_vel topic, to avoid clashes between e.g., teleoperation commands and the navigation stack. The cmd_vel_mux priorities are set via a YAML configuration file.
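For illustration, a cmd_vel_mux configuration of this kind might look like the sketch below, where a higher priority value wins when several sources publish at once. The topic names, timeouts and priority values here are assumptions for illustration only; refer to the YAML file shipped in the slamcore-ros2-examples repository for the values actually used in these demos.

cmd_vel_mux:
  ros__parameters:
    subscribers:
      navigation:                 # Nav2 output, lower priority
        topic: "cmd_vel_nav"
        timeout: 0.5
        priority: 1
      joystick:                   # teleoperation, overrides navigation
        topic: "cmd_vel_joy"
        timeout: 0.5
        priority: 10
    publisher: "cmd_vel"          # the muxed output the robot listens to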
Instructions on how to configure these changes are provided in the following sections.
We currently only support the TurtleBot 4 robot using ROS 2 Galactic. The TurtleBot 4 robot comes with a number of turtlebot4 ROS 2 packages pre-installed on the Raspberry Pi, as mentioned in the TurtleBot 4 Documentation. These packages mainly add functionality to bring up the components that are included with the robot, such as the 2D lidar and OAK-D camera, allow control of the buttons and display on the TurtleBot 4 Standard (these are not present on the Lite version), and provide example launch and configuration files for using SLAM and navigation on the TurtleBot 4 with the aforementioned 2D lidar and OAK-D camera.
When using Slamcore on the TurtleBot 4, our goal was to maintain all the original functionality, making it easy to revert any changes necessary for our integration. As a result, all the changes have been documented below, and instructions on how to apply and revert them are provided further down the page.
Note
The TurtleBot 4 uses a Create 3 robot as a base, therefore, the same changes that are described in our Create 3-only example will be applied for this example. These have been included below as well for completeness.
Documented changes:
- We will not be using the lidar and camera on board the TurtleBot 4. We will replace the camera with an Intel RealSense D435i.
- We will use Slamcore Visual-Inertial-Kinematic SLAM using the RealSense D435i camera and the wheel odometry from the Create 3, instead of the 2D lidar SLAM with slam_toolbox. Similar to the Create 3-only example, we will subscribe to the Create's /odom topic to integrate the wheel odometry data and disable the odom TF publishing from the Create 3 robot base, as Slamcore will take care of publishing both the odom ↔︎ base_link and map ↔︎ odom frame transforms. Additionally, note that map generation, saving and loading is all managed through the Slamcore ROS 2 Wrapper.
- We will not use the turtlebot4_navigation package or its launch and configuration files. Instead, we will use our slamcore_ros2_turtlebot4_example package from slamcore-ros2-examples, which provides similar navigation launch and configuration files but using our SLAM.
- We will use the turtlebot4_description package for the TurtleBot 4 urdf model, but we will replace the OAK-D with a RealSense camera. Our full URDF models for the TurtleBot 4 Standard and Lite can be found in the slamcore_ros2_turtlebot_example/descriptions folder.
- We will edit the turtlebot4_bringup launch files to not launch the lidar and OAK-D camera. With these new launch files, we will also bring up our modified robot_description which includes a RealSense camera, joy_teleop with slightly different controls and a cmd_vel_mux so that we can set priorities for the different incoming commands, i.e. teleoperation and autonomous navigation. You can find the new launch files in the slamcore_ros2_turtlebot_example/launch folder.
- We will disable the default bringup service turtlebot4.service that is installed on the Raspberry Pi using robot_upstart and runs on boot, and will replace it with a new slamcore_turtlebot4.service based on our new bringup launch files. Instructions on how to set this up are provided further below.
- We will still retain the button and screen functionality on the TurtleBot 4 Standard.
- We will use the turtlebot4_diagnostics package; however, diagnostics for the OAK-D camera and lidar will be broken, as we will be using a RealSense D435i instead, for which we have not implemented diagnostics. Our SLAM, however, does publish diagnostics info on the /diagnostics topic.
- We will not use the turtlebot4_viz package. We have our own RViz2 config to visualize the robot during navigation, as well as our own launch file to visualize the static robot model. These are essentially the same as the ones provided by turtlebot4 but they load our own robot model.
- We will not support the turtlebot4_simulator package.
- The Create 3 implements a number of safety features and reflexes; for example, the robot will reverse slightly when it bumps into something or detects a cliff. These settings can be configured through the ROS 2 parameter server. We will keep all the default settings enabled; the only setting we will change for these examples is the safety_override parameter, which is exposed in the Create 3 webserver. We will change this setting from none to backup_only to allow the robot to drive backwards without triggering the cliff safety backup limit. You may keep this parameter in the default setting if you prefer.
- We will change the default RMW_IMPLEMENTATION being used from rmw_cyclonedds_cpp to rmw_fastrtps_cpp, as we have found this DDS to be more reliable when using corporate Wi-Fi networks. Again, you may keep this parameter in the default setting if you prefer.
- Lastly, instead of publishing velocity commands directly to the Create's /cmd_vel topic, we will implement a cmd_vel_mux node which listens to incoming Twist messages from different topics and chooses the highest priority one to republish to the /cmd_vel topic, to avoid clashes between e.g., teleoperation commands and the navigation stack. The cmd_vel_mux priorities are set via a YAML configuration file.
Instructions on how to configure these changes are provided in the following sections.
We currently only support the Kobuki robot using ROS 2 Foxy. Note that the Kobuki robot does not run ROS 2 onboard, so the motor control nodes will run on your compute board of choice, which must be connected via USB to the robot.
We will use the kobuki_node, kobuki_description, kobuki_safety_controller, kobuki_bumper2pc and kobuki_keyop packages from the kobuki_ros repository built from source, launching the nodes from these packages with our slamcore_ros2_kobuki_example launch files. In addition, similar to the other examples, we will implement a cmd_vel_mux package which selects among a number of incoming Twist messages, choosing the highest priority one to republish to the output /cmd_vel topic. The cmd_vel_mux priorities are set via a YAML configuration file.
In our slamcore_ros2_kobuki_example package, the kobuki_setup_comms_launch.py launch file will bring up the kobuki_node motor controller, the cmd_vel_mux and the kobuki_safety_controller, which will cause the robot to reverse if it hits something with a bumper or detects a cliff. The safety controller will also stop the wheels if the robot is picked up.
Our kobuki_live_navigation_launch.py launch file launches our SLAM, Nav2, the robot_state_publisher, which loads our robot model URDF, and the kobuki_bumper2pc node, which publishes any bumper presses as point cloud points in front of the robot, so that any bumper events can be taken into account and added to the costmaps during navigation, allowing the robot to replan its path.
Initial Robot Set Up and Configuration
This section details how to apply the changes described in the previous section and configure your robot base and compute board to work with our slamcore-ros2-examples metapackage. If using a different robot platform, you may skip this section.
The Create 3 runs ROS 2 directly on board. Make sure you follow the Create 3 Initial Setup Guide and are using Create 3 Galactic firmware version G4.1 or newer with our examples. Note that our examples do not currently support ROS 2 Humble.
Once you have connected the Create 3 to Wi-Fi and can access the webserver, you will need to make the following changes:
- Set a ROS 2 Domain ID if desired. Note this Domain ID will have to be the same on the Create 3, your compute board and your visualization laptop for them to communicate correctly. You can do this on your compute board and laptop by running, e.g., export ROS_DOMAIN_ID=20 in your terminal or adding that command to the end of your ~/.bashrc file.
- Do not set a ROS 2 Namespace, as our launch files are meant to be used with the Create 3 with no namespace. You may add a namespace and update our launch files manually if desired.
- We recommend changing the RMW_IMPLEMENTATION to rmw_fastrtps_cpp, as we have found this to work better with corporate networks. Note the RMW_IMPLEMENTATION will have to be the same on the Create 3, your compute board and your visualization laptop for them to communicate correctly. You can do this on your compute board and laptop by running, e.g., export RMW_IMPLEMENTATION=rmw_fastrtps_cpp in your terminal or adding it to your ~/.bashrc file.
- Replace the yaml text under Application ROS 2 Parameters File with the following:

motion_control:
  ros__parameters:
    # safety_override options are
    # "none" - standard safety profile, robot cannot backup more than an inch because of lack of cliff protection in rear, max speed 0.306m/s
    # "backup_only" - allow backup without cliff safety, but keep cliff safety forward and max speed at 0.306m/s
    # "full" - no cliff safety, robot will ignore cliffs and set max speed to 0.46m/s
    safety_override: "backup_only"
robot_state:
  ros__parameters:
    publish_odom_tfs: false

This will change the default safety_override from none to backup_only, to allow driving the robot backwards without cliff safety kicking in, and will disable the odom TF publishing from the Create 3, as Slamcore will take care of publishing that transform (note this feature is only available from G4.1 onward).
- Save all the changes and restart the application.
Note
If you’d like to use wheel odometry, make sure you set up Chrony on your compute board following the Network Time Protocol page from the Create 3 docs, to ensure the Create 3 and compute board are on the same clock.
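For reference, a minimal chrony configuration on the compute board might include lines like the sketch below, which serve the board's clock to the robot even when offline. The subnet shown is an assumption based on the Create 3's default USB network; use the values given on the Create 3 Network Time Protocol page for your connection type.

# /etc/chrony/chrony.conf (excerpt)
allow 192.168.186.0/24   # let the Create 3 query this machine for time
local stratum 10         # keep serving time even without an upstream source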
Note
The TurtleBot 4 uses a Create 3 robot as a base, therefore, the same changes that are described in our Create 3-only example will be applied for this example. These have been included below as well for completeness.
Follow the TurtleBot 4 Basic Setup tutorial to bring up your robot and connect it to Wi-Fi. We currently only support the TurtleBot 4 with ROS 2 Galactic, so make sure the Raspberry Pi on the robot is running Ubuntu 20.04 and ROS 2 Galactic. Similarly, make sure you update the Create 3 base through its webserver to Create 3 Galactic firmware version G4.1 or newer. Our examples do not currently support ROS 2 Humble.
If you haven’t done so already, the first step is to swap the TurtleBot’s OAK-D camera with a RealSense D435i. The RealSense D435i can be mounted on the existing camera mount, you will need 2x M3 5mm screws. If you have longer screws you can use some nuts or washers to ensure the camera is secured correctly.
Once you have connected the TurtleBot 4 to Wi-Fi and can access the Create 3 webserver (Accessing the Create 3 webserver), you will need to make the following changes to the Create 3 configuration:
- Set a ROS 2 Domain ID if desired. Note this Domain ID will have to be the same on the Create 3, your compute board and your visualization laptop for them to communicate correctly. You can do this on your compute board and laptop by running, e.g., export ROS_DOMAIN_ID=20 in your terminal or adding that command to the end of your ~/.bashrc file.
- Do not set a ROS 2 Namespace, as our launch files are meant to be used with the Create 3 with no namespace. You may add a namespace and update our launch files manually if desired.
- We recommend changing the RMW_IMPLEMENTATION to rmw_fastrtps_cpp, as we have found this to work better with corporate networks. Note the RMW_IMPLEMENTATION will have to be the same on the Create 3, your compute board and your visualization laptop for them to communicate correctly. You can do this on your compute board and laptop by running, e.g., export RMW_IMPLEMENTATION=rmw_fastrtps_cpp in your terminal or adding it to your ~/.bashrc file.
- Replace the yaml text under Application ROS 2 Parameters File with the following:

motion_control:
  ros__parameters:
    # safety_override options are
    # "none" - standard safety profile, robot cannot backup more than an inch because of lack of cliff protection in rear, max speed 0.306m/s
    # "backup_only" - allow backup without cliff safety, but keep cliff safety forward and max speed at 0.306m/s
    # "full" - no cliff safety, robot will ignore cliffs and set max speed to 0.46m/s
    safety_override: "backup_only"
robot_state:
  ros__parameters:
    publish_odom_tfs: false

This will change the default safety_override from none to backup_only, to allow driving the robot backwards without cliff safety kicking in, and will disable the odom TF publishing from the Create 3, as Slamcore will take care of publishing that transform (note this feature is only available from G4.1 onward).
- Save all the changes and restart the application.
If you have set a ROS_DOMAIN_ID or changed the RMW_IMPLEMENTATION on the Create 3, you will need to do this on the Raspberry Pi (and your laptop) as well, to ensure the Pi, the robot and your laptop can communicate correctly.
- ssh into the Pi and open the .bashrc file with e.g. vim:

$ vim ~/.bashrc

- Change the RMW_IMPLEMENTATION from rmw_cyclonedds_cpp to rmw_fastrtps_cpp. Comment out the CYCLONEDDS_URI line, as we are no longer using Cyclone DDS, and, finally, add a line to set your chosen ROS_DOMAIN_ID.

export RMW_IMPLEMENTATION=rmw_fastrtps_cpp
#export CYCLONEDDS_URI=/etc/cyclonedds_rpi.xml
export ROS_DOMAIN_ID=20

- Save and exit. Then source your updated .bashrc file and restart the ROS 2 daemon.

$ source ~/.bashrc
$ ros2 daemon stop
$ ros2 daemon start
Lastly, we will disable the pre-installed TurtleBot 4 bringup service:
The Raspberry Pi has a systemd bringup service that will launch the standard.launch or lite.launch file from the turtlebot4_bringup package on boot. We will set up our own systemd service that will launch our custom bringup launch file, so we want to disable the existing one so that they don't clash.

$ sudo systemctl disable turtlebot4.service --now

You can check the status of the service with:

$ sudo systemctl status turtlebot4.service

We will look into how to set up our new service in the next section.
As the Kobuki robot does not have a configurable system on board, no changes are necessary before setting up our slamcore-ros2-examples metapackage.
slamcore-ros2-examples Metapackage Set Up
You will need to download the Slamcore C++ API and the Slamcore ROS 2 wrapper, regardless of whether you would like to run this demo on native Ubuntu 20.04 or through a docker container. See the Getting Started page for details on how to download the C++ API and ROS 2 Wrapper Debian packages. Installation of the Slamcore Tools package on a separate laptop can be useful to inspect recorded datasets and the generated occupancy grids.
Once you have downloaded the Slamcore C++ API and ROS 2 Wrapper, you may continue with setup and installation following the steps below. Specific instructions to set up Slamcore using Docker on one of the supported robot platforms are provided in the Docker tab.
The following tabs provide instructions on how to set up our software and examples for each robot, directly on an Ubuntu 20.04 system.
On Ubuntu 20.04, a working ROS 2 installation is required before installing the Slamcore ROS 2 Wrapper. Follow the steps on the Slamcore ROS 2 Wrapper page for details on how to install ROS 2 Galactic and the Slamcore ROS 2 Wrapper.
Set up ROS 2 workspace
You will have to create a new ROS 2 workspace by cloning the slamcore-ros2-examples repository. This repository holds all the navigation-related nodes and configuration for enabling the demo.
$ git clone git@github.com:slamcore/slamcore-ros2-examples
Cloning into 'slamcore-ros2-examples'...
Set up Binary Dependencies
Next, install the Create 3-specific dependencies found in the
dependencies-create3.txt
file.
$ cd slamcore-ros2-examples
$ sudo apt install $(sed "s/ROS_VER/galactic/g" dependencies-create3.txt) -y
Clone additional repos using Vcstool
Before compiling the workspace, install vcstool, which is used for fetching the additional ROS 2 source packages required for this example.
$ sudo apt-get install python3-vcstool
Then run vcstool to import the repos found in the repos-create3.yaml file.
$ vcs import src < repos-create3.yaml
Remove unnecessary packages and colcon build
We recommend removing the unnecessary source packages which are not required, such as slamcore_ros2_kobuki_example and slamcore_ros2_turtlebot4_example, as they have a different set of dependencies which we have not installed in this case and could cause colcon build to fail. After doing that, we can build the workspace from the root slamcore-ros2-examples directory.
$ cd src
$ rm -rf slamcore_ros2_kobuki_example slamcore_ros2_turtlebot4_example
$ cd ..
$ colcon build
Source the new workspace environment
After building the workspace, you will need to source the setup.bash file. You will have to do this every time you open a new terminal, to “load” the workspace and its launch files, unless you add the command to your ~/.bashrc.
$ source install/setup.bash
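If you prefer not to source the file manually in each terminal, you can append the command to your ~/.bashrc; the workspace path below is an assumption, so adjust it to wherever you cloned the repository.

$ echo "source ~/slamcore-ros2-examples/install/setup.bash" >> ~/.bashrc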
On Ubuntu 20.04, a working ROS 2 installation is required before installing the Slamcore ROS 2 Wrapper. Follow the steps on the Slamcore ROS 2 Wrapper page for details on how to install ROS 2 Galactic and the Slamcore ROS 2 Wrapper.
Set up ROS 2 workspace
You will have to create a new ROS 2 workspace by cloning the slamcore-ros2-examples repository. This repository holds all the navigation-related nodes and configuration for enabling the demo.
$ git clone git@github.com:slamcore/slamcore-ros2-examples
Cloning into 'slamcore-ros2-examples'...
Set up Binary Dependencies
Next, install the TurtleBot 4-specific dependencies found in the dependencies-turtlebot4.txt file.
$ cd slamcore-ros2-examples
$ sudo apt install $(sed "s/ROS_VER/galactic/g" dependencies-turtlebot4.txt) -y
Clone additional repos using Vcstool
Before compiling the workspace, install vcstool, which is used for fetching the additional ROS 2 source packages required for this example.
$ sudo apt-get install python3-vcstool
Then run vcstool to import the repos found in the repos-turtlebot4.yaml file.
$ vcs import src < repos-turtlebot4.yaml
Remove unnecessary packages and colcon build
We recommend removing the unnecessary source packages which are not required, such as slamcore_ros2_kobuki_example and slamcore_ros2_create3_example, as they have a different set of dependencies which we have not installed in this case and could cause colcon build to fail. After doing that, we can build the workspace from the root slamcore-ros2-examples directory.
$ cd src
$ rm -rf slamcore_ros2_kobuki_example slamcore_ros2_create3_example
$ cd ..
$ colcon build
Source the new workspace environment
After building the workspace, you will need to source the setup.bash file. You will have to do this every time you open a new terminal, to “load” the workspace and its launch files, unless you add the command to your ~/.bashrc.
$ source install/setup.bash
Install Slamcore’s bringup service
In the previous section, we disabled the default turtlebot4.service systemd service, which runs on boot, as we want a different file to be launched on boot for our example. If you haven't disabled that yet, you can do so with:
$ sudo systemctl disable turtlebot4.service --now
You can check the status of the service with:
$ sudo systemctl status turtlebot4.service
We will create a new service which will launch our turtlebot4_bringup_standard.launch.py file or turtlebot4_bringup_lite.launch.py file, depending on your robot version. This will launch the TurtleBot 4 nodes, diagnostics, robot_description, cmd_vel_mux and PS4 teleop (note PS4 teleop is only brought up for the Standard version and will need to be launched manually for the Lite version). We will not bring up the lidar and OAK-D launch file as these will not be used for our demo.
To install the new service, we will use the robot_upstart package which we imported with vcstool. In the following command, we provide the launch file to start on boot (replace <version> with your robot version, standard or lite), the name of the service with the --job flag, the --ros_domain_id and --rmw we are using on the Pi, and finally we also set the --symlink flag, so that if we make any changes to our launch file these will also be present on boot.
$ ros2 run robot_upstart install slamcore_ros2_turtlebot4_example/launch/turtlebot4_bringup_<version>.launch.py \
> --job slamcore_turtlebot4 \
> --ros_domain_id 20 \
> --rmw rmw_fastrtps_cpp \
> --symlink
Once the service is created you will be prompted to reload the systemd daemon and start the service.
$ sudo systemctl daemon-reload && sudo systemctl start slamcore_turtlebot4.service
You can check its status with:
$ sudo systemctl status slamcore_turtlebot4.service
If you want to disable the Slamcore service and revert to the default TurtleBot 4 service you can do:
$ sudo systemctl disable slamcore_turtlebot4.service --now
$ sudo systemctl enable turtlebot4.service --now
If you’d like to uninstall the Slamcore service completely:
$ sudo systemctl disable slamcore_turtlebot4.service
$ sudo systemctl stop slamcore_turtlebot4.service
$ ros2 run robot_upstart uninstall slamcore_turtlebot4
On Ubuntu 20.04, a working ROS 2 installation is required before installing the Slamcore ROS 2 Wrapper. Follow the steps on the Slamcore ROS 2 Wrapper page for details on how to install ROS 2 Galactic and the Slamcore ROS 2 Wrapper.
Set up ROS 2 workspace
You will have to create a new ROS 2 workspace by cloning the slamcore-ros2-examples repository. This repository holds all the navigation-related nodes and configuration for enabling the demo.
$ git clone git@github.com:slamcore/slamcore-ros2-examples
Cloning into 'slamcore-ros2-examples'...
Set up Binary Dependencies
Next, install the Kobuki-specific dependencies found in the dependencies-kobuki.txt file.
$ cd slamcore-ros2-examples
$ sudo apt install $(sed "s/ROS_VER/foxy/g" dependencies-kobuki.txt) -y
Clone additional repos using Vcstool
Before compiling the workspace, install vcstool, which is used for fetching the additional ROS 2 source packages required for this example.
$ sudo apt-get install python3-vcstool
Then run vcstool to import the repos found in the repos-kobuki.yaml file.
$ vcs import src < repos-kobuki.yaml
Remove unnecessary packages and colcon build
We recommend removing the unnecessary source packages which are not required, such as slamcore_ros2_create3_example and slamcore_ros2_turtlebot4_example, as they have a different set of dependencies which we have not installed in this case and could cause colcon build to fail. After doing that, we can build the workspace from the root slamcore-ros2-examples directory.
$ cd src
$ rm -rf slamcore_ros2_create3_example slamcore_ros2_turtlebot4_example
$ cd ..
$ colcon build
Source the new workspace environment
After building the workspace, you will need to source the setup.bash file. You will have to do this every time you open a new terminal, to “load” the workspace and its launch files, unless you add the command to your ~/.bashrc.
$ source install/setup.bash
Set up the Kobuki udev rules
Once you have set up the new workspace, make sure you set up the appropriate udev rules to communicate with the Kobuki.
Copy the 60-kobuki.rules file, available from the kobuki_ftdi repository, to the /etc/udev/rules.d directory of your system.
$ # Download the file or clone the repository
$ wget https://raw.githubusercontent.com/kobuki-base/kobuki_ftdi/devel/60-kobuki.rules
$ # Copy it to the correct directory
$ sudo cp 60-kobuki.rules /etc/udev/rules.d
$ sudo service udev reload
$ sudo service udev restart
You may need to reboot your machine for changes to take effect.
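Assuming the rule file creates the usual /dev/kobuki symlink when the robot is plugged in, you can verify that the rules took effect with:

$ ls -l /dev/kobuki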
This tab provides instructions on how to set up a Docker container with the Slamcore ROS 2 Wrapper and our examples on each of the supported robot platforms. This is the preferred method if you are using a compute board which does not support the installation of Ubuntu 20.04, or if you prefer an isolated installation.
When using Docker, no ROS 2 installation is required on the host system before installing the Slamcore ROS 2 Wrapper, as it will all be installed in a container during the build process.
We use the go-task/task task runner to build, run and interact with the Docker container. This task runner allows us to configure and build different containers depending on your selected robot platform.
- If you haven't already, download the Slamcore C++ API and Slamcore ROS 2 Wrapper for your host system from the Download Slamcore Software link on the Slamcore Portal.
- Clone the slamcore-ros2-examples repository.

$ git clone https://github.com/slamcore/slamcore-ros2-examples.git

- Install task, following the instructions in their Installation page. On Ubuntu you can use snap to install it, or you can install a standalone binary using a command like the following:

$ sh -c "$(curl --location https://taskfile.dev/install.sh)" -- -d -b ~/.local/bin
- After installation, you can see the available list of commands by running the following inside the slamcore-ros2-examples directory.

$ cd slamcore-ros2-examples
$ task --list
- You can now build the container environment for, e.g., the Create 3 example. To do so, first place the downloaded slamcore-dev and slamcore-ros packages under a new directory debs/.
- Set the SLAMCORE_ROS_PKG and SLAMCORE_DEV_PKG environment variables, defining the path to the Debian packages.

$ export SLAMCORE_ROS_PKG=<path>/<to>/debs/ros-galactic-slamcore-ros_23.01.13-focal1_arm64.deb
$ export SLAMCORE_DEV_PKG=<path>/<to>/debs/slamcore-dev_23.01.48-focal1_arm64.deb
- Set the ROS_VER environment variable to specify the ROS 2 version to build the container with: galactic for the Create 3 and TurtleBot 4 examples, foxy for the Kobuki example. For example:

$ export ROS_VER=galactic

- After setting these variables, you should be able to build the container for your robot:

$ task create3:build
$ task turtlebot4:build
$ task kobuki:build

- Once the container is built, you can run it with the matching run command (with ROS_VER still set), e.g.:

$ task create3:run
$ task turtlebot4:run
$ task kobuki:run

This should give you a bash prompt inside the container. You can run this command in multiple terminals.
- Now you should be in the container and should have /ros_ws in your $PATH. The /slamcore-ros2-examples directory on your machine has been mounted in the container inside the /ros_ws workspace. Therefore, files saved inside the /ros_ws/slamcore-ros2-examples directory in the container should appear in the /slamcore-ros2-examples directory on your machine.
- We will use vcstool to fetch the additional ROS 2 source packages needed for this demo. Run the import matching your robot:

~/ros_ws$ cd slamcore-ros2-examples/
~/ros_ws/slamcore-ros2-examples$ vcs import src < repos-create3.yaml
~/ros_ws/slamcore-ros2-examples$ vcs import src < repos-turtlebot4.yaml
~/ros_ws/slamcore-ros2-examples$ vcs import src < repos-kobuki.yaml
- Next, remove the unnecessary source packages which are not required: in the case of the Create 3 example, the slamcore_ros2_kobuki_example and slamcore_ros2_turtlebot4_example packages, as they have a different set of dependencies which we have not installed and could cause colcon build to fail.

Note
Alternatively, instead of removing the unnecessary packages as detailed below, you can ignore steps 11 and 12 and directly run colcon_build (note the underscore) inside the container to build the ROS packages. This is a command we have defined in the ~/.bashrc of the container; it uses colcon build and will automatically ignore the irrelevant packages based on the robot that was chosen when building the container. Details about this command can be found in the entrypoint.sh file.

To remove the irrelevant packages, run the pair of commands matching your robot:

~/ros_ws/slamcore-ros2-examples$ cd src
# For the Create 3 example:
~/ros_ws/slamcore-ros2-examples/src$ rm -rf slamcore_ros2_turtlebot4_example slamcore_ros2_kobuki_example
# For the TurtleBot 4 example:
~/ros_ws/slamcore-ros2-examples/src$ rm -rf slamcore_ros2_create3_example slamcore_ros2_kobuki_example
# For the Kobuki example:
~/ros_ws/slamcore-ros2-examples/src$ rm -rf slamcore_ros2_create3_example slamcore_ros2_turtlebot4_example
- After doing that, we can build the workspace from the root ros_ws directory.

~/ros_ws/slamcore-ros2-examples/src$ cd ~/ros_ws
~/ros_ws$ colcon build
- Finally, source the workspace. Note this will be required in every new terminal you open.

~/ros_ws$ source install/setup.bash
If you would like to exit the running container, simply use Ctrl + D or type exit. You may remove the container with task <robot>:purge or recreate it (remove the existing container and build a new one) with task <robot>:recreate.
Additional robot-specific steps
Any additional robot-specific tasks required to complete the initial setup are described below.
If you have set a specific ROS_DOMAIN_ID and RMW_IMPLEMENTATION on the Create 3, make sure you also use these inside the Docker container. You can add them to the ~/.bashrc.local file in the container and these variables will be sourced on startup of the container.
If you have set a specific ROS_DOMAIN_ID and RMW_IMPLEMENTATION on the Create 3, make sure you also use these inside the Docker container. You can add them to the ~/.bashrc.local file in the container and these variables will be sourced on startup of the container.
Also, note that when using Docker, you will have to launch the TurtleBot 4 bringup launch files (turtlebot4_bringup_standard.launch.py or turtlebot4_bringup_lite.launch.py) manually, instead of using a systemd service on boot.
Once you have set up your container, make sure you also set up the Kobuki udev rules correctly on your system (from outside the container) to communicate with the Kobuki, as detailed below.
Warning
In order to communicate with the Kobuki, you will also need to set up the appropriate udev rules. To do that, copy the 60-kobuki.rules file, available from the kobuki_ftdi repository, to the /etc/udev/rules.d directory of your system.
$ # Download the file or clone the repository
$ wget https://raw.githubusercontent.com/kobuki-base/kobuki_ftdi/devel/60-kobuki.rules
$ # Copy it to the correct directory
$ sudo cp 60-kobuki.rules /etc/udev/rules.d
$ sudo service udev reload
$ sudo service udev restart
You may need to reboot your machine for changes to take effect.
Visualization Machine Set Up
You may repeat the same steps outlined above on a second machine for visualization. We will then be able to take advantage of ROS 2’s network capabilities to visualize the topics being published by the robot on this second machine with minimal effort - as long as both machines are on the same network.
Note
You may use the Dockerfile provided even if your system is Ubuntu 20.04 for an easier, isolated setup.
Teleoperation
There are two ways to teleoperate a robot: keyboard teleoperation or joystick teleoperation, e.g. using a Bluetooth PS4 controller. Instructions on how to teleoperate each robot are provided in the tabs below.
To teleoperate the Create 3, first make sure the cmd_vel_mux node is running. On your compute board run:
$ ros2 launch slamcore_ros2_create3_example create3_cmd_vel_mux_launch.py
Then you can launch keyboard teleoperation or joystick teleoperation.
Keyboard Teleoperation
After setting up the slamcore-ros2-examples workspace on your laptop and installing the dependencies from dependencies-create3.txt, you can run keyboard teleoperation with:
$ ros2 run slamcore_ros2_create3_example create3_teleop_key
This script starts the teleop_twist_keyboard node, allowing you to drive the robot with the i, j, l and , keys via the terminal. The node will publish the commands on the ROS 2 network, which the cmd_vel_mux on the compute board will subscribe to.
Joystick Teleoperation
For joystick teleoperation, you will need a Bluetooth controller, e.g., a PS4 controller. First, make sure you pair the controller to your compute board and that it connects automatically and reliably. Then launch joy_teleop on your compute board with:
$ ros2 launch slamcore_ros2_create3_example create3_teleop_joy_launch.py
To drive the robot, press and hold the L1 button (dead man switch) and use the left joystick for Fwd/Back and the right joystick for steering (see PS4 Button Mapping). You can also hold the R1 button to drive the robot at a higher speed.
The current configuration is meant to be used with a PS4 controller, you can change the joystick configuration or create a new one using the following example: ps4-joy.yaml.
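For reference, a joystick mapping of this kind generally consists of axis indices, scales and a dead man button. The sketch below uses teleop_twist_joy-style parameters with assumed axis and button indices, purely as an illustration of the structure; consult ps4-joy.yaml for the actual mapping used by these examples.

teleop_twist_joy:
  ros__parameters:
    enable_button: 4          # L1, dead man switch
    enable_turbo_button: 5    # R1, higher speed
    axis_linear.x: 1          # left stick up/down -> Fwd/Back
    scale_linear.x: 0.4
    scale_linear_turbo.x: 0.8
    axis_angular.yaw: 3       # right stick left/right -> steering
    scale_angular.yaw: 1.0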
On the TurtleBot 4 platforms, if you set up the slamcore_turtlebot4.service bringup service as explained in the previous section, the cmd_vel_mux node will be launched on boot; otherwise, you will have to launch turtlebot4_cmd_vel_mux_launch.py manually for the robot to receive the teleoperation commands.
Follow the instructions below depending on the version of TurtleBot you are using.
If you have not enabled the slamcore_turtlebot4.service on boot, bring up the essential TurtleBot 4 nodes, which include the cmd_vel_mux and joy_teleop nodes, with:

$ ros2 launch slamcore_ros2_turtlebot4_example turtlebot4_bringup_standard.launch.py
Keyboard Teleoperation
After setting up the slamcore-ros2-examples workspace on your laptop and installing the dependencies from dependencies-turtlebot4.txt, you can run keyboard teleoperation with:
$ ros2 run slamcore_ros2_turtlebot4_example turtlebot4_teleop_key
This script starts the teleop_twist_keyboard node, allowing you to drive the robot with the i, j, l and , keys via the terminal. The node will publish the commands on the ROS 2 network, which the cmd_vel_mux on the compute board will subscribe to.
Joystick Teleoperation
The TurtleBot 4 Standard includes a Bluetooth joystick controller which should already be paired to the Raspberry Pi. You can turn it on by pressing the central home button. The controller will show it has paired successfully when its light turns blue. If the controller does not pair or you'd like to pair a new controller, see the TurtleBot 4 Controller Setup Guide.
The turtlebot4_bringup_standard.launch.py launch file, which is brought up with the slamcore_turtlebot4.service on boot, will launch the cmd_vel_mux and joy_teleop nodes, which means that after you turn on the robot and connect the controller, you should be able to teleoperate the robot straight away.
You can check that the service is running correctly with:
$ sudo systemctl status slamcore_turtlebot4.service
Alternatively, if it is not running, you can launch joystick teleoperation manually with:
$ ros2 launch slamcore_ros2_turtlebot4_example turtlebot4_teleop_joy_launch.py
Warning
Make sure the cmd_vel_mux is also running if you launch joy_teleop manually.
To drive the robot, press and hold the L1 button (dead man switch) and use the left joystick for Fwd/Back and the right joystick for steering (see PS4 Button Mapping). You can also hold the R1 button to drive the robot at a higher speed.
The current configuration is meant to be used with a PS4 controller, you can change the joystick configuration or create a new one using the following example: ps4-joy.yaml.
The TurtleBot 4 Standard also adds the following options to the controller, allowing you to call a number of the Create's services and to control the menu on the on-board display:
controller:
a: ["Select"]
b: ["EStop"]
x: ["Back"]
y: ["RPLIDAR Motor"]
up: ["Scroll Up"]
down: ["Scroll Down"]
l2: ["Wall Follow Left"]
r2: ["Wall Follow Right"]
home: ["Dock", "Undock", "3000"]
If you have not enabled the slamcore_turtlebot4.service on boot, bring up the essential TurtleBot 4 nodes, which include the cmd_vel_mux:

Warning
On the TurtleBot 4 Lite, joy_teleop is not included in the bringup launch file and needs to be launched manually. Further details below.

$ ros2 launch slamcore_ros2_turtlebot4_example turtlebot4_bringup_lite.launch.py
Keyboard Teleoperation
After setting up the slamcore-ros2-examples workspace on your laptop and installing the dependencies from dependencies-turtlebot4.txt, you can run keyboard teleoperation with:
$ ros2 run slamcore_ros2_turtlebot4_example turtlebot4_teleop_key
This script starts the teleop_twist_keyboard node, allowing you to drive the robot with the i, j, l and , keys via the terminal. The node will publish the commands on the ROS 2 network, which the cmd_vel_mux on the compute board will subscribe to.
Joystick Teleoperation
The TurtleBot 4 Lite does not include a Bluetooth joystick controller, so you will have to pair a new one. The Bluetooth packages are not installed on the Lite's Raspberry Pi by default so, if you haven't done so already, you will need to install them with sudo bluetooth.sh (a TurtleBot 4 script that should be present on the Pi that comes with the robot) and then reboot the Pi, as explained in the TurtleBot 4 Joystick Teleoperation Tutorial. You can then pair a new controller following the TurtleBot 4 Controller Setup Guide.
The turtlebot4_bringup_lite.launch.py launch file which is brought up with the slamcore_turtlebot4.service on boot will launch the cmd_vel_mux node but not the joy_teleop node. Therefore, you will need to launch this manually to start joystick teleoperation:
$ ros2 launch slamcore_ros2_turtlebot4_example turtlebot4_teleop_joy_launch.py
Warning
Make sure the cmd_vel_mux is also running when you launch joy_teleop manually.
To drive the robot, press and hold the L1 button (dead man switch) and use the left joystick for Fwd/Back and the right joystick for steering (see PS4 Button Mapping). You can also hold the R1 button to drive the robot at a higher speed.
The current configuration is meant to be used with a PS4 controller, you can change the joystick configuration or create a new one using the following example: ps4-joy.yaml.
To teleoperate the Kobuki, first make sure the kobuki_node and cmd_vel_mux nodes are running. On your compute board run:
$ ros2 launch slamcore_ros2_kobuki_example kobuki_setup_comms_launch.py
Then you can launch keyboard teleoperation or joystick teleoperation.
Keyboard Teleoperation
After setting up the slamcore-ros2-examples workspace on your laptop and installing the dependencies from dependencies-kobuki.txt, you can run keyboard teleoperation with:
$ ros2 run slamcore_ros2_kobuki_example kobuki_teleop_key
This script starts the kobuki_keyop node, allowing you to drive the robot with the arrow keys via the terminal. The node will publish the commands on the ROS 2 network, which the cmd_vel_mux on the compute board will subscribe to.
Joystick Teleoperation
For joystick teleoperation, you will need a Bluetooth controller, e.g., a PS4 controller. First, make sure you pair the controller to your compute board and that it connects automatically and reliably. Then launch joy_teleop on your compute board with:
$ ros2 run slamcore_ros2_kobuki_example kobuki_teleop_joy
To drive the robot, press and hold the L1 button (dead man switch) and use the left joystick for Fwd/Back and the right joystick for steering (see PS4 Button Mapping).
Note
If the above mapping does not work with your joystick/driver, you may try the following alternative using the joystick_mode:=old argument:
$ # Press L1 and use the left joystick for Fwd/Back, right joystick for Left/Right
$ ros2 run slamcore_ros2_kobuki_example kobuki_teleop_joy --ros-args -p joystick_mode:=old
See the PS4 Button Mapping section in the Appendix for an illustration of the PS4 controller buttons to be used.
Visual-Inertial-Kinematic Calibration
To increase the overall accuracy of the pose estimation we will fuse the wheel-odometry measurements of the robot encoders into our SLAM processing pipeline. This also makes our positioning robust to kidnapping issues (objects partially or totally blocking the camera field of view) since the algorithm can now depend on the odometry to maintain tracking.
To enable the wheel-odometry integration, follow the corresponding tutorial: Wheel Odometry Integration. After the aforementioned calibration step, you will receive a VIK configuration file similar to the one shown below:
VIK configuration file
{
"Version": "1.1.0",
"Patch": {
"Base": {
"Sensors": [
{
"EstimationScaleY": false,
"InitialRotationVariance": 0.01,
"InitialRotationVariancePoorVisual": 0.01,
"InitialTranslationVariance": 0.01,
"InitialTranslationVariancePoorVisual": 1e-8,
"ReferenceFrame": "Odometry_0",
"ScaleTheta": 0.9364361585576232,
"ScaleX": 1.0009030227774692,
"ScaleY": 1.0,
"SigmaCauchyKernel": 0.0445684,
"SigmaTheta": 1.94824,
"SigmaX": 0.0212807,
"SigmaY": 0.00238471,
"TimeOffset": "42ms",
"Type": [
"Odometry",
0
]
}
],
"StaticTransforms": [
{
"ChildReferenceFrame": "Odometry_0",
"ReferenceFrame": "IMU_0",
"T": {
"R": [
0.5062407414595297,
0.4924240392715688,
-0.4916330719846658,
0.50944656222699
],
"T": [
0.0153912358519861,
0.2357725115741995,
-0.0873645490730017
]
}
}
]
}
},
"Position": {
"Backend": {
"Type": "VisualInertialKinematic"
}
}
}
Warning
The configuration file format for running SLAM on customized parameters has changed in v23.01. This affects all VIK calibration files previously provided to you. Please see JSON configuration file migration for more information.
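Once you have received the VIK configuration file, you can pass it to SLAM through the config_file launch argument used throughout this tutorial, for example (the Create 3 launch file is shown; the path is a placeholder):

$ ros2 launch slamcore_ros2_create3_example create3_slam_launch.py \
> config_file:=</path/to/vik-config.json>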
The slamcore/base_link ➞ base_link Transformation
To integrate the Slamcore coordinate frame (located at the camera's IMU) with the robot's base coordinate frame, usually base_link or base_footprint at the base of the robot between the wheels, we must set a slamcore/base_link \(\rightarrow\) base_link transformation. This will allow us to connect the map \(\rightarrow\) odom \(\rightarrow\) slamcore/base_link transform tree with the robot's static transform tree described in its URDF file. With this we will be able to display the robot model correctly.
If you are using one of the robots supported in these examples, with the camera mounted in the same position, you can use the existing slamcore_camera_to_robot_base_transform defined in the <robot>-nav2-demo-params.yaml file of each example. Our static_transform_pub_launch script then fetches and loads this static transform when included in a launch file, as seen, for example, in turtlebot4_slam_launch.py.
If you are using your own robot or have mounted the camera elsewhere, you will need to measure and define this transformation, as detailed in the drop down below.
How to set the slamcore/base_link ➞ base_link transformation
Before you can start navigating in the environment, you need to provide the transformation between the frame of reference of the robot base, base_link in our example, and the frame of reference of the Slamcore pose estimation algorithm, i.e., slamcore/base_link.
The translation and rotation parts of this transform should be specified in the xyz and rpy fields of the slamcore_camera_to_robot_base_transform parameter in the <robot>-nav2-demo-params.yaml config file. Examples can be found in our slamcore-ros2-examples repo. You may copy this file and modify the parameters to suit your setup. You can then load this file by passing in the path to the file using the params_file argument when launching navigation.
Order of Transforms in TF - Validation of specified transform
The order of transforms in the TF tree is as follows:
map \(\rightarrow\) odom \(\rightarrow\) slamcore/base_link \(\rightarrow\) base_link
Note that slamcore/base_link, i.e. the frame of the Slamcore pose estimation, is the parent of base_link. Thus, xyz and rpy should encode the transformation of base_link relative to the slamcore/base_link frame.
Also note that when the camera is pointing forwards and parallel to the robot platform surface, the axes of the slamcore/base_link frame should be:
Z pointing forwards
X pointing to the right side
Y pointing downwards
In contrast, the robot base_link axes commonly are as follows:
X pointing forwards
Y pointing to the left side
Z pointing upwards
You can also visually inspect the validity of your transformation using the view_model_launch.py file. Note this might take some time to load, during which RViz2 will be unresponsive. Here's the relative transform of the aforementioned frames in our setup:
$ ros2 launch slamcore_ros2_examples_common view_model_launch.py

You can also refer to the Troubleshooting section for a simplified version of the overall TF Tree.
The more accurate this transform the better, but for now a rough estimate will do. In our case, since the camera is placed 23cm above and 8.7cm in front of the base_link, we used the following values.
slamcore_camera_to_robot_base_transform:
ros__parameters:
parent_frame: slamcore/base_link
child_frame: base_link
xyz: [0.015, 0.236, -0.087]
rpy: [0.000, -1.571, 1.571]
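To sanity-check values like these before committing them to the params file, you can publish the same transform manually and inspect it with tf2. The command below uses the classic static_transform_publisher positional interface (x y z yaw pitch roll parent child) available in Foxy/Galactic, assuming the rpy list above is ordered roll, pitch, yaw:

$ ros2 run tf2_ros static_transform_publisher 0.015 0.236 -0.087 1.571 -1.571 0.000 slamcore/base_link base_link
$ # In another terminal, echo the transform to verify it
$ ros2 run tf2_ros tf2_echo slamcore/base_link base_link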
Note
If you are also integrating wheel odometry measurements, the slamcore/base_link \(\rightarrow\) base_link transform will be specified in two places: once in the slamcore_camera_to_robot_base_transform variables in <robot>-nav2-demo-params.yaml, and a second time in the kinematic parameters (see Visual-Inertial-Kinematic Calibration section) of the slam-config.json file. You can save some time and copy the T_SO.T section given by the VIK config file to the slamcore_camera_to_robot_base_transform xyz values in <robot>-nav2-demo-params.yaml. Note, however, that the calibration procedure cannot compute the height difference between the two frames, since it is not observable in planar motion. Therefore, you will have to measure and edit the (Y) value of slamcore_camera_to_robot_base_transform's xyz variable manually.
Mapping and Live Navigation
Now that the robot has been set up with our software, there are two ways of running the examples:
Navigation in Single Session SLAM Mode: this involves creating a map in real-time and navigating inside it as the robot explores a space.
Navigation in Localisation Mode Using a Pre-recorded Map: this involves creating a map first and then loading the map to run in localisation mode.
Navigation in Single Session SLAM Mode
In single session SLAM mode, the robot will generate a map as it moves around a space. When SLAM is stopped, the map will be discarded unless the slamcore/save_session service is called beforehand.
You can bring up the robot, SLAM and Nav2 in single session SLAM mode with the following commands on your robot.
Note
If you have created new SLAM and Nav2 configuration files, don't forget to pass these in with the config_file and params_file arguments respectively, to override the default ones. The config_file should contain the VIK calibration parameters which will enable the robot to use Visual-Inertial-Kinematic SLAM (if you have completed a VIK calibration) and any additional Slamcore configuration parameters, detailed in SLAM Configuration.
First, launch SLAM with:
$ ros2 launch slamcore_ros2_create3_example create3_slam_launch.py
Then, launch Nav2 with:
$ ros2 launch slamcore_ros2_create3_example create3_navigation_launch.py
Note
SLAM and Nav2 are launched in separate files, one after the other, to avoid overloading the DDS network when launching everything at the same time, which can cause the Create 3 robot to run out of memory. We have experienced this issue when using Fast DDS and have not tested Cyclone DDS. Also note that when running in Single Session SLAM mode, the order in which you launch SLAM and Nav2 does not matter.
First, launch SLAM with:
$ ros2 launch slamcore_ros2_turtlebot4_example turtlebot4_slam_launch.py
Then, launch Nav2 with:
$ ros2 launch slamcore_ros2_turtlebot4_example turtlebot4_navigation_launch.py
Note
SLAM and Nav2 are launched in separate files, one after the other, to avoid overloading the DDS network when launching everything at the same time, which can cause the Create 3 robot to run out of memory. We have experienced this issue when using Fast DDS and have not tested Cyclone DDS. Also note that when running in Single Session SLAM mode, the order in which you launch SLAM and Nav2 does not matter.
On the Kobuki, you can launch SLAM and Nav2 with a single launch file:
$ ros2 launch slamcore_ros2_examples kobuki_live_navigation_launch.py \
> config_file:=</path/to/slam/config/json> \
> params_file:=</path/to/params/yaml/>
Note
If you have already brought up the Kobuki with our kobuki_setup_comms_launch.py launch script, for example to teleoperate the robot, you can launch the above file with the comms argument set to false, to only launch the navigation and SLAM components.
$ ros2 launch slamcore_ros2_examples kobuki_live_navigation_launch.py \
> session_file:=<path/to/session/file> \
> config_file:=</path/to/slam/config/json> \
> params_file:=</path/to/params/yaml/> \
> comms:=false
Visualization
On your visualization machine, run the following to visualize the map being created in RViz2:
$ ros2 launch slamcore_ros2_create3_example create3_navigation_monitoring_launch.py
$ ros2 launch slamcore_ros2_turtlebot4_example turtlebot4_navigation_monitoring_launch.py
$ ros2 launch slamcore_ros2_kobuki_example kobuki_navigation_monitoring_launch.py
Once Nav2 is up and running, see Interact with the Navigation Demo to learn how to set single goals or multiple waypoints for navigation.
Save the map
If you would like to save the map to reuse it in the future, you can call the slamcore/save_session service from another terminal:
$ ros2 service call /slamcore/save_session std_srvs/Trigger
By default, the session file will be saved to the working directory from which <robot>_slam_launch.py was called, and SLAM will be paused while the file is being generated. If you would like the service to save the session file to a specific directory, you must set the session_save_dir parameter when launching <robot>_slam_launch.py.
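For example, assuming session_save_dir is passed like the other launch arguments in these examples (the Create 3 launch file and the path are placeholders):

$ ros2 launch slamcore_ros2_create3_example create3_slam_launch.py \
> session_save_dir:=</path/to/session/dir>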
Warning
Note that when using our Docker container, files should be saved inside the slamcore-ros2-examples directory of the container, as otherwise they will not be saved to your machine after exiting the container.
The next time you launch navigation, you can provide the path to the session file using the session_file argument, as explained in the section below.
Navigation in Localisation Mode Using a Pre-recorded Map
In localisation mode, we can navigate in a map that has been recorded previously. This map can either be generated from a recorded dataset or at the end of a previous live run by using the slamcore/save_session service as mentioned above. If running on a limited compute platform, it might be preferable to record a dataset and then generate the map on a more powerful machine. Check the drop-down below to see the benefits and disadvantages of creating a map live vs. from a recorded dataset.
Pros/Cons of generating a map live vs from a recorded dataset
Instead of first generating a dataset and then creating a session and map from that dataset as the previous sections have described, you could alternatively create a session at the end of a standard SLAM run. Compared to the approach described above this has a few pros and cons worth mentioning:
✅ No need to record a dataset, or move it to another machine and run SLAM there
✅ You can interactively see the map as it gets built and potentially focus on the areas that are under-mapped
❌ Generating a session at the end of the run may take considerably longer if you are running on a Jetson NX compared to running on an x86_64 machine.
❌ You can’t modify the configuration file and see its effects as you would when having separate dataset recording and mapping steps.
❌ If something goes wrong in the pose estimation or mapping procedure, you don’t have the dataset to further investigate and potentially report the issue back to Slamcore
The two options for map creation are detailed below.
We can generate a map by first recording a dataset and then processing it using slamcore_visualiser (GUI) or slamcore_dataset_processor (command line tool).
Record Dataset to Map the Environment
Assuming the main robot nodes are up and we can teleoperate the robot, we will use the dataset_recorder.launch.py launch file to capture a dataset that contains visual-inertial and depth, as well as kinematic information (if you have a VIK calibration).
To launch the Slamcore dataset recorder run:
$ ros2 launch slamcore_slam dataset_recorder.launch.py \
> override_realsense_depth:=true \
> realsense_depth_override_value:=true \
> odom_reading_topic:=/odom \
> ros_extra_params:=<path>/<to>/slamcore-ros2-examples/src/slamcore_ros2_examples_common/config/dataset-recorder-odom-override.yaml
$ ros2 launch slamcore_slam dataset_recorder.launch.py \
> override_realsense_depth:=true \
> realsense_depth_override_value:=true \
> odom_reading_topic:=/odom \
> ros_extra_params:=<path>/<to>/slamcore-ros2-examples/src/slamcore_ros2_examples_common/config/dataset-recorder-odom-override.yaml
$ ros2 launch slamcore_slam dataset_recorder.launch.py \
> override_realsense_depth:=true \
> realsense_depth_override_value:=true \
> odom_reading_topic:=/odom
Note that recording the kinematic measurements (by subscribing to a wheel odometry topic) is not necessary, since we can generate a map using purely the visual-inertial information from the camera. Kinematics will however increase the overall accuracy if recorded and used.
When you have covered all the space that you want to map, send a Ctrl-c signal to the application to stop. By default, the dataset will be saved in the current working directory, unless the output_dir argument is specified.
Warning
Note that when using our Docker container, files should be saved inside the slamcore-ros2-examples directory of the container, as otherwise they will not be saved to your machine after exiting the container.
You now have to process this dataset and generate the .session
file. In
our case, we compressed and copied the dataset to an x86_64 machine in order
to accelerate the overall procedure.
$ tar -czvf mydataset.tgz mydataset/
$ rsync --progress -avt mydataset.tgz <ip-addr-of-x86_64-machine>:
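Once transferred, extract the archive on the x86_64 machine before processing:
$ tar -xzvf mydataset.tgz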
Create Session and Map for Navigation
Once you have the (uncompressed) dataset on the machine on which you want
to do the processing, use the slamcore_visualiser
to process the whole
dataset and, at the end, save the resulting session.
$ # Launch slamcore_visualiser, enable mapping features - `-m`
$ slamcore_visualiser dataset \
> -u mydataset/ \
> -c /usr/share/slamcore/presets/mapping/default.json /<path>/<to>/<vik-calibration.json> \
> -m
Alternatively, you can use the slamcore_dataset_processor
command line
tool; however, you won’t be able to visualize the map being built.
$ # Launch slamcore_dataset_processor, enable mapping features `-m` and session saving `-s`
$ slamcore_dataset_processor dataset \
> -u mydataset/ \
> -c /usr/share/slamcore/presets/mapping/default.json /<path>/<to>/<vik-calibration.json> \
> -m \
> -s
Note
Refer to Step 2 - Prepare the mapping configuration file in case you want to tune the mapping configuration file in use.
Edit the Generated Session/Map
You can optionally use slamcore_session_explorer
and the editing tool of
your choice, e.g. Gimp
to create the final session and corresponding
embedded map. See Slamcore Session Explorer for more. When done, copy the
session file over to the machine that will be running SLAM, if not already
there.
Instead of recording a dataset, we can save a session map directly when running SLAM in Height Mapping/Single Session mode. If you are already running Nav2 in single session SLAM mode with the commands shown in Navigation in Single Session SLAM Mode, or if you simply launch our SLAM with the following:
$ ros2 launch slamcore_slam slam_publisher.launch.py generate_map2d:=true
you should be able to visualize the map being built in RViz2
by
subscribing to the /slamcore/map
topic. You can then call the
slamcore/save_session
service from another terminal to save the
current map that is being generated.
$ ros2 service call /slamcore/save_session std_srvs/Trigger
The session file will be saved to the current working directory by default.
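If the call appears to have no effect, you can first verify that the service is available and of the expected type; the Trigger response’s success and message fields also report the outcome of the call:
$ ros2 service type /slamcore/save_session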
Warning
Note that when using our Docker container, files should be saved inside the
slamcore-ros2-examples
directory of the container, as otherwise they will
not be saved to your machine after exiting the container.
Once you have the map, you can pass the path to the session file as an argument when launching navigation.
Note
If you have created new SLAM and Nav2 configuration files, don’t forget to
pass these in with the config_file
and params_file
arguments
respectively, to override the default ones. The config_file
should
contain the VIK calibration parameters which will enable the robot to use
Visual-Inertial-Kinematic SLAM (if you have completed a VIK calibration) and
any additional Slamcore configuration parameters, detailed in
SLAM Configuration.
First, load the navigation stack:
$ ros2 launch slamcore_ros2_create3_example create3_navigation_launch.py
Then, launch SLAM in localisation mode:
$ ros2 launch slamcore_ros2_create3_example create3_slam_launch.py \
> session_file:=</path/to/session/file> \
> config_file:=</path/to/slam/config/json> \
> params_file:=</path/to/params/yaml>
Warning
In localisation mode, the occupancy map is published by our software
with Quality of Service Durability Transient Local; therefore,
make sure you launch Nav2 before launching our SLAM. Launching
Nav2 after our SLAM might lead to Nav2 throwing an error saying the
map cannot be found.
First, load the navigation stack:
$ ros2 launch slamcore_ros2_turtlebot4_example turtlebot4_navigation_launch.py
Then, launch SLAM in localisation mode:
$ ros2 launch slamcore_ros2_turtlebot4_example turtlebot4_slam_launch.py \
> session_file:=</path/to/session/file> \
> config_file:=</path/to/slam/config/json> \
> params_file:=</path/to/params/yaml>
Warning
In localisation mode, the occupancy map is published by our software
with Quality of Service Durability Transient Local; therefore,
make sure you launch Nav2 before launching our SLAM. Launching
Nav2 after our SLAM might lead to Nav2 throwing an error saying the
map cannot be found.
$ ros2 launch slamcore_ros2_kobuki_example kobuki_live_navigation_launch.py \
> session_file:=</path/to/session/file> \
> config_file:=</path/to/slam/config/json> \
> params_file:=</path/to/params/yaml>
Note
If you have already brought up the Kobuki with our
kobuki_setup_comms_launch.py
launch script, for example to teleoperate the
robot, you can launch the above command with the comms
argument set to
false
, to only launch the navigation and SLAM components.
$ ros2 launch slamcore_ros2_examples kobuki_live_navigation_launch.py \
> session_file:=</path/to/session/file> \
> config_file:=</path/to/slam/config/json> \
> params_file:=</path/to/params/yaml> \
> comms:=false
Visualization
You can visualize the robot navigating in the map on a separate machine by running:
$ ros2 launch slamcore_ros2_create3_example create3_navigation_monitoring_launch.py
$ ros2 launch slamcore_ros2_turtlebot4_example turtlebot4_navigation_monitoring_launch.py
$ ros2 launch slamcore_ros2_kobuki_example kobuki_navigation_monitoring_launch.py
Warning
In localisation mode, the occupancy map is published by our software with
Quality of Service Durability Transient Local. Therefore, if you launch
our SLAM but cannot see the map in RViz2, open the Map item in the left
sidebar of RViz2, then open the Topic drop-down and change the
Durability Policy to Transient Local.
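To double-check the QoS settings the map is actually published with, you can inspect the topic; the --verbose flag prints each publisher's QoS profile:
$ ros2 topic info /slamcore/map --verbose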

Note
If the map is rendered in RViz2
but the robot model does not appear, you
must manually drive the robot around until it relocalises in the loaded map.
You can tell that relocalisation has taken place when the local and global
costmaps start being rendered in the RViz2
view, or by subscribing to the /slamcore/pose
topic, where you will start seeing incoming Pose
messages.
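For example, you can watch for incoming pose messages from a terminal:
$ ros2 topic echo /slamcore/pose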
This is how the RViz2
view should look after the robot has relocalised.

Fig. 75 RViz2
view during navigation
Once Nav2 is up and running, see Interact with the Navigation Demo below to learn how to set single goals or multiple waypoints for navigation.
Interact with the Navigation Demo
In RViz2
we can set individual navigation goals with the
Nav2 Goal
button or by publishing a PoseStamped message to the /goal_pose
topic. The robot
will then try to plan a path and start navigating towards the goal if the path
is feasible.
Example - How to publish a goal to the /goal_pose
topic
$ ros2 topic pub /goal_pose geometry_msgs/PoseStamped "{header: {stamp: {sec: 0}, frame_id: 'map'}, pose: {position: {x: 0.2, y: 0.0, z: 0.0}, orientation: {w: 1.0}}}"

Fig. 76 Robot navigating towards single goal
It is also possible to issue multiple waypoints using Nav2’s Waypoint Mode
button. When Waypoint Mode
is selected, we can set multiple waypoints on the
map with the Nav2 Goal
button. When all waypoints have been set, press the
Start Navigation
button and the robot will attempt to navigate through all
the waypoints.

Fig. 77 Waypoint mode
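Waypoints can also be issued from the command line. As a minimal sketch, assuming the default Nav2 waypoint follower action server at /follow_waypoints (nav2_msgs/action/FollowWaypoints) and illustrative coordinates, two waypoints in the map frame could be sent with:
$ ros2 action send_goal /follow_waypoints nav2_msgs/action/FollowWaypoints \
> "{poses: [{header: {frame_id: 'map'}, pose: {position: {x: 0.5, y: 0.0, z: 0.0}, orientation: {w: 1.0}}}, \
> {header: {frame_id: 'map'}, pose: {position: {x: 1.0, y: 0.5, z: 0.0}, orientation: {w: 1.0}}}]}"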
Nav2 Configuration
The <robot>-nav2-demo-params.yaml
file (Create 3 file,
TurtleBot 4 Standard file,
Kobuki file)
contains parameters, such as the planner and costmap parameters, that can be
tuned to obtain the best navigation performance. The Nav2 docs include a
Configuration Guide
with information on the available parameters and how to use them.
In this file, we set the obstacle_layer
and voxel_layer
that are used in
the global and local costmap for obstacle avoidance. The observation source for
these is the /slamcore/local_point_cloud
published by our software. This
point cloud can be trimmed using a Slamcore JSON configuration file to, for example,
remove points below a certain height that should not be marked as obstacles. More details
on point cloud trimming can be found on the Point Cloud Configuration page.
Note
In addition to trimming the point cloud using a Slamcore JSON configuration
file, we set the ROS obstacle layer’s
min_obstacle_height
parameter in nav2-demo-params.yaml
to e.g.
-0.18
for the Kobuki.
This parameter sets a height (measured from the map
frame) above which
points are considered valid and can be marked as obstacles. All points
below it are simply ignored (they are not removed, as is the case with
point cloud trimming). As in this example the map
frame is at the camera
height, we want to make sure that points in the cloud below the camera
(between the camera height (23cm) and the floor) are included. If the map
frame were at ground level, min_obstacle_height
could be kept at e.g.
0.05
, so that only points 5cm above the map
Z coordinate would be
considered when marking obstacles.
This parameter can be used together with point cloud trimming, as is the case in this demo, or without point cloud trimming, if you would like to keep the raw local point cloud.
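For reference, here is a minimal sketch of how such an observation source might look in the costmap section of the Nav2 parameters file, assuming standard nav2_costmap_2d parameter names (the source name pointcloud and the values shown are illustrative, not the exact demo configuration):
local_costmap:
  local_costmap:
    ros__parameters:
      voxel_layer:
        plugin: "nav2_costmap_2d::VoxelLayer"
        observation_sources: pointcloud
        pointcloud:
          # Point cloud published by Slamcore, used for marking/clearing obstacles
          topic: /slamcore/local_point_cloud
          data_type: "PointCloud2"
          # Measured from the map frame; negative because the map frame is at camera height
          min_obstacle_height: -0.18
          marking: true
          clearing: true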
You may copy these files, found in the config folder, and adjust them to your setup. Remember to provide the path to the modified files when launching navigation to override the default ones.
Appendix
Troubleshooting
My TF tree seems to be split in two main parts
As described in Nav2 - Slamcore Integration, our ROS 2 Wrapper publishes both the map
\(\rightarrow\) odom
and odom
\(\rightarrow\) base_link
transformations in the TF tree. Thus, to avoid conflicts when
publishing the transforms (note that TF allows only a single parent frame
for each frame), make sure that no other node is publishing either of the
aforementioned transformations. For example, it is common for a
wheel-odometry node, such as the kobuki_node or the Create 3, to also
publish its wheel-odometry estimates in the transformation tree. We disable
this behaviour by setting the Kobuki node’s publish_tf
parameter to False
in our nav2-demo-params.yaml
config file or, in the case of the Create 3, via the robot’s webserver.
For reference, here’s a simplified version of how the TF tree looks when executing the Slamcore Nav2 demo on the Kobuki robot:

Fig. 78 Reference TF Tree during Navigation
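To inspect your own TF tree and check which frames are connected, the standard tf2 tools can help (on Foxy the executable is view_frames.py rather than view_frames):
$ ros2 run tf2_tools view_frames
$ ros2 run tf2_ros tf2_echo odom base_link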
PS4 Button Mapping
Known Issues
In ROS 2 Foxy, RViz2
may crash when setting a new goal if the Controller
visualization checkbox is enabled (see https://github.com/ros2/rviz/issues/703 for more details).
When loading the Create 3 model in RViz2
you might see the following error: Invalid frame ID "left_wheel" passed to canTransform argument source_frame
, or the bumper might appear offset. These issues have been reported in https://github.com/iRobotEducation/create3_sim/issues/125 and https://github.com/iRobotEducation/create3_sim/issues/170, and are being addressed with https://github.com/iRobotEducation/create3_sim/pull/200.