Nav2 Integration
This page presents a working example of integrating the SLAMcore SLAM algorithms into Nav2 (the ROS2 Navigation Stack) in ROS2 Foxy, using them as the core component to map the environment and to provide accurate positioning of the robotic platform. The tutorial includes steps detailing how to run the example natively on Ubuntu 20.04 or in a docker container for systems, such as the NVIDIA Jetson NX, which do not yet support Ubuntu 20.04.
Note
Using ROS1? Visit the ROS1 Navigation Stack Integration Tutorial Page
Goal
The goal of this demonstration is to use the SLAMcore SDK as the main source of positioning during navigation as well as for mapping the environment before or during navigation. In the Nav2 documentation’s examples, these two tasks are normally carried out using SLAM Toolbox, an open-source 2D graph-based SLAM library which uses a 2D laser scan for these tasks. AMCL, which also uses 2D laser scans, is also suggested as a localisation alternative.
Instead, we’ll be using the 2D Occupancy Mapping capabilities of our SDK to generate an occupancy grid map and our visual-inertial SLAM positioning to localise in that map. Additionally, we will integrate wheel odometry into our SLAM system to increase localisation robustness - this is available for customers as a paid add-on.
Hardware Setup
We are using the Kobuki robotic platform, the Intel RealSense D435i camera and the NVIDIA Jetson NX during this demonstration. We’re also using a custom mounting plate for placing the board and the camera on the robot. Lastly, we will use a separate laptop as a visualisation machine.
Note
The Jetson Xavier NX used for this example runs JetPack 4.6 (based on Ubuntu 18.04), whereas ROS2 Foxy targets Ubuntu 20.04. Therefore, for similar cases where Ubuntu 20.04 might not be available, instructions have been provided below on how to run this example using a docker container.
Robotic platform in use


Fig. 70 Main setup for navigation
Nav2 Setup
Traditionally Nav2 requires the following components to be in place:
An occupancy grid map of the environment, either generated ahead of time, or live.
A global planner and a controller (also known as local planner in ROS1) which guide your robot from the start to the end location. The default choice for these are NavFn for the global planner and DWB for the controller. Other available options are discussed in the Selecting the Algorithm Plugins section of the Nav2 docs.
A global and local costmap which assign computation costs to the aforementioned grid map so that the planner chooses to go through or to avoid certain routes in the map.
A localisation module, such as SLAM Toolbox or AMCL.
As discussed earlier, we’ll be using SLAMcore software to generate a map of the environment as well as localising the robot in the environment. On top of that, we’ll use the NavFn global planner and DWB controller for navigation. Lastly, we will be using the local point cloud published by our software for obstacle avoidance with costmap2D’s obstacle layer plugin.
Positioning information is transmitted to Nav2 using TF, so we will need to
make sure the correct transforms are set up and being broadcast for Nav2 to
function correctly. A short introduction to the required transforms is provided
in the Nav2 Setting Up Transformations tutorial page.
As explained in the drop-down below, the SLAMcore ROS Wrapper abides by REP-105 and, by default, publishes both the map \(\rightarrow\) odom and odom \(\rightarrow\) base_footprint transforms required for navigation.
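If you want to confirm these transforms are being broadcast once SLAM is up, one quick check (a sketch, assuming the default frame names map, odom and base_footprint) is to echo them with the standard tf2_ros tools:
$ # Print the transforms published by the SLAMcore wrapper (Ctrl-C to stop)
$ ros2 run tf2_ros tf2_echo map odom
$ ros2 run tf2_ros tf2_echo odom base_footprint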
Abiding by REP-105 - the map and odom frames
Note
Many popular ROS frameworks, like the navigation stack, abide by the ROS Coordinate Frames convention - REP-105 - and thus require two transformations to operate:
map \(\rightarrow\) base_footprint
odom \(\rightarrow\) base_footprint
where base_footprint (sometimes also called base_link) is the main frame of reference of the robot platform in use.
This way a ROS node that is interested in the pose of the robot can query either map \(\rightarrow\) base_footprint or odom \(\rightarrow\) base_footprint. If it queries the former, it will get the most accurate estimate of the robot pose, which may, however, include discontinuities or jumps. On the other hand, the odom \(\rightarrow\) base_footprint transform drifts and is overall less accurate, but it is guaranteed to change smoothly over time.
Traditionally, the localisation module would compute the map \(\rightarrow\) base_footprint transform and would use the latest odom \(\rightarrow\) base_footprint transform, as published by the odometry node (e.g. dead reckoning via wheel odometry), to eventually publish map \(\rightarrow\) odom and abide by REP-105.
To abide by this standard and also increase the overall accuracy of these transforms, the SLAMcore ROS Wrappers incorporate the latest odometry information and publish both the map \(\rightarrow\) odom and odom \(\rightarrow\) base_footprint transforms. This way we can provide a smooth odom \(\rightarrow\) base_footprint transform that potentially uses the wheel odometry as well as information from the visual and inertial sensors.
For more on SLAMcore’s frames of reference convention, see Frames of Reference Convention.
As seen above, Nav2 has similar requirements to the ROS1 Navigation Stack - you
need a map, some sort of positioning (TF) and some sensor streams for obstacle
avoidance, and these are all provided by our software. The main difference with
ROS1 is that Nav2 no longer uses the move_base
finite state machine and
instead uses Behaviour Trees to call modular servers
to complete an action (e.g. compute a path, navigate…). This allows the user to
configure the navigation behaviour easily using plugins in a behaviour tree xml
file. A detailed comparison with the ROS1 Navigation stack can be found in the
ROS to ROS2 Navigation Nav2
docs page.

Fig. 71 SLAMcore integration into Nav2
Nav2’s parameters and plugins, which can be configured for your unique use case, have been included and can be easily modified in the nav2-demo-params yaml file, in our repository. Details about Nav2 configuration and obstacle avoidance parameters can be found in the Nav2 Configuration section further below.
Outline
Following is the list of steps for this demo.

Fig. 72 Outline of the demo
We’ll delve into each one of these steps in more detail in the next sections.
[OPTIONAL] Set Up Visualisation Machine
[OPTIONAL] Run Visual-Inertial-Kinematic Calibration, to improve the overall performance.
Compute the slamcore/base_link ➞ base_footprint Transformation
Create a Map and Run Live Navigation while teleoperating your robot. For detailed steps on map creation and navigation, see Navigation in single session SLAM mode or Navigation in localisation mode using a prerecorded map.
Interact with the Navigation Demo, set waypoints and navigation goals using navigation_monitoring_launch.py.
Set Up Robot
You will need to download the SLAMcore ROS2 wrapper, regardless of whether you would like to run this demo on Native Ubuntu 20.04 or through a docker container. See the Getting Started page for details on how to download the “SLAMcore Tools” and “ROS2 Wrapper” Debian packages. Installation of “SLAMcore Tools” on a separate laptop can be useful to inspect recorded datasets and the generated occupancy grids.
Once you have downloaded the SLAMcore ROS2 Wrapper you may continue with set up and installation following the steps below.
On Native Ubuntu 20.04, a working ROS2 installation is required before installing the SLAMcore ROS2 Wrapper. Follow the steps on the SLAMcore ROS2 Wrapper page for details on how to install ROS2 Foxy and the SLAMcore ROS2 Wrapper.
Set up Binary Dependencies
When not using a Dockerfile, you will need to manually install a series of
packages using apt
.
Installing apt
dependencies
$ apt-get update && \
> apt-get upgrade -y && \
> apt-get install --no-install-recommends --assume-yes \
> software-properties-common \
> udev \
> keyboard-configuration \
> python3-colcon-* \
> python3-pip \
> python3-rosdep \
> ros-foxy-diagnostic-updater \
> ros-foxy-ecl-build \
> ros-foxy-joint-state-publisher \
> ros-foxy-kobuki* \
> ros-foxy-nav2* \
> ros-foxy-navigation2* \
> ros-foxy-nonpersistent-voxel-layer \
> ros-foxy-rviz2 \
> ros-foxy-xacro
Set up ROS2 workspace
You will have to create a new ROS2 workspace by cloning the slamcore-ros2-examples repository. This repository holds all the navigation-related nodes and configuration for enabling the demo. Before compiling the workspace, install vcstool which is used for fetching the additional ROS2 source packages.
Install vcstool
$ pip3 install --user --upgrade vcstool
Collecting vcstool
Downloading https://files.pythonhosted.org/packages/86/ad/01fcd69b32933321858fc5c7cf6ec1fa29daa8942d37849637a8c87c7def/vcstool-0.2.15-py3-none-any.whl (42kB)
Collecting PyYAML (from vcstool)
Downloading https://files.pythonhosted.org/packages/7a/5b/bc0b5ab38247bba158504a410112b6c03f153c652734ece1849749e5f518/PyYAML-5.4.1-cp36-cp36m-manylinux1_x86_64.whl (640kB)
Collecting setuptools (from vcstool)
Downloading https://files.pythonhosted.org/packages/4e/78/56aa1b5f4d8ac548755ae767d84f0be54fdd9d404197a3d9e4659d272348/setuptools-57.0.0-py3-none-any.whl (821kB)
Installing collected packages: PyYAML, setuptools, vcstool
Successfully installed PyYAML-5.4.1 setuptools-57.0.0 vcstool-0.2.15
After that, clone the repository, run vcstool and finally colcon build the packages.
Setting up ROS2 Workspace
$ git clone git@github.com:slamcore/slamcore-ros2-examples
Cloning into 'slamcore-ros2-examples'...
$ cd slamcore-ros2-examples
$ vcs import src < repos.yaml
...
=== src/kobuki_ros (git) ===
Cloning into '.'...
=== src/kobuki_ros_interfaces (git) ===
Cloning into '.'...
$ colcon build
...
$ source install/setup.bash
...
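If you later change only the example packages, a small optional shortcut (assuming the package name slamcore_ros2_examples used by the launch commands in this tutorial) is to rebuild just that package rather than the whole workspace:
$ colcon build --packages-select slamcore_ros2_examples
$ source install/setup.bash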
Once you have set up the new workspace, make sure you set up the appropriate udev rules to communicate with the Kobuki, as explained below.
In this case, no ROS2 installation is required before installing the SLAMcore ROS2 wrapper. Follow these instructions to set up our ROS2 Wrapper and example repository in a docker container. This is the preferred method if you are on a platform such as NVIDIA’s Xavier NX, which does not yet support the installation of Ubuntu 20.04, or if you prefer an isolated installation.
You should have downloaded the SLAMcore ROS2 Wrapper for your host system from the Download SLAMcore Software link on the SLAMcore Portal. Clone the slamcore-ros2-examples repository:
$ git clone https://github.com/slamcore/slamcore-ros2-examples.git
Set the SLAMCORE_DEB variable for the current shell to the path to your downloaded SLAMcore ROS2 Wrapper by using the following command.
$ export SLAMCORE_DEB=$HOME/path/to/the/downloaded/debian/package
Run make build from within the slamcore-ros2-examples directory. This will build the corresponding dockerfile image - downloading all the necessary Debian packages, setting up the ROS workspace, etc.
$ cd slamcore-ros2-examples/
$ make build
To bring up the container for the first time use the following from within the current directory:
$ make run
To bring up additional terminals in the same running container, run the following from a new terminal:
$ make login
Note
You can run make help to see all available commands along with their descriptions.
Now you should be in the container and should have /ros_ws in your $PATH. The /slamcore-ros2-examples directory on your machine has been mounted in the container inside the /ros_ws workspace. Therefore, files saved inside the /ros_ws/slamcore-ros2-examples directory in the container should appear in the /slamcore-ros2-examples directory on your machine.
We will use vcstool to fetch the additional ROS2 source packages needed for this demo.
/ros_ws$ cd slamcore-ros2-examples/
/ros_ws/slamcore-ros2-examples$ vcs import src < repos.yaml
Build the packages from within the /ros_ws directory:
/ros_ws/slamcore-ros2-examples$ cd ..
/ros_ws$ colcon build
Once built, source the install/setup.bash file. You will need to source this file in every new terminal.
/ros_ws$ source install/setup.bash
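As an optional sanity check (assuming the Wrapper Debian package installed correctly inside the image), you can verify that the SLAMcore packages are visible to ROS2 from within the container:
/ros_ws$ ros2 pkg list | grep slamcore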
If you would like to exit the running container, simply use Ctrl + D or type exit.
Once you have set up your container, make sure you also set up the Kobuki udev rules correctly on your system (from outside the container) to communicate with Kobuki, as detailed below.
Warning
In order to communicate with the Kobuki, you will also need to set up the
appropriate udev rules. To do that, copy the 60-kobuki.rules
file,
available from the kobuki_ftdi
repository to the /etc/udev/rules.d
directory of your system.
# Download the file or clone the repository
$ wget https://raw.githubusercontent.com/kobuki-base/kobuki_ftdi/devel/60-kobuki.rules
# Copy it to the correct directory
$ sudo cp 60-kobuki.rules /etc/udev/rules.d
$ sudo service udev reload
$ sudo service udev restart
You may need to reboot your machine for changes to take effect.
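If you want to check the rules took effect without rebooting, the sketch below reloads them and looks for the device node; the kobuki rule typically creates a /dev/kobuki symlink once the robot is plugged in (an assumption based on the kobuki_ftdi rules):
$ sudo udevadm control --reload-rules && sudo udevadm trigger
$ ls -l /dev/kobuki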
Set Up Visualisation Machine
You may repeat the same steps outlined above on a second machine for visualisation. We will then be able to take advantage of ROS2’s network capabilities to visualise the topics being published by the robot on this second machine with minimal effort - as long as both machines are on the same network.
Note
You may use the dockerfile provided even if your system is Ubuntu 20.04 for an easier, isolated setup.
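For the two machines to discover each other, they only need to be on the same network and use the same ROS domain. As a minimal sketch (the domain ID value 42 is just an example), set the same domain on both machines and, optionally, test DDS discovery with the built-in multicast check:
$ # Run on both the robot and the visualisation machine (e.g. add to ~/.bashrc)
$ export ROS_DOMAIN_ID=42
$ # Optional connectivity test: run `receive` on one machine and `send` on the other
$ ros2 multicast receive
$ ros2 multicast send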
Run Visual-Inertial-Kinematic Calibration [Paid Add-on]
To increase the overall accuracy of the pose estimation we will fuse the wheel-odometry measurements of the robot encoders into our SLAM processing pipeline. This also makes our positioning robust to kidnapping issues (objects partially or totally blocking the camera field of view) since the algorithm can now depend on the odometry to maintain tracking.
To enable the wheel-odometry integration, follow the corresponding tutorial: Wheel Odometry Integration. After the aforementioned calibration step, you will receive a VIK configuration file similar to the one shown below:
VIK configuration file
{
"Version": "1.0.0",
"Base": {
"KinematicsEnabled": true,
"Synchroniser": {
"KinematicsPoseCameraOffset": "42.0ms"
}
},
"Position": {
"Backend": {
"Type": "VisualInertialKinematic"
},
"Odometry": {
"EstimationScaleY": false,
"ScaleTheta": 0.9364361585576231,
"ScaleX": 1.0009030227774691,
"ScaleY": 1.0,
"SigmaCauchyKernel": 0.0445684,
"SigmaTheta": 1.94824,
"SigmaX": 0.0212807,
"SigmaY": 0.00238471,
"T_SO": {
"R": [
0.5062407414595297,
0.49242403927156886,
-0.4916330719846658,
0.50944656222699
],
"T": [
0.015391235851986106,
0.23577251157419954,
-0.08736454907300172
]
}
}
}
}
Compute the slamcore/base_link ➞ base_footprint Transformation
Before you can start navigating in the environment, you need to provide the transformation between the frame of reference of the robot base, base_footprint in our example, and the frame of reference of the SLAMcore pose estimation algorithm, i.e., slamcore/base_link.
The translation and rotation parts of this transform should be specified in the
xyz
and rpy
fields of the slamcore_camera_to_robot_base_transform
parameter in the nav2-demo-params yaml config file. You may copy this file
and modify the parameters to suit your setup. You can then load this file by
passing in the path to the file using the params_file
argument when
launching navigation.
Order of Transforms in TF - Validation of specified transform
The order of transforms in the TF tree is as follows:
map \(\rightarrow\) odom \(\rightarrow\) slamcore/base_link \(\rightarrow\) base_footprint
Note that slamcore/base_link
, i.e. the frame of the SLAMcore pose
estimation is the parent of the base_footprint
. Thus,
xyz
and rpy
should encode the
transformation of the base_footprint
relative to the
slamcore/base_link
frame.
Also note that when the camera is pointing forwards and parallel to the robot platform surface, the axes of the slamcore/base_link frame should be:
Z pointing forwards
X pointing to the right side
Y pointing downwards
In contrast, the robot base_footprint
axes commonly are as follows:
X pointing forwards
Y pointing to the left side
Z pointing upwards
You can also visually inspect the validity of your transformation using the
view_model_launch.py
file. Note this might take some time to load during which RViz2
will be
unresponsive. Here’s the relative transform of the aforementioned frames in
our setup:
$ ros2 launch slamcore_ros2_examples view_model_launch.py

You can also refer to the Troubleshooting section for a simplified version of the overall TF Tree.
The more accurate this transform the better, but for now a rough estimation will
do. In our case, since the camera is placed 23cm above and 9.3cm in front of the
base_footprint
we used the following values.
slamcore_camera_to_robot_base_transform:
  ros__parameters:
    parent_frame: slamcore/base_link
    child_frame: base_footprint
    xyz: [0.015, 0.236, -0.087]
    rpy: [0.000, -1.571, 1.571]
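Once navigation is running, you can sanity-check that the values you configured are the ones actually being broadcast, for example by echoing the transform between the two frames (a sketch using the standard tf2_ros tools):
$ ros2 run tf2_ros tf2_echo slamcore/base_link base_footprint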
slamcore_camera_to_robot_base_transform variables and VIK configuration parameters
Note that, if you are also integrating wheel odometry measurements, the slamcore/base_link \(\rightarrow\) base_footprint transform will be specified in two places: once in the slamcore_camera_to_robot_base_transform variables in nav2-demo-params.yaml and a second time in the kinematic parameters (see the Run Visual-Inertial-Kinematic Calibration [Paid Add-on] section) of the slam-config.json file. You can save some time and copy the T_SO.T section given by the VIK config file to the slamcore_camera_to_robot_base_transform xyz values in nav2-demo-params.yaml. Note, however, that the calibration procedure cannot compute the height difference between the two frames since it is not observable in planar motion. Therefore, you will have to measure and manually edit the (Y) value of slamcore_camera_to_robot_base_transform’s xyz variable.
Create a Map and Run Live Navigation
There are two ways of running this example:
Navigation in single session SLAM mode, creating a map live and navigating inside it as the robot explores a space.
Navigation in localisation mode using a prerecorded map, creating a map first and then loading the map at startup.
Note
This demo assumes that the visualisation and the processing (SLAM and
navigation) happen on separate machines (a SLAM Machine and a Visualisation
Machine) and takes advantage of ROS2’s convenient network capabilities. If,
however, you have an external monitor connected to your robot, you can also
run the visualisation commands on your robot. We currently do not support
launching RViz2
via docker on Jetson platforms.
Navigation in single session SLAM mode
In single session SLAM mode, the robot will generate a map as it moves around a space. When SLAM is stopped, the map will be discarded unless the slamcore/save_session service is called beforehand.
You can bring up the robot, SLAM and Nav2 in single session SLAM mode with the
following command on your robot. If you have created new SLAM and Nav2
configuration files, don’t forget to pass these in with the config_file
and params_file
arguments respectively, to override the default ones. The
config_file
should contain the VIK calibration parameters which will
enable the robot to use Visual-Inertial-Kinematic SLAM (if you have completed a
VIK calibration) and any additional SLAMcore configuration parameters, detailed
in SLAM Configuration.
$ ros2 launch slamcore_ros2_examples kobuki_live_navigation_launch.py \
> config_file:=</path/to/slam/config/json> \
> params_file:=</path/to/params/yaml/>
Note
If you have already brought up the Kobuki with our kobuki_setup_comms_launch.py launch script, for example to teleoperate the robot, you can launch the above file with the comms argument set to false, to only launch the navigation and SLAM components.
$ ros2 launch slamcore_ros2_examples kobuki_live_navigation_launch.py \
> session_file:=<path/to/session/file> \
> config_file:=</path/to/slam/config/json> \
> params_file:=</path/to/params/yaml/> \
> comms:=false
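Once the launch file is up, a few optional checks (a sketch assuming the default topic names and the standard Nav2 node names) can confirm that both SLAM and Nav2 started correctly before you begin teleoperating:
$ # SLAM topics should be listed, e.g. /slamcore/pose, /slamcore/map
$ ros2 topic list | grep slamcore
$ # The local point cloud used for obstacle avoidance should be publishing
$ ros2 topic hz /slamcore/local_point_cloud
$ # The main Nav2 servers should be running
$ ros2 node list | grep -e planner_server -e controller_server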
To create the map, you can teleoperate the robot, using your keyboard or a PS4 controller, by running the following in another terminal:
$ # Use the arrow keys to move around
$ ros2 run slamcore_ros2_examples kobuki_teleop_key
$ # Hold L1 and use the left joystick for Fwd/Back, right joystick for Left/Right
$ ros2 run slamcore_ros2_examples kobuki_teleop_joy
Note
If the above mapping does not work with your joystick/driver, you may try
the following alternative using the joystick_mode:=old
argument:
$ # Press L1 and use the left joystick for Fwd/Back, right joystick for Left/Right
$ ros2 run slamcore_ros2_examples kobuki_teleop_joy --ros-args -p joystick_mode:=old
See the PS4 Button Mapping section in the Appendix for an illustration of the PS4 controller buttons to be used.
On your visualisation machine, run the following to visualise the map being
created in RViz2
:
$ ros2 launch slamcore_ros2_examples navigation_monitoring_launch.py
Note
This demo assumes that the visualisation and the processing (SLAM and
navigation) happen on separate machines (a SLAM Machine and a Visualisation
Machine). If, however, you have an external monitor connected to your robot,
you can also open RViz2 by running the above in a new terminal or by passing in
the use_rviz:=true argument when launching navigation with our launch
file. We currently do not support launching RViz2 via docker on Jetson
platforms.
$ ros2 launch slamcore_ros2_examples kobuki_live_navigation_launch.py use_rviz:=true
Once Nav2 is up and running, see Interact with the Navigation Demo to learn how to set single goals or multiple waypoints for navigation.
If you would like to save the map to reuse it in the future, you can call the
slamcore/save_session
service from another terminal on your robot:
$ ros2 service call /slamcore/save_session std_srvs/Trigger
By default, the session file will be saved to the working directory from which
kobuki_live_navigation_launch.py
was called and SLAM will be paused while
the file is being generated. If you would like the service to save the session
file to a specific directory, you must set the session_save_dir
parameter
when launching kobuki_live_navigation_launch.py
. Note that when using our
Docker container, files should be saved inside the slamcore-ros2-docker
directory of the container, as otherwise they will not be saved to your machine
after exiting the container.
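For example, assuming session_save_dir is passed like the other launch arguments, an invocation that stores session files in a directory of your choice (placeholder path below) could look as follows:
$ ros2 launch slamcore_ros2_examples kobuki_live_navigation_launch.py \
> config_file:=</path/to/slam/config/json> \
> params_file:=</path/to/params/yaml/> \
> session_save_dir:=</path/to/save/directory>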
The next time you launch navigation you can provide the path to the session file
using the session_file
argument as explained in the section below.
Navigation in localisation mode using a prerecorded map
In localisation mode, we can navigate in a map that has been recorded
previously. This map can either be generated from a recorded dataset or at the
end of a previous live run by using the slamcore/save_session
service as
mentioned above. If running on a limited compute platform, it might be
preferable to record a dataset and then generate the map on a more powerful
machine. Check the drop-down below to see the benefits and disadvantages of
creating a map live vs. from a recorded dataset.
Pros/Cons of generating a map live vs from a recorded dataset
Instead of first generating a dataset and then creating a session and map from that dataset as the previous sections have described, you could alternatively create a session at the end of a standard SLAM run. Compared to the approach described above this has a few pros and cons worth mentioning:
✅ No need to record a dataset, or move it to another machine and run SLAM there
✅ You can interactively see the map as it gets built and potentially focus on the areas that are under-mapped
❌ Generating a session at the end of the run may take considerably longer if you are running on a Jetson NX compared to running on an x86_64 machine.
❌ You can’t modify the configuration file and see its effects as you would when having separate dataset recording and mapping steps.
❌ If something goes wrong in the pose estimation or mapping procedure, you don’t have the dataset to further investigate and potentially report the issue back to SLAMcore
The two options for map creation are detailed below.
We can generate a map by first recording a dataset and then processing it
using SLAMcore Visualiser
(GUI) or slamcore_dataset_processor
(command line tool).
Record Dataset to Map the Environment
We will be using the dataset_recorder.launch.py
launch file to
capture a dataset that contains visual-inertial, depth as well as kinematic
information. We’ll also use the kobuki_teleop_key
script to teleoperate
the robot with the keyboard. We also need to bring up the Kobuki to be able
to drive it around.
First, bring up the Kobuki:
$ ros2 launch slamcore_ros2_examples kobuki_setup_comms_launch.py
Then run teleoperation on a separate terminal:
$ # For Keyboard Teleop - Use the arrow keys to move around
$ ros2 run slamcore_ros2_examples kobuki_teleop_key
$ # For Joystick Teleop - Hold L1 and use the left joystick for Fwd/Back, right joystick for Left/Right
$ ros2 run slamcore_ros2_examples kobuki_teleop_joy
Note
If the above mapping does not work with your joystick/driver, you may try
the following alternative using the joystick_mode:=old
argument:
$ # Press L1 and use the left joystick for Fwd/Back, right joystick for Left/Right
$ ros2 run slamcore_ros2_examples kobuki_teleop_joy --ros-args -p joystick_mode:=old
See the PS4 Button Mapping section in the Appendix for an illustration of the PS4 controller buttons to be used.
Finally, launch the SLAMcore dataset recorder:
$ ros2 launch slamcore_slam dataset_recorder.launch.py \
> override_realsense_depth:=true \
> realsense_depth_override_value:=true \
> odom_reading_topic:=/odom
Notice that recording the kinematic measurements (by subscribing to a wheel odometry topic) is not necessary, since we can generate a map using purely the visual-inertial information from the camera. Kinematics will however increase the overall accuracy if recorded and used.
When you have covered all the space that you want to map, send a Ctrl-c
signal
to the application to stop. By default, the dataset will be saved in the
current working directory, unless the output_dir
argument is specified.
Note that when using our Docker container, files should be saved inside the
slamcore-ros2-docker
directory of the container, as otherwise they will
not be saved to your machine after exiting the container.
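For example, to record straight into a directory of your choice (placeholder path below), add the output_dir argument to the recorder launch command:
$ ros2 launch slamcore_slam dataset_recorder.launch.py \
> override_realsense_depth:=true \
> realsense_depth_override_value:=true \
> odom_reading_topic:=/odom \
> output_dir:=</path/to/dataset/output>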
You now have to process this dataset and generate the .session
file. In
our case, we compressed and copied the dataset to an x86_64 machine in order
to accelerate the overall procedure.
$ tar cvfz mydataset.tgz mydataset/
$ rsync --progress -avt mydataset.tgz <ip-addr-of-x86_64-machine>:
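On the x86_64 machine, extract the archive before processing, since the next step assumes an uncompressed dataset:
$ tar xvfz mydataset.tgz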
Create Session and Map for Navigation
Once you have the (uncompressed) dataset on the machine where you want to do the processing, use the slamcore_visualiser to process the whole dataset and, at the end of it, save the resulting session.
$ # Launch slamcore_visualiser, enable mapping features - `-m`
$ slamcore_visualiser dataset \
> -u mydataset/ \
> -c /usr/share/slamcore/presets/mapping/default.json \
> -m
Alternatively, you can use the slamcore_dataset_processor
command line
tool, however, you won’t be able to visualise the map being built.
$ # Launch slamcore_dataset_processor, enable mapping features `-m` and session saving `-s`
$ slamcore_dataset_processor dataset \
> -u mydataset/ \
> -c /usr/share/slamcore/presets/mapping/default.json \
> -m \
> -s
Note
Refer to Step 2 - Prepare the mapping configuration file in case you want to tune the mapping configuration file in use or include the VIK configuration parameters.


2.5D Map and 2D Occupancy Grid being generated
Edit the Generated Session/Map
You can optionally use slamcore_session_explorer
and the editing tool of
your choice, e.g. Gimp
to create the final session and corresponding
embedded map. See SLAMcore Session Explorer for more. When done, copy the
session file over to the machine that will be running SLAM, if not already
there.
Instead of recording a dataset, we can save a session map directly when
running SLAM in Height Mapping/Single Session mode. If you are already
running Nav2 in single session SLAM mode with the commands shown in
Navigation in single session SLAM mode, you can simply call the
slamcore/save_session
service to save the current map that is being
generated.
$ ros2 service call /slamcore/save_session std_srvs/Trigger
Otherwise, we can create a new map from scratch interactively by simply running the robot teleoperation script and launching SLAM in height mapping mode (Note that with the commands below we do not bring up Nav2 and, therefore, autonomous navigation capabilities will not be available).
First, bring up the Kobuki:
$ ros2 launch slamcore_ros2_examples kobuki_setup_comms_launch.py
Then run teleoperation on a separate terminal:
$ # For Keyboard Teleop - Use the arrow keys to move around
$ ros2 run slamcore_ros2_examples kobuki_teleop_key
$ # For Joystick Teleop - Hold L1 and use the left joystick for Fwd/Back, right joystick for Left/Right
$ ros2 run slamcore_ros2_examples kobuki_teleop_joy
Note
If the above mapping does not work with your joystick/driver, you may try
the following alternative using the joystick_mode:=old
argument:
$ # Press L1 and use the left joystick for Fwd/Back, right joystick for Left/Right
$ ros2 run slamcore_ros2_examples kobuki_teleop_joy --ros-args -p joystick_mode:=old
See the PS4 Button Mapping section in the Appendix for an illustration of the PS4 controller buttons to be used.
Finally, launch SLAM in Height Mapping Mode:
$ ros2 launch slamcore_slam slam_publisher.launch.py generate_map2d:=true
You can visualise the map being created in RViz2
by subscribing to the
/slamcore/map
topic.
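If the map does not show up in RViz2, a quick check (a sketch using the standard ROS2 CLI) is to confirm that the occupancy grid topic exists and has a publisher:
$ ros2 topic info /slamcore/map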
To save the map, call the slamcore/save_session
service from another
terminal on your robot:
$ ros2 service call /slamcore/save_session std_srvs/Trigger
The session file will be saved to the current working directory by default.
Note that when using our Docker container, files should be saved inside the
slamcore-ros2-docker
directory of the container, as otherwise they will
not be saved to your machine after exiting the container.
Once you have the map, you can pass the path to the session file as an argument
when launching navigation. If you have created new SLAM and Nav2 configuration
files, don’t forget to pass these in with the config_file
and
params_file
arguments respectively, to override the default ones. The
config_file
should contain the VIK calibration parameters which will
enable the robot to use Visual-Inertial-Kinematic SLAM (if you have completed a
VIK calibration) and any additional SLAMcore configuration parameters, detailed
in SLAM Configuration.
$ ros2 launch slamcore_ros2_examples kobuki_live_navigation_launch.py \
> session_file:=<path/to/session/file> \
> config_file:=</path/to/slam/config/json> \
> params_file:=</path/to/params/yaml/>
Note
If you have already brought up the Kobuki with our kobuki_setup_comms_launch.py launch script, for example to teleoperate the robot, you can launch the above command with the comms argument set to false, to only launch the navigation and SLAM components.
$ ros2 launch slamcore_ros2_examples kobuki_live_navigation_launch.py \
> session_file:=<path/to/session/file> \
> config_file:=</path/to/slam/config/json> \
> params_file:=</path/to/params/yaml/> \
> comms:=false
You can visualise the robot navigating in the map on a separate machine by running:
$ ros2 launch slamcore_ros2_examples navigation_monitoring_launch.py
Note
This demo assumes that the visualisation and the processing (SLAM and
navigation) happen on separate machines (a SLAM Machine and a Visualisation
Machine). If, however, you have an external monitor connected to your robot,
you can also open RViz2 by running the above in a new terminal or by passing in
the use_rviz:=true argument when launching navigation with our launch
file. We currently do not support launching RViz2 via docker on Jetson
platforms.
$ ros2 launch slamcore_ros2_examples kobuki_live_navigation_launch.py use_rviz:=true
If the map is not rendered in RViz2
, you can launch a teleoperation node to
manually drive the robot around until it relocalises in the loaded map:
$ # For Keyboard Teleop - Use the arrow keys to move around
$ ros2 run slamcore_ros2_examples kobuki_teleop_key
$ # For Joystick Teleop - Hold L1 and use the left joystick for Fwd/Back, right joystick for Left/Right
$ ros2 run slamcore_ros2_examples kobuki_teleop_joy
Note
If the above mapping does not work with your joystick/driver, you may try
the following alternative using the joystick_mode:=old
argument:
$ # Press L1 and use the left joystick for Fwd/Back, right joystick for Left/Right
$ ros2 run slamcore_ros2_examples kobuki_teleop_joy --ros-args -p joystick_mode:=old
See the PS4 Button Mapping section in the Appendix for an illustration of the PS4 controller buttons to be used.
Once Nav2 is up and running, see Interact with the Navigation Demo below to learn how to set single goals or multiple waypoints for navigation.
Interact with the Navigation Demo
At first, we may need to teleoperate the robot manually, until the SLAM
algorithm relocalises. We can see that the relocalisation took place by looking
at the RViz2
view where the local and global costmaps start getting rendered, or
by subscribing to the /slamcore/pose topic, where we start seeing incoming Pose messages.
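For example, you can watch the pose topic from a terminal; messages will only start arriving once relocalisation has happened:
$ ros2 topic echo /slamcore/pose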
This is how the RViz2 view should look after the robot has relocalised.

Fig. 73 RViz2
view during navigation
Similar to ROS1, in RViz2
we can set individual navigation goals with the
Nav2 Goal
button or by publishing a Pose message to the /goal_pose
topic. The robot
will then try to plan a path and start navigating towards the goal if the path
is feasible.
Example - How to publish a goal to the /goal_pose
topic
$ ros2 topic pub /goal_pose geometry_msgs/PoseStamped "{header: {stamp: {sec: 0}, frame_id: 'map'}, pose: {position: {x: 0.2, y: 0.0, z: 0.0}, orientation: {w: 1.0}}}"

Fig. 74 Robot navigating towards single goal
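Goals can also be sent through Nav2’s action interface, which additionally reports feedback and the final result. The following is a sketch assuming the default /navigate_to_pose action server provided by Nav2:
$ ros2 action send_goal /navigate_to_pose nav2_msgs/action/NavigateToPose \
> "{pose: {header: {frame_id: 'map'}, pose: {position: {x: 0.2, y: 0.0}, orientation: {w: 1.0}}}}"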
It is also possible to issue multiple waypoints using Nav2’s Waypoint Mode
button. When Waypoint Mode
is selected, we can set multiple waypoints on the
map with the Nav2 Goal
button. When all waypoints have been set, press the
Start Navigation
button and the robot will attempt to navigate through all
the waypoints.

Fig. 75 Waypoint mode
Nav2 Configuration
The nav2-demo-params yaml file contains parameters, such as the planner and costmap parameters, that can be tuned to obtain the best navigation performance. The Nav2 docs include a Configuration Guide with information on the available parameters and how to use them.
In this file, we set the obstacle_layer
and voxel_layer
that are used in
the global and local costmap for obstacle avoidance. The observation source for
these is the /slamcore/local_point_cloud
published by our software. This
point cloud can be trimmed using a SLAMcore JSON configuration file to, for example,
remove points below a certain height that should not be marked as obstacles. More details
on point cloud trimming can be found on the Point Cloud Configuration page.
Note
In addition to trimming the point cloud using a SLAMcore JSON configuration
file, we set the ROS obstacle layer’s
min_obstacle_height
parameter in nav2-demo-params.yaml
to -0.18
.
This parameter lets you set a height (measured from the map frame) above which points are considered valid and can be marked as obstacles.
All points below are simply ignored (they are not removed, as is the case
with point cloud trimming). As in this example the map
frame is at the
camera height, we want to make sure that points in the cloud that are below
the camera (between camera height (23cm) and the floor) are included. If the
map frame was on the ground level, min_obstacle_height
could be kept at
e.g. 0.05
- only points 5cm above the map
Z coordinate would be
considered when marking obstacles.
This parameter can be used together with point cloud trimming, as is the case in this demo, or without point cloud trimming, if you would like to keep the raw local point cloud.
You may copy these files, found in the config folder, and adjust them to your setup. Remember to provide the path to the modified files when launching navigation to override the default ones.
$ ros2 launch slamcore_ros2_examples kobuki_live_navigation_launch.py \
> session_file:=<path/to/session/file> \
> config_file:=</path/to/slam/config/json> \
> params_file:=</path/to/params/yaml/>
Appendix
Troubleshooting
My TF tree seems to be split in two main parts
As described in Nav2 Setup, our ROS2 Wrapper publishes both the map \(\rightarrow\) odom and odom \(\rightarrow\) base_footprint transformations in the TF tree. Thus, to avoid conflicts when publishing the transforms (note that TF allows only a single parent frame for each frame), make sure that there is no other node publishing any of the aforementioned transformations. E.g. it is common for the wheel-odometry node, in our case the kobuki_node, to also publish its wheel-odometry estimates in the transformation tree. We disable this behaviour by setting the Kobuki node’s publish_tf parameter to False in our nav2-demo-params.yaml config file.
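To inspect the full tree and spot a frame with two would-be parents, you can generate a TF tree diagram with tf2_tools (in Foxy the executable is view_frames.py; newer distributions drop the .py suffix). This writes a PDF of the tree in the current directory:
$ ros2 run tf2_tools view_frames.py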
For reference, here is a simplified version of how the TF tree looks when executing the SLAMcore Nav2 demo:

Fig. 76 Reference TF Tree during Navigation
PS4 Button Mapping
Existing issues in Nav2 / ROS2
RViz2 may crash when setting a new goal if the Controller visualisation checkbox is enabled. (See https://github.com/ros2/rviz/issues/703 for more details.)