
ROS1 Navigation Stack Integration

This page presents a working example of integrating the SLAMcore SLAM algorithms into the ROS1 navigation stack, using them both to map the environment and to provide accurate positioning of the robotic platform.

Note

Using ROS2? Visit the Nav2 Integration Tutorial Page

Goal

The goal of this demonstration is to use the SLAMcore SDK as the main source of positioning during navigation, and also to use it to map the environment beforehand. These two tasks have traditionally been handled by components such as AMCL and gmapping respectively, both of which, by default, rely on 2D laser scan information to achieve their goal.

Instead, we’ll be using the 2D Occupancy Mapping capabilities of the SDK to generate the initial occupancy grid map, and our SLAM positioning to localise within that map.

Hardware Setup

We are using the Kobuki robotic platform, the D435i camera and the NVIDIA Jetson NX during this demonstration. We’re also using a custom mounting plate for placing the board and the camera on the robot.

Robotic platform in use
_images/kobuki.png
_images/slamcore-ros-setup.png

Fig. 64 Main setup for navigation

ROS1 Navigation Stack Setup

Traditionally the ROS1 navigation stack, move_base, requires the following components to be in place:

  • An occupancy grid map of the environment, either generated ahead of time, or live.

  • A global and a local planner which guide your robot from the start to the end location. Common choices are navfn for the global planner and DWA for the local planner.

  • A global and a local costmap which assign traversal costs to the aforementioned grid map so that the planners choose to go through or to avoid certain routes in the map.

  • A localisation module, such as AMCL or Cartographer.

As discussed earlier, we’ll be using our software to generate a map of the environment before navigation, as well as to localise the robot in it. On top of that, we’ll use the GlobalPlanner and the TebLocalPlanner instead of navfn and DWALocalPlanner respectively, since this combination produced the best results during navigation. Lastly, we will be using the local point cloud published by our software for obstacle avoidance with costmap2D’s obstacle layer, as detailed in the Obstacle Avoidance section.
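
As a quick sanity check later on, once move_base is running, you can query which planner plugins were actually loaded. This assumes the node runs under the default /move_base name; the expected values noted in the comments are the standard plugin names for GlobalPlanner and TebLocalPlanner.

$ # With move_base running, check which planner plugins are configured
$ rosparam get /move_base/base_global_planner   # typically: global_planner/GlobalPlanner
$ rosparam get /move_base/base_local_planner    # typically: teb_local_planner/TebLocalPlannerROS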

_images/slamcore-navstack.png

Fig. 65 SLAMcore integration into ROS1 Navigation Stack

Outline

The following is the list of steps for this demo. We’ll delve into each of these steps in more detail in the next sections.

  1. Set up Dependencies

  2. [OPTIONAL] Run Visual-Inertial-Kinematic Calibration, to improve the overall performance.

  3. Compute the slamcore/base_link ➞ base_footprint Transformation

  4. Record Dataset to Map the Environment by teleoperating your robot.

  5. Create Session and Map for Navigation, based on the dataset recorded in the previous step

  6. [OPTIONAL] Edit the Generated Session/Map, in case of small inaccuracies or artifacts.

  7. Launch Live Navigation, based on the generated session file, using the kobuki_live_navigation.launch file.

  8. [OPTIONAL] Interact with the Navigation Demo, using navigation_monitoring_slamcore.launch

Here’s also a graphical representation of the above:

_images/steps.png

Fig. 66 Outline of the demo

Set up Dependencies

Set up Binary Dependencies

See the Getting Started page to install the “SLAMcore Tools” and the “ROS1 Wrapper” Debian packages. We also have to set up the RealSense D435i as described in Setting up a Camera.

In addition, we need to install a series of packages using apt.

Installing apt dependencies
$ apt-get update && \
>   apt-get upgrade -y && \
>   apt-get install --no-install-recommends --assume-yes \
>   software-properties-common \
>   udev \
>   ros-melodic-compressed-image-transport \
>   ros-melodic-depthimage-to-laserscan \
>   ros-melodic-interactive-markers \
>   ros-melodic-joy \
>   ros-melodic-joy-teleop \
>   ros-melodic-map-server \
>   ros-melodic-move-base \
>   ros-melodic-navigation \
>   ros-melodic-rosserial-arduino \
>   ros-melodic-rosserial-client \
>   ros-melodic-rosserial-msgs \
>   ros-melodic-rosserial-python \
>   ros-melodic-rosserial-server \
>   ros-melodic-rqt-image-view \
>   ros-melodic-teb-local-planner \
>   ros-melodic-teleop-twist-joy \
>   ros-melodic-teleop-twist-keyboard \
>   ros-melodic-urdf \
>   ros-melodic-xacro \
>   ros-melodic-capabilities \
>   ros-melodic-ecl-exceptions \
>   ros-melodic-ecl-geometry \
>   ros-melodic-ecl-linear-algebra \
>   ros-melodic-ecl-sigslots \
>   ros-melodic-ecl-streams \
>   ros-melodic-ecl-threads \
>   ros-melodic-ecl-time \
>   ros-melodic-kobuki-dock-drive \
>   ros-melodic-kobuki-driver \
>   ros-melodic-kobuki-ftdi \
>   ros-melodic-kobuki-msgs \
>   ros-melodic-std-capabilities \
>   ros-melodic-yocs-cmd-vel-mux \
>   ros-melodic-yocs-controllers \
>   ros-melodic-yocs-velocity-smoother \
>   ros-melodic-ddynamic-reconfigure

Set up ROS1 Workspace

You have to create a new ROS1 workspace by cloning the slamcore-ros1-examples repository. This repository holds all the navigation-related nodes and configuration for enabling the demo. Before compiling the workspace, install vcstool, which is used to fetch the additional ROS1 source packages.

Install vcstool
$ pip3 install --user --upgrade vcstool

Collecting vcstool
  Downloading https://files.pythonhosted.org/packages/86/ad/01fcd69b32933321858fc5c7cf6ec1fa29daa8942d37849637a8c87c7def/vcstool-0.2.15-py3-none-any.whl (42kB)
Collecting PyYAML (from vcstool)
  Downloading https://files.pythonhosted.org/packages/7a/5b/bc0b5ab38247bba158504a410112b6c03f153c652734ece1849749e5f518/PyYAML-5.4.1-cp36-cp36m-manylinux1_x86_64.whl (640kB)
Collecting setuptools (from vcstool)
  Downloading https://files.pythonhosted.org/packages/4e/78/56aa1b5f4d8ac548755ae767d84f0be54fdd9d404197a3d9e4659d272348/setuptools-57.0.0-py3-none-any.whl (821kB)
Installing collected packages: PyYAML, setuptools, vcstool
Successfully installed PyYAML-5.4.1 setuptools-57.0.0 vcstool-0.2.15
Setting up ROS1 Workspace
$ git clone git@github.com:slamcore/slamcore-ros1-examples
Cloning into 'slamcore-ros1-examples'...
remote: Enumerating objects: 76, done.
remote: Counting objects: 100% (76/76), done.
remote: Compressing objects: 100% (55/55), done.
remote: Total 76 (delta 17), reused 74 (delta 15), pack-reused 0
Receiving objects: 100% (76/76), 3.15 MiB | 313.00 KiB/s, done.
Resolving deltas: 100% (17/17), done.

$ cd slamcore-ros1-examples
$ vcs import src < repos.yaml
...
=== src/follow_waypoints (git) ===
Cloning into '.'...
=== src/kobuki (git) ===
Cloning into '.'...

$ catkin_make

...

In order to communicate with the Kobuki, you also need to set up the appropriate udev rules.
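
One way to do this, assuming the kobuki_ftdi package installed above provides the create_udev_rules helper, is to run it and then re-plug the robot’s USB cable:

$ # Install the Kobuki udev rule (creates the /dev/kobuki symlink)
$ rosrun kobuki_ftdi create_udev_rules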

Run Visual-Inertial-Kinematic Calibration

To increase the overall accuracy of the pose estimation, we will fuse the wheel-odometry measurements from the robot’s encoders into our SLAM processing pipeline. This also makes our positioning robust to kidnapping issues (objects partially or totally blocking the camera field of view), since the algorithm can then rely on the odometry to maintain tracking.

To enable the wheel-odometry integration, follow the corresponding tutorial: Wheel Odometry Integration. After the aforementioned calibration step, you will receive a VIK configuration file similar to the one shown below:

VIK configuration file
{
  "Version": "1.0.0",
  "Base": {
    "KinematicsEnabled": true,
    "Synchroniser": {
      "KinematicsPoseCameraOffset": "42.0ms"
    }
  },
  "Position": {
    "Backend": {
      "Type": "VisualInertialKinematic"
    },
    "Odometry": {
      "EstimationScaleY": false,
      "ScaleTheta": 0.9364361585576231,
      "ScaleX": 1.0009030227774691,
      "ScaleY": 1.0,
      "SigmaCauchyKernel": 0.0445684,
      "SigmaTheta": 1.94824,
      "SigmaX": 0.0212807,
      "SigmaY": 0.00238471,
      "T_SO": {
        "R": [
          0.5062407414595297,
          0.49242403927156886,
          -0.4916330719846658,
          0.50944656222699
        ],
        "T": [
          0.015391235851986106,
          0.23577251157419954,
          -0.08736454907300172
        ]
      }
    }
  }
}
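
If you have received such a file, you can pass it to the wrapper through the config_file launch argument when recording a dataset, or via the CONFIG_FILE environment variable read by kobuki_live_navigation.launch later on. A minimal sketch, with placeholder paths:

$ # Placeholder paths - substitute the VIK configuration file you received
$ roslaunch slamcore_slam run_dataset_recorder.launch \
>   config_file:=$HOME/vik-config.json \
>   odom_reading_topic:=/odom \
>   output_dir:=$HOME/datasets/vik-dataset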

Record Dataset to Map the Environment

In order to autonomously navigate the environment, we first need to generate a map of it. To do that, we’ll be using the run_dataset_recorder.launch script to capture a dataset that contains visual-inertial, depth and kinematic information. We’ll also use the kobuki_live_teleop_joy.launch script to teleoperate the robot using a PS4 joystick.

Creating an initial dataset
$ # Launch teleoperation via the Joystick - enables the communication with the Kobuki by default
$ roslaunch slamcore_ros1_examples kobuki_live_teleop_joy.launch

$ # Alternatively, launch teleoperation via the keyboard
$ roslaunch slamcore_ros1_examples kobuki_live_teleop_key.launch

And on a separate terminal,

$ # Launch the dataset recording procedure
$ roslaunch slamcore_slam run_dataset_recorder.launch \
>   override_realsense_depth:=true \
>   realsense_depth_override_value:=true \
>   output_dir:=$HOME/mnt/nvme/20210505-dataset \
>   odom_reading_topic:=/odom

... logging to /home/slamcore/.ros/log/4e14f4e6-adcc-11eb-9e59-d8c0a6261b17/roslaunch-nikos-nx2-8368.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://nikos-nx2:40951/

SUMMARY
========

PARAMETERS
* /dataset_recorder/config_file:
* /dataset_recorder/enable_color: False
* /dataset_recorder/odom_reading_topic: /odom
* /dataset_recorder/output_dir: /home/slamcore/mn...
* /dataset_recorder/override_realsense_depth: True
* /dataset_recorder/realsense_depth_override_value: True
* /rosdistro: melodic
* /rosversion: 1.14.10

NODES
/
   dataset_recorder (slamcore_slam/dataset_recorder)

auto-starting new master
process[master]: started with pid [8378]
ROS_MASTER_URI=http://localhost:11311

setting /run_id to 4e14f4e6-adcc-11eb-9e59-d8c0a6261b17
process[rosout-1]: started with pid [8389]
started core service [/rosout]
process[dataset_recorder-2]: started with pid [8392]
[ INFO] [1620237858.769537582]: Parameter [config_file] not specified, defaulting to []
[ INFO] [1620237858.774477359]: Parameter [override_realsense_depth] set to [true]
[ INFO] [1620237858.776504214]: Parameter [realsense_depth_override_value] set to [true]
[ INFO] [1620237858.778771993]: Parameter [enable_color] set to [false]
[ INFO] [1620237858.780320134]: Parameter [output_dir] set to [/home/slamcore/mnt/nvme/20210505-dataset]
[ INFO] [1620237858.782198926]: Parameter [odom_reading_topic] set to [/odom]
WARNING: Logging before InitGoogleLogging() is written to STDERR
W20210505 21:04:18.784246  8392 ConfigFeeds.cpp:357] Auto-detecting your camera...
[ INFO] [1620237863.848342152]: Subscribing to odometry input topic: /odom
Invalid Value rs2_set_region_of_interest sensor:0x5590d44e00, min_x:0, min_y:0, max_x:848, max_y:480

Note that recording the kinematic measurements (by subscribing to a wheel-odometry topic) is not necessary, since we can generate a map using purely the visual information from the camera. Kinematics will, however, increase the overall accuracy if recorded and used.

When you have covered all the space that you want to map, press Ctrl-C (<C-c>) to stop the application.

You now have to process this dataset and generate the .session file. In our case, we compressed and copied the dataset to an x86_64 machine in order to accelerate the overall procedure.

$ tar cvfz 20210505-dataset.tgz 20210505-dataset/
$ rsync --progress -avt 20210505-dataset.tgz <ip-addr-of-x86_64-machine>:
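
On the x86_64 machine, extract the archive before processing it:

$ # On the x86_64 machine
$ tar xvfz 20210505-dataset.tgz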

Create Session and Map for Navigation

Once you have the (uncompressed) dataset on the machine where you want to do the processing, use the slamcore_visualiser to process the whole dataset and, at the end of it, save the resulting session.

$ # Launch slamcore_visualiser, enable mapping features - `-m`
$ slamcore_visualiser dataset \
>   -u 20210505-dataset/ \
>   -c /usr/share/slamcore/presets/mapping/default.json \
>   -m

Note

Refer to Step 2 - Prepare the mapping configuration file in case you want to tune the mapping configuration file in use or include the VIK configuration parameters.

_images/map-generation.png _images/map-generation2.png

Edit the Generated Session/Map

You can optionally use slamcore_session_explorer and an editing tool of your choice, e.g. GIMP, to create the final session and corresponding embedded map. See SLAMcore Session Explorer for more details. When done, copy the session file over to the machine that will be running SLAM, if it is not already there.

[ALTERNATIVE APPROACH] Generating the session interactively

Instead of first generating a dataset and then creating a session and map from that dataset as the previous sections have described, you could alternatively create a session at the end of a standard SLAM run. Compared to the approach described above, this has a few pros and cons worth mentioning (a rough command-line sketch follows the list below):

  • ✅ No need to record a dataset, or move it to another machine and run SLAM there

  • ✅ You can interactively see the map as it gets built and potentially focus on the areas that are under-mapped

  • ❌ Generating a session at the end of the run may take considerably longer if you are running on a Jetson NX compared to running on an x86_64 machine.

  • ❌ You can’t modify the configuration file and see its effects as you would when having separate dataset recording and mapping steps.

  • ❌ If something goes wrong in the pose estimation or mapping procedure, you don’t have the dataset to further investigate and potentially report the issue back to SLAMcore
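
As a rough sketch of this alternative, you would run live SLAM through the ROS1 wrapper and trigger a session save once the environment has been covered. The launch file arguments and the service name below are assumptions based on the standard wrapper interface; check the launch file's arguments and rosservice list on your system for the exact names.

$ # Run live SLAM on the robot (placeholder arguments, mirroring the recording step)
$ roslaunch slamcore_slam run_slam.launch \
>   config_file:=/usr/share/slamcore/presets/mapping/default.json \
>   odom_reading_topic:=/odom

$ # When the environment is covered, save the session (service name may differ)
$ rosservice call /slamcore/save_session "{}"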

Specify the Session and the Configuration File Paths

The final step before launching navigation is to indicate the paths to the session file and to the configuration file. Additionally, if integrating wheel odometry, you can define the wheel-odometry topic. The launch file for navigation, kobuki_live_navigation.launch, reads these parameters from the environment variables SESSION_FILE, CONFIG_FILE and ODOM_READING_TOPIC respectively.

  • The session file was generated and copied to the Jetson NX in the previous step.

  • For the configuration file, you can use one of the presets found at /usr/share/slamcore/presets/. Alternatively, if you are also integrating wheel-odometry information as shown in section Run Visual-Inertial-Kinematic Calibration, we will send you a new JSON configuration file with wheel-odometry in mind.

$ # Edit the ``nav-config.sh`` file created earlier
$ # see "Compute the ``slamcore/base_link`` ➞ ``base_footprint`` Transformation"
$ export SESSION_FILE="/path/to/session-file"
$ export CONFIG_FILE="/path/to/config-file"
$ export ODOM_READING_TOPIC="/odom"

$ # source nav-config.sh again for the changes to take effect
$ source nav-config.sh

Launch Live Navigation

With the previous pieces in place, we are now ready to start the autonomous live navigation.

  1. Launch kobuki_live_navigation.launch

  2. Launch a teleoperation node, in case you have to manually drive the robot around for a short while until it relocalises in the session of the previous run.

  3. Launch navigation_monitoring_slamcore.launch to visualise the process.

Note

This demo assumes that the visualisation and the processing (SLAM and navigation) happen on separate machines (a SLAM Machine and a Visualisation Machine). Refer to Remote Visualisation of the Navigation Demo on how to set up remote visualisation using rviz and view the navigation on your x86_64 machine. If you have an external monitor connected to your Jetson NX, you can instead run the “Visualisation Machine” commands along with the rest of the instructions on the Jetson NX itself.

$ roslaunch slamcore_ros1_examples kobuki_live_navigation.launch
... logging to /home/slamcore/.ros/log/187b968c-b1be-11eb-a1ef-d8c0a6261b17/roslaunch-nikos-nx2-28609.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://nikos-nx2:46867/

...

And on a separate terminal,

$ roslaunch slamcore_ros1_examples kobuki_live_teleop_key.launch kinematics:=false

SUMMARY
========

PARAMETERS
* /keyop/angular_vel_max: 6.6
* /keyop/angular_vel_step: 0.33
* /keyop/linear_vel_max: 1.5
* /keyop/linear_vel_step: 0.05
* /keyop/wait_for_connection_: True
* /rosdistro: melodic
* /rosversion: 1.14.10

NODES
/
   keyop (kobuki_keyop/keyop)

ROS_MASTER_URI=http://localhost:11311

process[keyop-1]: started with pid [25805]
[ INFO] [1620666607.587512385]: KeyOpCore : using linear  vel step [0.05].
[ INFO] [1620666607.591905406]: KeyOpCore : using linear  vel max  [1.5].
[ INFO] [1620666607.592041775]: KeyOpCore : using angular vel step [0.33].
[ INFO] [1620666607.592124614]: KeyOpCore : using angular vel max  [6.6].
[ WARN] [1620666607.603671407]: KeyOp: could not connect, trying again after 500ms...
[ INFO] [1620666608.104021379]: KeyOp: connected.
Reading from keyboard
---------------------------
Forward/back arrows : linear velocity incr/decr.
Right/left arrows : angular velocity incr/decr.
Spacebar : reset linear/angular velocities.
d : disable motors.
e : enable motors.
q : quit
$ roslaunch slamcore_ros1_examples navigation_monitoring_slamcore.launch

SUMMARY
========

PARAMETERS
* /rosdistro: melodic
* /rosversion: 1.14.10

NODES
/
   rqt (rqt_gui/rqt_gui)
   slam_visualiser (rviz/rviz)

ROS_MASTER_URI=http://nikos-nx2:11311

process[slam_visualiser-1]: started with pid [287]
QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-berger'
INFO | ${node} | /tmp/binarydeb/ros-melodic-rviz-1.13.17/src/rviz/visualizer_app.cpp:114 | 1620671623.758027686 | rviz version 1.13.17

...

Interact with the Navigation Demo

At first, we may need to teleoperate the robot manually until the SLAM algorithm relocalises. We can see that the relocalisation took place by looking at the rviz view, where the local and global costmaps start getting rendered, or by subscribing to the /slamcore/pose topic, where we start seeing incoming Pose messages.
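
For example, to watch for incoming pose messages from the command line:

$ # Pose messages appear on this topic once the robot has relocalised
$ rostopic echo /slamcore/pose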

This is how the rviz view will look after the robot has relocalised.

_images/navstack-rviz-view.png

Fig. 67 rviz view during navigation

Waypoint Navigation

Aside from individual navigation goals, which can be set using the 2D Nav Goal button of rviz or by publishing to the /move_base/goal topic, you can also issue a series of goals and have the navigation stack follow them one after another automatically. To do that, issue each of your intended goals by publishing a Pose message to the /initialpose topic, either via the command line or via the 2D Pose Estimate button of rviz.
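
As an example of issuing a waypoint from the command line, you can publish directly to /initialpose. The message type and coordinates below are assumptions: they follow the standard geometry_msgs/PoseWithCovarianceStamped interface used by rviz’s 2D Pose Estimate button, and the pose values are arbitrary placeholders.

$ # Hypothetical waypoint at (1.0, 0.5) in the map frame
$ rostopic pub --once /initialpose geometry_msgs/PoseWithCovarianceStamped \
>   '{header: {frame_id: "map"}, pose: {pose: {position: {x: 1.0, y: 0.5, z: 0.0}, orientation: {w: 1.0}}}}'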

_images/waypoints-rviz.png

Fig. 68 Providing waypoints to follow

When you have added all the waypoints that you want, call the /path_ready service so that the robot starts to follow them.

$ rosservice call /path_ready "{}"

If at any point you want to reset the list of given waypoints, call the /path_reset service.

$ rosservice call /path_reset "{}"

Finally, if you do not want the robot to keep following the given waypoints in a loop, disable the patrol_mode parameter.

$ rosrun dynamic_reconfigure dynparam set /follow_waypoints patrol_mode False

Note

You can also call these services or the dynamic reconfiguration from the RQT GUI launched as part of navigation_monitoring_slamcore.launch.

Appendix

Remote Visualisation of the Navigation Demo

You can visualise the navigation process using slamcore-ros1-examples/navigation_monitoring_slamcore.launch.

Since the live navigation is running on the Jetson NX platform, we want to remotely visualise it on our x86_64 machine. To do this, we can make use of the remote capabilities of ROS.

Instructions for such a network configuration are provided in the ROS1 Network Setup page and are summarised in the current section for completeness.

Make sure that your x86_64 machine and Jetson NX are on the same network and that you can ping each machine from the other by name.

# On the Jetson NX - hostname: nikos-nx2
# edit your /etc/hosts file and add an entry for your x86_64 machine

# Add an entry like the following:
192.168.50.130  draken

# make sure that pinging draken works
berger@nikos-nx2:~/ros_ws$ ping draken
PING draken (192.168.50.130) 56(84) bytes of data.
64 bytes from draken (192.168.50.130): icmp_seq=1 ttl=64 time=2.69 ms
64 bytes from draken (192.168.50.130): icmp_seq=2 ttl=64 time=41.2 ms
# On your x86_64 machine - hostname: draken
# edit your /etc/hosts file and add an entry for Jetson NX

# Add an entry like the following:
192.168.50.210 nikos-nx2

# make sure that pinging the Jetson NX works
berger@draken:~/ros_ws$ ping nikos-nx2
PING nikos-nx2 (192.168.50.210) 56(84) bytes of data.
64 bytes from nikos-nx2 (192.168.50.210): icmp_seq=1 ttl=64 time=3.58 ms
64 bytes from nikos-nx2 (192.168.50.210): icmp_seq=2 ttl=64 time=3.58 ms

Now, in order to share a common ROS1 master across these two computers, set the ROS_MASTER_URI environment variable on your x86_64 machine and run the ROS1 core, either directly or via a launch file, on the Jetson NX.

$ # On the Jetson NX - just launch roscore as usual.
$ roscore
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://nikos-nx2:37757/
ros_comm version 1.14.10


SUMMARY
========

PARAMETERS
* /rosdistro: melodic
* /rosversion: 1.14.10

NODES

auto-starting new master
process[master]: started with pid [23855]
ROS_MASTER_URI=http://nikos-nx2:11311/

setting /run_id to d9bc19b6-50fc-11eb-bda8-d8c0a6261b17
process[rosout-1]: started with pid [23866]
started core service [/rosout]

$ # Still on the Jetson NX, verify that the master is up
$ rostopic list
/rosout
/rosout_agg
$ # On the x86_64 machine - *without* running roscore locally. Before setting
$ # ROS_MASTER_URI, rostopic cannot reach a master:
$ rostopic list
ERROR: Unable to communicate with master!

$ export ROS_MASTER_URI=http://nikos-nx2:11311

$ # now the same command reaches the roscore running on the Jetson NX:
$ rostopic list
/rosout
/rosout_agg

At this point you should be able to launch the autonomous navigation stack on the Jetson NX and visualise the results using navigation_monitoring_slamcore.launch on your x86_64 machine.
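
If topics are visible but no data arrives (e.g. rviz on the x86_64 machine shows nothing), a common cause is that one machine advertises a hostname the other cannot resolve. In that case, setting ROS_HOSTNAME (or ROS_IP) explicitly on both machines, matching the /etc/hosts entries added above, usually resolves it:

$ # On the Jetson NX
$ export ROS_HOSTNAME=nikos-nx2

$ # On the x86_64 machine
$ export ROS_HOSTNAME=draken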

Obstacle Avoidance

We use the /slamcore/local_point_cloud topic published by our software as an input to the costmap2D obstacle_layer, to mark and clear obstacles in the local and global costmaps during navigation. You can find more details of the implementation in our obstacle_avoidance_pointcloud.yaml file, found here. You can modify the local point cloud for obstacle avoidance by defining the boundaries of the point cloud in your configuration file, as explained in Point Cloud Configuration. This can be useful to exclude ground points and prevent them from being marked as obstacles.

Note

ALTERNATIVE - Tweaking the local point cloud via the ROS costmap2d obstacle_layer min_obstacle_height parameter.

Instead of using our JSON LocalPointCloud parameters as explained in Point Cloud Configuration, you may want to use the ROS min_obstacle_height parameter as one of the observation-source parameters in the obstacle_avoidance_pointcloud.yaml file. This parameter allows you to set a height (measured from the map frame) above which points are considered valid. In this case, the points are not removed from the cloud but simply ignored, allowing you, for example, to ignore ground points and prevent them from being marked as obstacles. Setting min_obstacle_height to 0.02 would only consider points 2 cm above the map Z coordinate when marking obstacles.
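
To check which observation-source parameters move_base actually loaded, you can query the parameter server at runtime. The /move_base/local_costmap/obstacle_layer namespace below is an assumption; it depends on the node and layer names used in your costmap configuration.

$ # Namespace is an assumption - adjust to your costmap layer names
$ rosparam get /move_base/local_costmap/obstacle_layer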

Troubleshooting

My TF tree seems to be split in two main parts

As described in ROS1 Navigation Stack Setup, our ROS1 Wrapper publishes both the map ➞ odom and odom ➞ base_link transformations in the TF tree. Thus, to avoid conflicts when publishing the transforms (note that TF allows only a single parent frame for each frame), make sure that there is no other node publishing any of the aforementioned transformations. For example, it is common for the wheel-odometry node, in our case the kobuki_node, to also publish its wheel-odometry estimates in the transformation tree; you should disable this behaviour.
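
To check for duplicate parents or a split tree, you can inspect the live TF tree (assuming the rqt_tf_tree and tf packages are available):

$ # Interactive view of the TF tree
$ rosrun rqt_tf_tree rqt_tf_tree

$ # Or generate a snapshot, written to frames.pdf in the current directory
$ rosrun tf view_frames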

For reference, here is a simplified version of how the TF tree looks when executing the SLAMcore ROS1 Navigation demo:

_images/tf-tree.png

Fig. 69 Reference TF Tree during Navigation