Single Session SLAM Positioning Mode

In this mode, the system tracks and stores the locations of natural features to create a live point-cloud, which is used to calculate the real-time position of the robot. This is the most computationally intensive positioning mode because the system continually optimises the point-cloud to correct for any drift that may have accumulated (loop closure).

This positioning mode operates as a single session, so when the system is powered down, all data and history are lost. There is an option to save the point-cloud at the end of a session, which can then be used as your Master point-cloud in the Localisation using a pre-built point cloud positioning mode.

Step 1 - Create a Configuration File

You can change some of the core parameters of the system using a configuration file. Create a new file slam_mode.json in a convenient directory, e.g. ~/slamcore/custom/slam_mode.json.

Edit the file by running this in a terminal window:

$ gedit ~/slamcore/custom/slam_mode.json

Edit or paste the following text into the editor:

{
    "Version": "1.0.0",
    "Base":
    {
        "ProfileName": "slam_mode"
    },
    "Position":
    {
        "PositioningMode": "SLAM",
        "Frontend":
        {
            "NumKeypoints": 100
        }
    }
}

The ProfileName parameter can be set to any name that you wish to have displayed in the SLAMcore Visualiser while the configuration file is in use.

The positioning mode is controlled by the PositioningMode parameter, which can be set to SLAM or ODOMETRY_ONLY. For this step we will set it to SLAM, as shown above. Save the file.
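For comparison, a minimal sketch of a configuration file switched to odometry-only operation might look like the fragment below. The profile name odometry_mode is an arbitrary label chosen for illustration; only the PositioningMode value differs from the example above:

```json
{
    "Version": "1.0.0",
    "Base":
    {
        "ProfileName": "odometry_mode"
    },
    "Position":
    {
        "PositioningMode": "ODOMETRY_ONLY"
    }
}
```

In this variant the system estimates position from frame-to-frame motion only, without the loop-closure optimisation described above.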

You can also use the configuration file to set other parameters that affect the performance of the system. For example, we will use this configuration file to set the number of features that are detected in each frame and used to triangulate the position of the sensor. The number of features is controlled by the NumKeypoints parameter, which can be set between 10 and 500. The higher the number, the better the accuracy and robustness of the system, but the computational load also increases. Since real-time performance will likely be required in this mode, we recommend setting the number of features to 100.
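If you maintain several configuration files, it can be convenient to generate them from a script. The sketch below is a workflow suggestion, not part of the SLAMcore tools; it simply writes the JSON shown above and clamps NumKeypoints to the documented 10 to 500 range:

```python
import json
import os

def write_slam_config(path, profile_name="slam_mode", num_keypoints=100):
    """Write a SLAM-mode configuration file like the one shown above.

    NumKeypoints is clamped to the documented valid range of 10 to 500.
    """
    num_keypoints = max(10, min(500, num_keypoints))
    config = {
        "Version": "1.0.0",
        "Base": {"ProfileName": profile_name},
        "Position": {
            "PositioningMode": "SLAM",
            "Frontend": {"NumKeypoints": num_keypoints},
        },
    }
    # Create the target directory if it does not exist yet.
    directory = os.path.dirname(path)
    if directory:
        os.makedirs(directory, exist_ok=True)
    with open(path, "w") as f:
        json.dump(config, f, indent=4)
    return config

if __name__ == "__main__":
    write_slam_config(os.path.expanduser("~/slamcore/custom/slam_mode.json"))
```

The clamping step guards against typos that would put NumKeypoints outside the range the system accepts.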

Step 2 - Process a Dataset

If you wish to try this mode using a live sensor feed, you can skip this step. Otherwise, you can process any of the datasets that you recorded in the previous tutorials by typing the following in the terminal:

$ slamcore_visualiser dataset -u <path/to/dataset> -c ~/slamcore/custom/slam_mode.json

This will open the SLAMcore Visualiser tool:

_images/visualiser01.png

Fig. 33 SLAMcore Visualiser Opening Screen

Now click the Start button:

_images/visualiser_slam_buttons.png

The dataset will now be processed, and the live position estimate (6DoF axes) will be displayed along with the historical trajectory (yellow line). You will see the point-cloud map appear; it is calculated, optimised and displayed in real time. You may see this point-cloud adjust in size and shape as loop closures are triggered, but the yellow trajectory will not change, as it is a historical plot of the live position estimate.

_images/slam_pointcloud.png

Fig. 34 Point-cloud generation and position estimation with continual optimization of both

The current estimated position and velocities along the X, Y and Z axes, together with an estimate of the total distance travelled, are displayed as numbers in the top left-hand corner of the screen. When the dataset has been fully processed, you will see the following message:

_images/visualiser_end_of_dataset.png

Fig. 35 Message when dataset finished processing

Step 3 - Try the System in Live Mode

If no dataset is specified, the system defaults to processing data from the live sensor, provided you have a registered D435i plugged in. Launch the software by typing the following in the terminal:

$ slamcore_visualiser -c ~/slamcore/custom/slam_mode.json

This will open the SLAMcore Visualiser tool:

_images/visualiser01.png

Fig. 36 SLAMcore Visualiser Opening Screen

Now click the Start button:

_images/visualiser_slam_buttons.png
_images/slam_pointcloud.png

Fig. 37 Point-cloud generation and position estimation with continual optimization of both

The live sensor data will now be processed, and the sensor's position will be estimated in real time (6DoF axes) with the historical trajectory (yellow line) displayed. You will see the point-cloud map appear as you move the sensor; it is calculated, optimised and displayed in real time. You may see this point-cloud adjust in size and shape as loop closures are triggered, but the yellow trajectory will not change, as it is a historical plot of the live position estimate.

The current estimated position and velocities along the X, Y and Z axes, together with an estimate of the total distance travelled, are displayed as numbers in the top left-hand corner of the screen.

As in Step 2, the system boots into the main opening screen, but now when you click the Start button, the data from the D435i is processed in real time and the position, velocities and distance travelled are estimated and displayed as before.

The point-cloud will continue to grow as new areas of the scene are observed. When you observe areas that have already been mapped, the system optimises the existing point-cloud rather than adding further features to it. This means that once the entire scene has been observed and mapped, the point-cloud map will remain relatively static.