This positioning mode operates on a previously created, offline point-cloud map of the area. When operating in this mode, the system matches the live view from the sensor against the offline map, providing an accurate, real-time position in the offline map's reference frame. This mode provides the most accurate, robust and computationally efficient performance. New landmarks are still discovered and used for tracking in this mode, but they will not be added to the existing map or saved at the end of the run. It is also possible to share this offline point-cloud with multiple robots so that they all calculate their positions simultaneously, in the same spatial reference frame. This is described in the final tutorial of this series: Multi-Agent Localisation.
For this tutorial you will first use the master dataset captured in the Capturing Datasets tutorial to create a point-cloud map of your entire test space. This will provide the global coordinate frame. You then have the choice of running the system live or using one of the Evaluation Datasets you captured previously. The system will attempt to match each frame from the cameras to a position within the offline point-cloud and provide you with the sensor's global position within the offline point-cloud coordinate frame.
Step 1 - Create a Master Configuration File
You can change some of Slamcore's default system parameters using a configuration file. Download localisation_mode_master.json to a convenient directory, e.g. ~/slamcore/custom/.
The ProfileName parameter can be set to any name that you wish to be displayed in the Slamcore Visualiser while the configuration file is in use.
The positioning mode is controlled by the PositioningMode parameter; one possible value is ODOMETRY_ONLY. For this tutorial, use the mode already set in the downloaded localisation_mode_master.json.
You can also refer to Configuration Overview to set other parameters that affect the performance of the system. For example, in this tutorial we set the number of features detected in each frame, which are used to triangulate the position of the sensor. This is controlled by the NumKeypoints parameter: the higher the number, the more features the system can track, but the computational cost also increases. As this step does not need to run in real time, you can choose a higher number; we recommend a value above 300.
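Putting the parameters above together, a master configuration file could look something like the following sketch. The JSON layout and the PositioningMode value shown here are assumptions for illustration; the downloaded localisation_mode_master.json is authoritative for the exact structure and values:

```json
{
    "ProfileName": "LocalisationTutorialMaster",
    "PositioningMode": "SLAM",
    "NumKeypoints": 400
}
```

Here NumKeypoints is set above 300, as recommended for offline map creation.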
Step 2 - Create the Offline Master Point-cloud
Step 2.1 - Launch Slamcore Visualiser
Launch the slamcore_visualiser tool using the master dataset of the test environment that you captured:
$ slamcore_visualiser dataset -u <path/to/dataset> -c ~/slamcore/custom/localisation_mode_master.json
This will open the Slamcore Visualiser tool:
Step 2.2 - Generate the Offline Master Point-cloud
Click the Start button to start processing the dataset.
The tool will now process the dataset to create a point-cloud of the space.
When the dataset has been fully processed, you will see the following message:
Step 2.3 - Save the Point-cloud
Now save the point-cloud by clicking the Save button.
You will be asked where you want to store this file.
The point-cloud will be saved as a single .session file. Saving may take anywhere from a few minutes to an hour, depending on how large the environment is and how many points have been mapped.
Step 3 - Create an Evaluation Configuration File
Now that you have a fixed point-cloud map of your space, you can start to evaluate how well the system localises the robot as it moves within it.
You will need to decide how many features you wish to track per frame during
the evaluation, and change the configuration file accordingly. This does not
need to be the same as the settings you used to create the original point-cloud.
Here real-time operation may be important, so we recommend setting NumKeypoints to 150, which is also our software's default setting.
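For reference, an evaluation configuration file along these lines might be used. As before, the JSON layout and the PositioningMode value are assumptions for illustration; the localisation_mode_eval.json file referenced in the commands below is authoritative:

```json
{
    "ProfileName": "LocalisationTutorialEval",
    "PositioningMode": "MULTISESSION_LOCALISATION",
    "NumKeypoints": 150
}
```

NumKeypoints is lowered to 150 here, trading some tracking robustness for the real-time performance needed during evaluation.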
Step 4 - Evaluation: Localise within the Point-cloud
We will use the Evaluation Datasets you captured previously to localise within the point-cloud generated in the previous step. The system can process any dataset captured within the same environment and will calculate the robot's trajectory in the reference frame of the pre-built point-cloud. If you wish to run with a live sensor feed instead, skip ahead to Step 5.
Step 4.1 - Load the Evaluation Datasets
In the terminal window run:
$ slamcore_visualiser dataset -u <path/to/evaluation/dataset> -c ~/slamcore/custom/localisation_mode_eval.json
Note that you may skip Step 4.2 if you load the point-cloud in the command line with:
$ slamcore_visualiser dataset -u <path/to/evaluation/dataset> -l <path/to/session/file> -c ~/slamcore/custom/localisation_mode_eval.json
This will launch the Slamcore Visualiser tool.
Step 4.2 - Load the Point-cloud to Run the Dataset Within
Before clicking start, you will need to load the point-cloud you generated in the previous step. Click the Load button:
Step 4.3 - Process the Evaluation Dataset
Click the Start button to start processing the dataset:
It may take a few seconds for the system to recognise its location. When this happens you will see the 6DoF pose estimation marker update in real-time within the point-cloud.
The current estimated 6DoF position, along with an estimate of the total distance travelled, is displayed in the top left-hand corner of the screen.
Step 5 - Evaluate the System with a Live Sensor Feed
If no dataset is specified when the software is launched, the system will default to processing the live feed from the sensor, as long as you have a registered RealSense camera plugged in.
Step 5.1 - Launch the Positioning Tool
To launch the positioning software, run the following in the terminal:
$ slamcore_visualiser -c ~/slamcore/custom/localisation_mode_eval.json
Note that you may skip Step 5.2 if you load the point-cloud in the command line with:
$ slamcore_visualiser -l <path/to/session/file> -c ~/slamcore/custom/localisation_mode_eval.json
This will launch the main positioning tool.
Step 5.2 - Load the Point-cloud to Use for Live Localisation
Before clicking start, you will need to load the saved point-cloud you wish to use to localise the live sensor feed. Click the Load button and select the session file you wish to use.
Step 5.3 - Run the System Live
Click the Start button to start processing the live sensor feed:
As before, it may take a few seconds for the system to recognise its location. When this happens you will see the 6DoF pose estimation marker update in real-time. If the system is not able to localise, the 6DoF axis will remain stationary even when you move the sensor. This can normally be fixed by moving the sensor in a figure of eight.
The system is very tolerant to both lighting and structural change. If there is an extreme difference in the conditions from when the original dataset was captured, then the localisation mode may struggle to estimate a position. To resolve this you will need to capture a new master dataset of the full environment, generate a point-cloud from this dataset and save this new point-cloud.