Capturing Datasets
Whilst it is possible to run all positioning modes in real-time with a live sensor feed, for detailed evaluation it is often easier and more efficient to pre-record your datasets and process them at a later date. This allows you to repeat your tests (using the recorded dataset) and evaluate the effects of changing key localisation/mapping parameters of the system.
The positioning software has been optimised for ground-based robots, but it will work with the sensor mounted on any mobile platform, including a drone or headset, or even simply held and moved by hand.
To fully assess all of the positioning modes, you should capture a minimum of two datasets in your test environment.
Master Dataset - This should be a single recording where the sensor is moved around the entire test environment.
Evaluation Dataset - This can be a number of shorter datasets that represent the sort of paths you expect to follow during live operation.
Note
If you do not have a supported camera, you can set up one of our supported
sample datasets to use in the rest of the tutorials. You can use the
slamcore-setup-dataset
Python script from the slamcore_utils package
to download and create a dataset in the SLAMcore dataset format.
Sample dataset layout after running slamcore-setup-dataset
$ tree -L 2 mav0/
mav0/
├── body.yaml
├── cam0
│ ├── data
│ ├── data.csv
│ └── sensor.yaml
├── cam1
│ ├── data
│ ├── data.csv
│ └── sensor.yaml
├── capture_info.json
├── imu0
│ ├── data.csv
│ └── sensor.yaml
├── leica0
│ ├── data.csv
│ └── sensor.yaml
└── state_groundtruth_estimate0
├── data.csv
└── sensor.yaml
7 directories, 12 files
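If you want to sanity-check that a downloaded or recorded dataset follows the layout shown above, a small shell function such as the one below can help. This is an illustrative sketch, not part of the SLAMcore tooling; the `check_dataset` function name is our own, and it only checks the sensor folders common to every dataset (cam0, cam1, imu0).

```shell
#!/bin/sh
# Sanity-check that a folder follows the dataset layout shown above.
# Illustrative only -- not part of the SLAMcore tooling.
check_dataset() {
    root="$1"
    # Every sensor folder carries a data.csv and a sensor.yaml.
    for sensor in cam0 cam1 imu0; do
        [ -f "$root/$sensor/data.csv" ]    || { echo "missing $sensor/data.csv"; return 1; }
        [ -f "$root/$sensor/sensor.yaml" ] || { echo "missing $sensor/sensor.yaml"; return 1; }
    done
    # The stereo cameras additionally store their frames under data/.
    for cam in cam0 cam1; do
        [ -d "$root/$cam/data" ] || { echo "missing $cam/data/"; return 1; }
    done
    echo "$root: layout OK"
}
```

For example, `check_dataset mav0` would print `mav0: layout OK` for the tree shown above, or name the first missing file otherwise.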
Step 1 - Capture a Master Dataset of the entire test space
The aim of this step is to record a dataset of your entire test environment, which will be used to create the master point-cloud for the “Localisation in pre-built point-cloud” positioning mode, also referred to as Localisation Mode.
Step 1.1 - Launch the Dataset Recorder
Type the following into the terminal:
$ slamcore_dataset_recorder --no-depth
The --no-depth flag turns off the infrared projector and depth stream of the
RealSense sensor. SLAMcore’s positioning software only requires the passive
stereo camera pair and the IMU, so these other sensor feeds are not needed
(and may reduce map accuracy if left on). We will make use of the IR
functionality later in the Mapping Software Tutorial (coming soon).
You should see the following:

Fig. 29 SLAMcore Dataset Recorder
Step 1.2 - Start the recorder
Start recording your dataset by clicking the Record
button in the top left:

Fig. 30 Button to start recording
Click on the folder icon. The tool will automatically create a
folder named with the current date in the location you specify here:

Fig. 31 Location where dataset will be saved
Step 1.3 - Move the sensor around the test environment twice
If your sensor is mounted on a robot, manually move/drive it around the test environment using a similar motion and speed to the way it will move during live operations. If the sensor is mounted on a wearable or is simply being held by hand, try to move with a smooth motion around the environment at walking speed. Move the sensor around the entire test environment, ensuring that the camera “sees” each area, preferably from a distance of less than two metres. For optimal results, travel the path twice during the same recording session.
Step 2 - Capture Evaluation Datasets
Once you have a full dataset of your test space, you may wish to record shorter test sequences within that space to evaluate the position estimation for different test cases using “Localisation in pre-built point-cloud” mode. You launch the recorder in the same way as before:
$ slamcore_dataset_recorder --no-depth
You can start and stop recording, and set the location where the dataset will be saved, in the same way as described earlier.
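Since the recorder creates a new date-named folder for each recording, a quick way to review the datasets you have captured is to list your save location, newest first, with the size of each recording. The snippet below is a generic shell sketch (the `list_datasets` helper name is our own, and it assumes dataset folder names contain no whitespace):

```shell
#!/bin/sh
# List dataset folders in a directory, newest first, with their sizes.
# Illustrative sketch; assumes folder names without whitespace.
list_datasets() {
    dir="$1"
    ls -1t "$dir" | while read -r d; do
        [ -d "$dir/$d" ] && printf '%s\t%s\n' "$(du -sh "$dir/$d" | cut -f1)" "$d"
    done
}
```

For example, `list_datasets "$HOME/datasets"` would print one size/name pair per recording, making it easy to spot the master dataset (typically the largest) among the shorter evaluation runs.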