Checkpoint 2

Last Updated: Nov 19, 2025

During the SLAM part of the lab, you will build an increasingly sophisticated algorithm for mapping and localizing in the environment. You will begin by constructing an occupancy grid using known poses. Following that, you’ll implement Monte Carlo Localization in a known map. Finally, you will put each of these pieces together to create a full SLAM system.


Design Lab

We’ve released the Design Lab Guide!

You need to design and build a forklift mechanism. The design file is due with your Checkpoint 2 submission, but it doesn’t need to be final; a prototype version is perfectly fine. The goal is to help you stay on track.

Designing, prototyping, and testing should continue alongside the main checkpoints until the final competition.

Task 2.1 Mapping

In this task, you will implement odometry-based occupancy grid mapping.

The demo video below shows the workflow after completing all TODOs in the template code. It demonstrates how to map and how to view your map in RViz or Foxglove Studio.

TODO

  1. Pull the latest code from mbot_ros_labs upstream to get the Checkpoint 2 template.
  2. Install Foxglove Bridge. In this task we introduce Foxglove Studio, a web-based visualization tool, as an alternative to NoMachine.
     sudo apt install ros-$ROS_DISTRO-foxglove-bridge
    
  3. All work for this task is in the package mbot_mapping.
    • Start with mapping_node.cpp, search for TODOs.
    • TODOs in each file are numbered in order. The order is how you should work within that file, not the overall task order across the project.
  4. When finished, compile your code:
     cd ~/mbot_ros_labs
     colcon build --packages-select mbot_mapping
     source install/setup.bash
    

Explanation of TODOs

This task has three major components:

  1. Lidar ray interpolation
    • A laser scan isn’t instantaneous; it takes time to complete. If the robot moves during the scan:
      • The first beams are measured at start_pose.
      • The last beams are measured at end_pose.
    • We solve this in the function interpolateRay(), which estimates the robot’s pose for each beam.
  2. Bresenham’s algorithm
    • After interpolating all rays, use Bresenham’s algorithm to determine which cells in the occupancy grid each ray passes through (see the sketch after this list).
  3. Update map cells
    • For each beam, update the corresponding cells using log-odds to reflect occupancy probabilities.
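
To make steps 2 and 3 above concrete, here is a minimal sketch of Bresenham ray tracing followed by a log-odds update. This is not the template’s API: the Cell struct, bresenham(), updateRay(), kHitOdds, and kMissOdds are placeholder names (only kMaxLogOdds and kMinLogOdds echo constants mentioned later for mapping.hpp), and the numeric values are illustrative.

// Sketch only: trace the cells a ray crosses with Bresenham's algorithm,
// then nudge their log-odds. All names here are placeholders.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <vector>

struct Cell { int x; int y; };

// All grid cells on the line from (x0, y0) to (x1, y1), endpoints included.
std::vector<Cell> bresenham(int x0, int y0, int x1, int y1) {
    std::vector<Cell> cells;
    int dx = std::abs(x1 - x0), sx = (x0 < x1) ? 1 : -1;
    int dy = -std::abs(y1 - y0), sy = (y0 < y1) ? 1 : -1;
    int err = dx + dy;
    int x = x0, y = y0;
    while (true) {
        cells.push_back({x, y});
        if (x == x1 && y == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x += sx; }
        if (e2 <= dx) { err += dx; y += sy; }
    }
    return cells;
}

// One ray: cells along the beam become more likely free, the endpoint cell
// more likely occupied (only do the endpoint update if the beam actually hit
// something, i.e. the range is below the lidar's maximum). Clamping keeps a
// cell from drifting arbitrarily far toward certainty.
constexpr int kHitOdds = 3, kMissOdds = 1;            // placeholder increments
constexpr int kMaxLogOdds = 127, kMinLogOdds = -127;

void updateRay(std::vector<std::vector<int8_t>>& logodds,
               int x0, int y0, int x1, int y1, bool hit) {
    const auto cells = bresenham(x0, y0, x1, y1);
    for (std::size_t i = 0; i + 1 < cells.size(); ++i) {       // free space
        int8_t& v = logodds[cells[i].y][cells[i].x];
        v = static_cast<int8_t>(std::max(kMinLogOdds, v - kMissOdds));
    }
    if (hit) {                                                  // occupied endpoint
        int8_t& v = logodds[y1][x1];
        v = static_cast<int8_t>(std::min(kMaxLogOdds, v + kHitOdds));
    }
}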

How to test?

  1. Run the following in VSCode Terminal #1:
     ros2 launch mbot_bringup mbot_bringup.launch.py
    
  2. Run the following in VSCode Terminal #2 to start the mapping node:
     ros2 launch mbot_mapping mapping.launch.py
    
  3. To move the robot around the maze with teleop control, run the following in VSCode Terminal #3:
     ros2 run teleop_twist_keyboard teleop_twist_keyboard
    
  4. Visualize the Progress:
    • Option 1: In RViz (via NoMachine):
        rviz2
      
    • Option 2: In Foxglove Studio:
        # Start the ROS2-Foxglove bridge:
        ros2 launch foxglove_bridge foxglove_bridge_launch.xml
      
      • You can now visualize topics in Foxglove Studio.
  5. After mapping the whole area, do not stop the mapping node yet. Save the map in VSCode Terminal #4:
     cd ~/mbot_ros_labs/src/mbot_mapping/maps
     ros2 run nav2_map_server map_saver_cli -f map_name
    
    • Replace map_name with the name you want for your map.
    • After saving, you can stop the mapping node.
  6. To view the map:
    • Option 1: Use a local viewer. Install a VSCode extension like PGM Viewer to open your map file.
    • Option 2: Publish the map in ROS2
        cd ~/mbot_ros_labs
        # we need to re-compile so the launch file can find the newly saved map
        colcon build --packages-select mbot_mapping
        source install/setup.bash
        ros2 launch mbot_mapping view_map.launch.py map_name:=map_name
      
      • This will publish the map to the /map topic.
      • You can now view it in RViz (via NoMachine) or Foxglove Studio.

Expected Result:

  • If your map looks similar to the image below, that’s a good result; small errors are expected.

Demo Video

Include a screenshot of your map. Analyze the map quality and explain why it is not ideal.

Task 2.2 Localization

Monte Carlo Localization (MCL) is a particle-filter-based localization algorithm. To implement MCL, you need three key components:

  • Action model: predicts the robot’s next pose.
  • Sensor model: calculates the likelihood of a pose given a sensor measurement.
  • Particle filter functions: draw samples, normalize particle weights, and compute the weighted mean pose (a sketch of the weighted mean step follows this list).
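
A detail worth noting for the third component: the weighted mean of the heading can’t be a plain average because angles wrap around; a common fix is to average the headings as unit vectors. Below is a minimal sketch with a placeholder Particle struct (not the template’s types), assuming the weights have already been normalized to sum to 1.

// Sketch of the "weighted mean pose" step. Placeholder types only.
#include <cmath>
#include <vector>

struct Particle { double x, y, theta, weight; };
struct Pose { double x, y, theta; };

Pose weightedMeanPose(const std::vector<Particle>& particles) {
    double x = 0.0, y = 0.0, sin_sum = 0.0, cos_sum = 0.0;
    for (const auto& p : particles) {
        x += p.weight * p.x;
        y += p.weight * p.y;
        // Average headings on the unit circle so that e.g. +179 deg and
        // -179 deg average to 180 deg, not 0 deg.
        sin_sum += p.weight * std::sin(p.theta);
        cos_sum += p.weight * std::cos(p.theta);
    }
    return {x, y, std::atan2(sin_sum, cos_sum)};
}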

The demo video below shows the workflow after completing all TODOs in the template code. It demonstrates how to play the ROS bag and visualize localization progress in RViz or Foxglove Studio.

TODO

  1. All work for this task is in the package mbot_localization.
    • Start with localization_node.cpp, search for TODOs.
    • TODOs in each file are numbered in order. The order is how you should work within that file, not the overall task order across the project.
  2. When finished, compile your code:
     cd ~/mbot_ros_labs
     colcon build --packages-select mbot_localization
     source install/setup.bash
    

Explanation of TODOs

Starting from localization_node.cpp, you have 2 main TODOs:

  1. Construct the obstacle distance grid. Pre-compute how far each cell is from the nearest obstacle. This will be used in the sensor model.
  2. Complete the particle filter (more complex)
    1. Resample particles – select particles based on their weights (see the sketch after this list).
    2. Apply the action model – move the particles according to odometry, adding noise.
    3. Apply the sensor model – update each particle’s weight based on how well its predicted sensor readings match the actual laser scan. Here we can use the pre-computed obstacle distance grid to calculate these likelihoods.
    4. Estimate the new pose – compute the weighted mean of the particles based on the posterior distribution.
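
For step 1, a common choice is the low-variance (systematic) resampler from Probabilistic Robotics. The sketch below uses a placeholder Particle type, not the template’s, and assumes the weights are normalized to sum to 1.

// Sketch of low-variance resampling: draw one random offset r, then pick
// particles at the evenly spaced points r, r + 1/N, r + 2/N, ...
#include <cstddef>
#include <random>
#include <vector>

struct Particle { double x, y, theta, weight; };

std::vector<Particle> lowVarianceResample(const std::vector<Particle>& in,
                                          std::mt19937& rng) {
    const std::size_t n = in.size();
    std::vector<Particle> out;
    if (n == 0) return out;
    out.reserve(n);

    std::uniform_real_distribution<double> offset(0.0, 1.0 / n);
    double r = offset(rng);
    double c = in[0].weight;          // running cumulative weight
    std::size_t i = 0;

    for (std::size_t m = 0; m < n; ++m) {
        const double u = r + static_cast<double>(m) / n;
        while (u > c && i + 1 < n) {
            ++i;
            c += in[i].weight;
        }
        Particle p = in[i];
        p.weight = 1.0 / n;           // equal weights after resampling
        out.push_back(p);
    }
    return out;
}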

How to test?

  1. Run RViz on NoMachine using provided rviz config file:
     cd ~/mbot_ros_labs/src/mbot_localization/rviz
     ros2 run rviz2 rviz2 -d localization.rviz
    
  2. Run the localization node in VSCode Terminal #1:
     ros2 run mbot_localization localization_node
    
  3. Play the ROS bag in VSCode Terminal #2:
     cd ~/mbot_ros_labs/src/mbot_rosbags/maze1
     ros2 bag play maze1.mcap
    
    • We provide a rosbag because this task focuses on localization only, and we need a good map for that. Mapping the maze every time before testing your localization code would be a pain, and you probably can’t reuse your map since the maze changes when other people’s MBots bump into the walls. Using the bag saves time and lets you work anywhere.
    • This ROS bag includes the /cmd_vel topic. Unplug the Type-C cable from the Pi to the Pico to prevent the robot from driving away.
    • All the data required for this checkpoint comes from the ROS bag; you don’t have to manually publish /initialpose, and you don’t need to run ros2 launch mbot_bringup mbot_bringup.launch.py.
     # use the following command to check what is in the bag
     $ cd ~/mbot_ros_labs/src/mbot_rosbags
     $ ros2 bag info maze1
     ---
     Files:             maze1.mcap
     Bag size:          12.8 MiB
     Storage id:        mcap
     ROS Distro:        jazzy
     Duration:          86.383675802s
     Start:             Nov  3 2025 12:30:46.701887210 (1762191046.701887210)
     End:               Nov  3 2025 12:32:13.085563012 (1762191133.085563012)
     Messages:          16171
     Topic information: Topic: /amcl_pose | Type: geometry_msgs/msg/PoseWithCovarianceStamped | Count: 259 | Serialization Format: cdr
                     Topic: /cmd_vel | Type: geometry_msgs/msg/Twist | Count: 1251 | Serialization Format: cdr
                     Topic: /imu | Type: sensor_msgs/msg/Imu | Count: 8080 | Serialization Format: cdr
                     Topic: /initialpose | Type: geometry_msgs/msg/PoseWithCovarianceStamped | Count: 1 | Serialization Format: cdr
                     Topic: /map | Type: nav_msgs/msg/OccupancyGrid | Count: 1 | Serialization Format: cdr
                     Topic: /odom | Type: nav_msgs/msg/Odometry | Count: 2020 | Serialization Format: cdr
                     Topic: /scan | Type: sensor_msgs/msg/LaserScan | Count: 898 | Serialization Format: cdr
                     Topic: /tf | Type: tf2_msgs/msg/TFMessage | Count: 3660 | Serialization Format: cdr
                     Topic: /tf_static | Type: tf2_msgs/msg/TFMessage | Count: 1 | Serialization Format: cdr
     Service:           0
     Service information: 
    
    • /amcl_pose serves as the reference pose, the “ground truth” you can use to compare against your estimated pose.

Demo Video

1) Report in a table the time it takes to update the particle filter for 100, 500, and 1000 particles. Estimate the maximum number of particles your filter can support while running at 10 Hz.
2) Include a screenshot comparing your estimated pose path with the reference pose path.

Task 2.3 Simultaneous Localization and Mapping (SLAM)

You have now implemented mapping with known poses and localization in a known map. Combining the two gives the following simple SLAM algorithm:

  • Use the first laser scan received to construct an initial map.
  • For subsequent laser scans:
    • Localize using the map from the previous iteration.
    • Update the map using the pose estimate from your localization.
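
In code, that loop has roughly the shape sketched below. Every type and name here (Scan, Pose, Mapper, ParticleFilter, runSlam) is a placeholder standing in for your Task 2.1 and Task 2.2 code, not the actual classes in mbot_slam.

// Rough shape of the simple SLAM loop; placeholders only.
#include <cstddef>
#include <vector>

struct Scan {};
struct Pose { double x = 0.0, y = 0.0, theta = 0.0; };
struct OccupancyGrid {};

struct Mapper {              // stands in for your Task 2.1 mapping code
    void updateMap(const Scan&, const Pose&, OccupancyGrid&) {}
};
struct ParticleFilter {      // stands in for your Task 2.2 localization code
    Pose update(const Scan&, const Pose& odom, const OccupancyGrid&) { return odom; }
};

void runSlam(const std::vector<Scan>& scans, const std::vector<Pose>& odom) {
    if (scans.empty()) return;
    OccupancyGrid grid;
    Mapper mapper;
    ParticleFilter pf;

    // First scan: build the initial map at the (assumed known) starting pose.
    mapper.updateMap(scans[0], Pose{}, grid);

    for (std::size_t i = 1; i < scans.size(); ++i) {
        // Localize against the map built from all previous scans...
        Pose est = pf.update(scans[i], odom[i], grid);
        // ...then extend the map with that pose estimate.
        mapper.updateMap(scans[i], est, grid);
    }
}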

TODO

  1. Pull the latest code from mbot_ros_labs upstream to get the Task 2.3 template.
  2. All work for this task is in the package mbot_slam.
    • Start with slam_node.cpp, search for TODOs. Most TODOs come from Task 2.1 and Task 2.2, except lidar interpolate TODO#6 in slam_node.cpp.
    • You can simply copy and paste your code from those tasks, tune the parameters, and achieve a good working SLAM system. You don’t need to follow the TODOs strictly; feel free to implement them in your own preferred way.
  3. When finished, compile your code:
     cd ~/mbot_ros_labs
     colcon build --packages-select mbot_slam
     source install/setup.bash
    

Notice: The LiDAR is physically mounted backward, resulting in a 3.14 rad yaw offset between the base_footprint frame and the lidar frame. When projecting LiDAR data into the world frame, be sure to account for this offset.

  • This offset is defined in the URDF file located at: ~/mbot_ws/src/mbot_description/urdf
  • However, the TF tree is structured as follows: map → odom → base_footprint → base_link → lidar_link. When you query the transform odom → base_footprint, the result does not include the base_footprint → lidar_link relationship, so you have to add the offset manually. When you query odom → lidar_link, the yaw offset is already included, so no manual adjustment is needed (see the sketch below).
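
Below is a hedged sketch of the two options. It assumes a tf2_ros::Buffer (as in the lookupTransform snippet in the Nov. 11 update below) and the frame names from the TF tree above; exception handling is omitted, and yawOf() and lidarYawInOdom() are made-up helper names.

// Sketch: two ways to get the lidar yaw in the odom frame.
#include <cmath>
#include <geometry_msgs/msg/quaternion.hpp>
#include <geometry_msgs/msg/transform_stamped.hpp>
#include <sensor_msgs/msg/laser_scan.hpp>
#include <tf2_ros/buffer.h>

// Yaw from a quaternion message (standard conversion formula).
double yawOf(const geometry_msgs::msg::Quaternion& q) {
    return std::atan2(2.0 * (q.w * q.z + q.x * q.y),
                      1.0 - 2.0 * (q.y * q.y + q.z * q.z));
}

double lidarYawInOdom(tf2_ros::Buffer& tf_buffer,
                      const sensor_msgs::msg::LaserScan& scan) {
    // Option A: query odom -> base_footprint, then add the pi offset manually.
    auto tf_base = tf_buffer.lookupTransform("odom", "base_footprint",
                                             scan.header.stamp);
    double yaw_a = yawOf(tf_base.transform.rotation) + M_PI;

    // Option B: query odom -> lidar_link; the URDF offset is already applied.
    auto tf_lidar = tf_buffer.lookupTransform("odom", "lidar_link",
                                              scan.header.stamp);
    double yaw_b = yawOf(tf_lidar.transform.rotation);

    (void)yaw_a;  // the two should agree up to wrapping into [-pi, pi]
    return yaw_b;
}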

Nov. 11 update - There is a bug in the mbot_slam template code. Please pull the latest, and see this commit in mbot_ros_labs. The issue is related to LiDAR scan interpolation. This fix should noticeably improve your SLAM performance if you’ve been struggling with mapping quality.

What was wrong:

According to the message definition of sensor_msgs/LaserScan Message:

Header header            # timestamp in the header is the acquisition time of 
                         # the first ray in the scan.

This means the odometry queried by the following line corresponds to the robot’s pose when the first LiDAR ray was fired:

geometry_msgs::msg::TransformStamped tf_odom_base = tf_buffer_->lookupTransform("odom", "base_footprint", scan->header.stamp);

However, in the template code before this fix, we had the following line, which is incorrect:

MovingLaserScan interpolated_scan(*scan, previous_est_pose_, est_pose);

If you want to interpolate the LiDAR rays, please check TODO #6 in slam_node.cpp:

  • The start pose should be est_pose.
  • The end pose should be computed based on the scan duration and the robot’s motion (a sketch appears at the end of this update).

If you do not wish to interpolate the LiDAR rays, you can simply use the same pose for both start and end, as shown in the updated template code.

  • When the robot moves slowly, the difference will be small. However, when it moves quickly, not interpolating will noticeably degrade map quality.
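
For those who do want to interpolate, one hedged way to get the end pose is to propagate the start pose through the scan duration using the robot’s current planar velocity. The Pose struct, the function name, and the velocity inputs (vx, wz) below are placeholders; the template may organize this differently.

// Sketch: end-of-scan pose from the start pose, the scan duration, and the
// robot's planar velocity (vx: forward m/s, wz: yaw rad/s). Placeholder names.
#include <cmath>
#include <sensor_msgs/msg/laser_scan.hpp>

struct Pose { double x, y, theta; };

Pose endOfScanPose(const Pose& start, const sensor_msgs::msg::LaserScan& scan,
                   double vx, double wz) {
    // Approximate the scan duration: for a continuously spinning lidar,
    // scan_time (time between scans) is close to the duration of one sweep;
    // otherwise fall back to time_increment * number of beams.
    double dt = (scan.scan_time > 0.0f)
                    ? static_cast<double>(scan.scan_time)
                    : static_cast<double>(scan.time_increment) * scan.ranges.size();

    Pose end = start;                       // simple constant-velocity model
    end.x     = start.x + vx * dt * std::cos(start.theta);
    end.y     = start.y + vx * dt * std::sin(start.theta);
    end.theta = start.theta + wz * dt;
    return end;
}

An end pose computed this way, together with est_pose as the start pose, can then be passed to the MovingLaserScan constructor instead of using the same pose twice.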

Nov. 13 Update - important hints:

If your mapping and localization are working well, but your full SLAM system is struggling, the problem is often in parameter tuning. Here are some hints to guide your tuning:

  1. Action model action_model.hpp
    • Observe your particle cloud: What is its shape? Is it a tight circle, or an elongated oval following the robot’s path? If k2_ or k1_ is too high, your particle cloud might be a “long oval”.
    • We know the robot’s odometry isn’t perfect, but it’s also not completely wrong. Is a large noise value like 1.0 necessary, or would a much smaller value like 0.001 be more appropriate? Think about how much you trust the odometry.
  2. Sensor model sensor_model.hpp
    • sigma_hit_ is in meters. If your map resolution is 0.05m (5cm) and your sigma_hit_ is also 0.05m, what does that imply? It means a lidar ray landing one cell away from the obstacle in the map is already at one standard deviation of error.
    • A smaller sigma_hit_ makes your sensor model stricter, demanding scans align almost perfectly. A larger value is more forgiving. Given the map resolution, experiment with values larger or smaller than 0.05m to see how it affects particle weights (see the sketch after this list).
  3. Mapping mapping.hpp
    • Do you see cells “flickering” between free and occupied? This can happen if a few bad scans erase a well-defined wall. Tune the log-odds limits: consider tuning kMaxLogOdds and kMinLogOdds to make it harder for a cell’s state to be “flipped” after it’s been confidently marked as occupied or free.
  4. Map resolution slam_node.cpp
    • High-resolution maps help localization. Your Task 2.2 may have worked well because the provided map was “crisp” (e.g., 0.01m resolution). If you are using a coarser resolution like 0.05m, try decreasing the cell size (e.g., to 0.03m or 0.02m) where the grid is constructed: grid_(10.0, 10.0, 0.02, -5.0, -5.0).
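
To make hint 2 concrete, here is a minimal likelihood-field-style scoring function: each ray endpoint, already projected into the map frame, is scored with a Gaussian on its distance to the nearest obstacle, using the obstacle distance grid from Task 2.2. The Point type, function name, and distance-lookup callback are placeholders, not the template’s API.

// Sketch of a likelihood-field style sensor model (hint 2 above).
#include <cmath>
#include <functional>
#include <vector>

struct Point { double x, y; };

double scanLogLikelihood(
        const std::vector<Point>& endpoints,                        // ray endpoints in map frame
        double sigma_hit,                                           // meters
        const std::function<double(const Point&)>& distToObstacle)  // from the distance grid
{
    double log_likelihood = 0.0;
    for (const auto& p : endpoints) {
        const double d = distToObstacle(p);
        // Gaussian score: a small sigma_hit punishes misalignment harshly,
        // a large one is more forgiving.
        log_likelihood += -(d * d) / (2.0 * sigma_hit * sigma_hit);
    }
    // Working in log space avoids underflow when many rays are combined;
    // exponentiate (or normalize across particles) when assigning weights.
    return log_likelihood;
}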

Here are examples of “OK,” “Good,” and “Excellent” maps generated from the slam_test bag. These are related to how competition scoring will be evaluated.


How to test?

  1. Run the following in VSCode Terminal #1:
     ros2 launch mbot_bringup mbot_bringup.launch.py
    
  2. Put the robot in the maze and start the SLAM Node in VSCode Terminal #2:
     ros2 run mbot_slam slam_node
    
  3. Start RViz on NoMachine:
     cd ~/mbot_ros_labs/src/mbot_slam/rviz
     ros2 run rviz2 rviz2 -d slam.rviz
    
  4. To map the lab maze, drive the robot manually using teleop in VSCode Terminal #3:
     ros2 run teleop_twist_keyboard teleop_twist_keyboard
    
    • Alternatively, you can work from anywhere using the provided rosbag:
       cd ~/mbot_ros_labs/src/mbot_rosbags
       ros2 bag play slam_test
      
      • This ROS bag includes the /cmd_vel topic. Unplug the Type-C cable from the Pi to the Pico before playing the ROS bag.
  5. After mapping the whole area, do not stop the SLAM node yet. Save the map in VSCode Terminal #4:
     cd ~/mbot_ros_labs/src/mbot_slam/maps
     ros2 run nav2_map_server map_saver_cli -f map_name
    
    • Replace map_name with the name you want for your map.

1) Create a block diagram showing how the SLAM system components interact.
2) Include a screenshot of your map, and explain why your SLAM-generated map is better than the one from Task 2.1.

Checkpoint Submission


Demonstrate your SLAM system by mapping the maze used in Checkpoint 1. You may either manually drive the robot using teleop or use the motion controller from Checkpoint 1.

  1. Submit a screenshot of the generated map.
  2. Submit the map file itself.
  3. Submit a short description of your SLAM system, including how it works and any key observations.
  4. Submit your forklift design prototype; it can be in any form, such as a simple sketch, a CAD file, or another representation.