Checkpoint 3
Last Updated: Nov 19, 2025
Nov. 18 Update:
Please pull from the mbot_ros_labs upstream to get the latest commit to mbot_nav. We added a motion controller to mbot_nav to resolve issues caused by the previous custom message definitions.
- Task 3.1 and Task 3.2 instructions have also been updated accordingly.
Nov. 19 Update:
We’ve released a guide on how to use slam_toolbox! If you’re not satisfied with your mapping performance and need a map to test your A* or exploration algorithms, feel free to take a look.
You may also use slam_toolbox for mapping in Competition Event 2 and Event 3 (with point deductions). For details, please check the competition page.
Using the SLAM algorithm you implemented previously, you can now construct a map of an environment with the MBot. In this checkpoint, you will add path planning and autonomous exploration capabilities.
Contents
- Contents
- Task 3.1 Path Planning
- Task 3.2 Map Exploration
- Task 3.3 Localization with Estimated and Unknown Starting Position
- Checkpoint Submission
Task 3.1 Path Planning
Write an A* path planner. The A* skeleton is provided in the mbot_nav package.
TODO
- Pull the latest code from the mbot_ros_labs upstream to get mbot_nav.
- All work for this task is in the mbot_nav package.
  - Start with navigation_node.cpp and search for TODOs. All the actual code writing is in astar.cpp (see the sketch after this list).
  - You also need to complete obstacle_distance_grid.cpp and motion_controller_diff.cpp. The TODOs match the earlier tasks, so you can reuse your previous implementations. obstacle_distance_grid.cpp now includes a new getOccupancy function. Note: do not copy/paste the entire old file, only reuse the TODO parts.
  - You don't need to follow the TODOs strictly; feel free to implement them in your own preferred way.
- When finished, compile your code:
  cd ~/mbot_ros_labs
  colcon build --packages-select mbot_nav
  source install/setup.bash
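For orientation, here is a minimal, illustrative A* sketch over an occupancy grid. It assumes a width x height grid indexed row-major and a traversable(x, y) callback that wraps your obstacle distance grid (for example, "distance to nearest obstacle > robot radius"). The names and interfaces are placeholders, not the actual mbot_nav skeleton; adapt the idea to the provided astar.cpp structure.

// Minimal grid A* sketch (illustrative only; isTraversable-style checks are
// placeholders for whatever the mbot_nav skeleton actually provides).
#include <algorithm>
#include <cmath>
#include <functional>
#include <queue>
#include <unordered_map>
#include <vector>

struct Cell { int x, y; };

struct Node {
    Cell cell;
    double g;   // cost from start
    double f;   // g + heuristic
};

struct NodeCompare {
    bool operator()(const Node& a, const Node& b) const { return a.f > b.f; }
};

static double heuristic(const Cell& a, const Cell& b) {
    // Euclidean distance is admissible for an 8-connected grid.
    return std::hypot(a.x - b.x, a.y - b.y);
}

std::vector<Cell> planPath(const Cell& start, const Cell& goal,
                           int width, int height,
                           const std::function<bool(int, int)>& traversable) {
    auto index = [width](const Cell& c) { return c.y * width + c.x; };

    std::priority_queue<Node, std::vector<Node>, NodeCompare> open;
    std::unordered_map<int, double> bestG;   // lowest g seen per cell
    std::unordered_map<int, int> parent;     // cell index -> parent index

    open.push({start, 0.0, heuristic(start, goal)});
    bestG[index(start)] = 0.0;

    const int dx[8] = {1, -1, 0, 0, 1, 1, -1, -1};
    const int dy[8] = {0, 0, 1, -1, 1, -1, 1, -1};

    while (!open.empty()) {
        Node current = open.top();
        open.pop();
        // Skip stale queue entries that have already been improved.
        if (current.g > bestG[index(current.cell)]) continue;
        if (current.cell.x == goal.x && current.cell.y == goal.y) {
            // Reconstruct the path by walking parents back to the start.
            std::vector<Cell> path;
            int idx = index(goal);
            path.push_back(goal);
            while (parent.count(idx)) {
                idx = parent[idx];
                path.push_back({idx % width, idx / width});
            }
            std::reverse(path.begin(), path.end());
            return path;
        }
        for (int k = 0; k < 8; ++k) {
            Cell next{current.cell.x + dx[k], current.cell.y + dy[k]};
            if (next.x < 0 || next.y < 0 || next.x >= width || next.y >= height) continue;
            if (!traversable(next.x, next.y)) continue;
            double stepCost = (k < 4) ? 1.0 : std::sqrt(2.0);
            double g = current.g + stepCost;
            int idx = index(next);
            if (!bestG.count(idx) || g < bestG[idx]) {
                bestG[idx] = g;
                parent[idx] = index(current.cell);
                open.push({next, g, g + heuristic(next, goal)});
            }
        }
    }
    return {};  // no path found
}

Keeping the heuristic admissible preserves optimality; a common refinement is to add a small extra cost to cells that are close to obstacles so the planned path stays away from walls.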
How to test?
- Unit test: This test simply checks whether the code can find a valid path.
  ros2 run mbot_nav astar_test
- Testing mode: In this mode, the navigation node listens to /initialpose and /goal_pose. Setting both in RViz triggers the A* planner, and the planned path appears in RViz if planning succeeds (a sketch of this wiring follows this list).
  - Run the launch file to publish the map and start the navigation node in the VSCode Terminal:
    ros2 launch mbot_nav path_planning.launch.py map_name:=maze1
  - Open RViz to set the initial pose and goal pose in the NoMachine Terminal:
    cd ~/mbot_ros_labs/src/mbot_nav/rviz
    ros2 run rviz2 rviz2 -d path_planning.rviz
- Real-world mode (with localization): After validating your planner in the previous tests, run it in the real maze.
  - Construct a map and save it in mbot_ros_labs/src/mbot_nav/maps. Then compile the mbot_nav package:
    cd ~/mbot_ros_labs
    colcon build --packages-select mbot_nav
    source install/setup.bash
  - Run the launch file to publish the map and start the navigation node in VSCode Terminal #1:
    ros2 launch mbot_nav path_planning.launch.py map_name:=your_map pose_source:=tf
  - Run the localization node in VSCode Terminal #2:
    ros2 run mbot_localization localization_node
    - Notice: In localization_node.cpp, set publish_map_odom_ to true for real-world operation, then recompile mbot_localization.
  - Start RViz and set the initial pose in NoMachine Terminal #1; the localization node needs it to initialize its particles.
    cd ~/mbot_ros_labs/src/mbot_nav/rviz
    ros2 run rviz2 rviz2 -d path_planning.rviz
  - Run the motion controller in VSCode Terminal #3:
    ros2 run mbot_nav motion_controller_diff
  - Then set the goal pose in RViz.
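For reference, this is a minimal sketch of how a node can react to the two RViz topics used in testing mode. The message types are the RViz defaults (geometry_msgs/PoseWithCovarianceStamped on /initialpose, geometry_msgs/PoseStamped on /goal_pose); the node name, the /planned_path topic, and the wiring are illustrative assumptions rather than the actual navigation_node.cpp contents.

// Illustrative node wiring: store the initial pose, plan when a goal arrives.
#include <memory>
#include <rclcpp/rclcpp.hpp>
#include <geometry_msgs/msg/pose_stamped.hpp>
#include <geometry_msgs/msg/pose_with_covariance_stamped.hpp>
#include <nav_msgs/msg/path.hpp>

class PlannerTestNode : public rclcpp::Node {
public:
    PlannerTestNode() : Node("planner_test_node") {
        initial_sub_ = create_subscription<geometry_msgs::msg::PoseWithCovarianceStamped>(
            "/initialpose", 10,
            [this](geometry_msgs::msg::PoseWithCovarianceStamped::SharedPtr msg) {
                start_ = msg->pose.pose;
                have_start_ = true;
            });
        goal_sub_ = create_subscription<geometry_msgs::msg::PoseStamped>(
            "/goal_pose", 10,
            [this](geometry_msgs::msg::PoseStamped::SharedPtr msg) {
                if (!have_start_) return;  // need an initial pose first
                // The real node would convert both poses to grid cells here,
                // call the A* planner, and publish the resulting nav_msgs/Path.
                (void)msg;
                RCLCPP_INFO(get_logger(), "Planning from initial pose to goal...");
            });
        path_pub_ = create_publisher<nav_msgs::msg::Path>("/planned_path", 10);
    }

private:
    geometry_msgs::msg::Pose start_;
    bool have_start_{false};
    rclcpp::Subscription<geometry_msgs::msg::PoseWithCovarianceStamped>::SharedPtr initial_sub_;
    rclcpp::Subscription<geometry_msgs::msg::PoseStamped>::SharedPtr goal_sub_;
    rclcpp::Publisher<nav_msgs::msg::Path>::SharedPtr path_pub_;
};

int main(int argc, char** argv) {
    rclcpp::init(argc, argv);
    rclcpp::spin(std::make_shared<PlannerTestNode>());
    rclcpp::shutdown();
    return 0;
}

In the provided skeleton the same idea applies: the goal callback is where start and goal get converted to grid coordinates and handed to A*, and the published path is what RViz visualizes.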
You may also test using rosbag playback. This is useful for A* debugging but does not reflect real-world localization performance. Instructions for rosbag testing are shown in the video demo.
ros2 run mbot_nav navigation_node --ros-args -p pose_source:=tf
cd ~/mbot_ros_labs/src/mbot_rosbags/maze1
ros2 bag play maze1.mcap
Video Demo
Provide a figure showing the planned path in the map.
Task 3.2 Map Exploration
Until now, the MBot has only moved using teleop commands or manually set goal poses. For this task, you will implement a frontier-based exploration algorithm that allows the MBot to autonomously select targets and explore the full environment.
This task is useful for competition but not required for Checkpoint 3 submission.
TODO
- All work is in mbot_nav.
  - Start with exploration_node.cpp and search for TODOs. All the actual code writing is in frontier_explorer.cpp.
  - You don't need to follow the TODOs strictly; feel free to implement them in your own preferred way.
- When finished, compile your code:
  cd ~/mbot_ros_labs
  colcon build --packages-select mbot_nav
  source install/setup.bash
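As a starting point for the frontier strategy, here is an illustrative detector that marks free cells bordering unknown space, assuming nav_msgs/OccupancyGrid conventions (-1 unknown, 0 free, 100 occupied). The function name and grid representation are assumptions, not the frontier_explorer.cpp interface.

// Illustrative frontier detection over a row-major occupancy grid.
#include <cstdint>
#include <vector>

struct GridCell { int x, y; };

// A frontier cell is a known-free cell with at least one unknown 4-neighbor.
std::vector<GridCell> findFrontierCells(const std::vector<int8_t>& grid,
                                        int width, int height) {
    auto at = [&](int x, int y) { return grid[y * width + x]; };
    std::vector<GridCell> frontiers;
    for (int y = 1; y < height - 1; ++y) {
        for (int x = 1; x < width - 1; ++x) {
            if (at(x, y) != 0) continue;  // only free cells can be frontiers
            bool touchesUnknown =
                at(x + 1, y) == -1 || at(x - 1, y) == -1 ||
                at(x, y + 1) == -1 || at(x, y - 1) == -1;
            if (touchesUnknown) frontiers.push_back({x, y});
        }
    }
    return frontiers;
}

From here, a common approach is to group adjacent frontier cells into regions, pick a reachable target such as the nearest region centroid (ranked by A* path length rather than straight-line distance), and stop exploring when no frontiers remain.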
How to test?
- Start RViz in NoMachine Terminal #1:
  cd ~/mbot_ros_labs/src/mbot_nav/rviz
  ros2 run rviz2 rviz2 -d path_planning.rviz
- Run SLAM in VSCode Terminal #1:
  ros2 run mbot_slam slam_node
- Run the motion controller in VSCode Terminal #2:
  ros2 run mbot_nav motion_controller_diff
- Run the exploration node in VSCode Terminal #3:
  ros2 run mbot_nav exploration_node
You may also test with rosbag playback. This is useful for algorithm debugging but does not represent true performance with real motion control. Instructions for rosbag testing are shown in the video demo.
# first
ros2 run mbot_nav exploration_node
# then
ros2 run mbot_slam slam_node
cd ~/mbot_ros_labs/src/mbot_rosbags
ros2 bag play slam_test
Video Demo
Explain the strategy used for finding frontiers and any other details about your implementation that you found important for making your algorithm work.
Task 3.3 Localization with Estimated and Unknown Starting Position
For advanced competition levels, the MBot must localize itself in a known map without knowing its initial pose. This requires initializing your particles with some distribution over the open space of the map and converging on a pose. This is useful for the competition but does not require any submission for Checkpoint 3.
For details, please check Competition Event 2, Level 3.
Explain the methods used for initial localization.
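One common way to handle an unknown starting pose is global initialization: sample particles uniformly over the free cells of the known map with random headings, then let sensor updates and resampling collapse them onto the true pose. The sketch below is illustrative only; the Particle struct, grid values, and function signature are assumptions, not the mbot_localization types.

// Illustrative global particle initialization over the free space of a map.
#include <cmath>
#include <cstdint>
#include <random>
#include <vector>

struct Particle { double x, y, theta, weight; };

std::vector<Particle> initializeGlobalParticles(const std::vector<int8_t>& grid,
                                                int width, int height,
                                                double resolution,   // meters per cell
                                                double originX, double originY,
                                                int numParticles) {
    // Collect all free cells (assumed to be value 0 in the occupancy grid).
    std::vector<int> freeCells;
    for (int i = 0; i < width * height; ++i) {
        if (grid[i] == 0) freeCells.push_back(i);
    }
    if (freeCells.empty() || numParticles <= 0) return {};

    std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<size_t> pickCell(0, freeCells.size() - 1);
    std::uniform_real_distribution<double> jitter(0.0, resolution);
    std::uniform_real_distribution<double> angle(-M_PI, M_PI);

    std::vector<Particle> particles;
    particles.reserve(numParticles);
    for (int i = 0; i < numParticles; ++i) {
        int idx = freeCells[pickCell(rng)];
        // Convert the cell index to world coordinates, with jitter inside the cell.
        double px = originX + (idx % width) * resolution + jitter(rng);
        double py = originY + (idx / width) * resolution + jitter(rng);
        particles.push_back({px, py, angle(rng), 1.0 / numParticles});
    }
    return particles;
}

If the filter converges to the wrong mode, injecting a small fraction of fresh random particles at each resampling step, or tracking several hypothesis clusters until one clearly dominates, can help.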
Checkpoint Submission
Demonstrate your path planner (Task 3.1) by showing your robot navigating a maze.
- Submit a video of your robot autonomously navigating in a maze environment.
- Your video should include the following:
  - Setting the goal pose, then the robot driving in the real-world lab maze.
  - Your visualization tool (RViz or Foxglove) displaying the map and planned path.