Search-and-Rescue Robot Dog: Autonomous Exploration and Visual SLAM with Unitree Go1

C++, ROS2/ROS, Python, SLAM, Computer Vision, Exploration, Legged Locomotion, Unitree, Zed, Jetson


Concept video of the final product.


Overview

The aim of this project is to enable the Unitree Go1 to autonomously explore an outdoor environment with uneven terrain in order to search for and locate a missing person. The motivation is to allow robots to look for people missing in a natural environment (example), where gathering a search party ad hoc is difficult. This leverages the unique advantages that legged robots and visual SLAM offer in unmapped environments. The dog (a Unitree Go1) is equipped with a ZED 2i camera and uses its built-in Visual-Inertial Odometry (VIO) for Simultaneous Localization and Mapping (SLAM), object detection for detecting human beings, a frontier exploration algorithm for searching an environment, and the Nav Stack for traveling to goal poses. Much of the project has centered around integrating many different sensors and computers using ROS2 Humble on an Nvidia Jetson Orin Nano.

Given below is a road map for completing the project. Currently, I’m at a point where the exploration algorithm is only suitable for dense environments, and hence it has not been deployed in large, sparse outdoor environments:




System design

The system consists of a Unitree Go1, a ZED 2i camera, and an Nvidia Jetson Orin Nano, and uses ROS2 Humble. The Unitree and the Jetson are connected by an Ethernet cable, while the ZED is connected to the Jetson through USB. All of these components are fixed to the Unitree using a 3-D printed mount designed by David Dorf.


Fig. 1. Assembled hardware. A buck converter (24V->12V) from the battery port (XT30) to the Jetson is also required.


All ROS2 nodes run on the Jetson and can be activated and visualized from an external computer over WiFi through ssh -Y, using the same ROS2 distro (Humble in my case). However, wireless communication with ROS2 can be quite unreliable, and a wired connection is recommended whenever possible, especially for testing.


Fig. 2. A block diagram of all the active elements in the system.


Unitree Go1

The Unitree Go1 serves as a robust, holonomic mobile base that can easily be deployed on moderately uneven terrain. The Unitree ROS2 Wrapper by Katie Hughes has been utilized for high-level control. After connecting your computer to the Go1 via Ethernet, communication needs to be initialized each time using:

user@jetson:~$ ifconfig # lists your network interfaces; find the Ethernet one (enpxxx)
user@jetson:~$ sudo ifconfig enpxxx down
user@jetson:~$ sudo ifconfig enpxxx 192.168.123.162/24 # a static IP on the Go1's subnet
user@jetson:~$ sudo ifconfig enpxxx up
user@jetson:~$ ping 192.168.123.161 # the Go1's onboard computer

I would recommend turning these commands into a shell alias and running it each time the Unitree and the Jetson are switched on.


High Level Control on the Unitree Go1.


Jetson Orin Nano

The Jetson development boards by Nvidia are compact, reliable, and powerful Linux computers that can easily serve as the brain of various robotic systems. The Jetson Orin Nano has been flashed with Jetpack 5.1.2 using a micro SD card, and runs Ubuntu 20.04 and ROS2 Humble. ROS2 can be installed by following this Isaac ROS tutorial. Shail Dalal and I tried multiple ways to flash Jetpack 6.0 onto the Jetson Orin Nano (with Ubuntu 22.04 and ROS2 Iron) but were unsuccessful, so we opted for Jetpack 5.1.2 instead. Setting up a Jetson from scratch and installing packages on it can quickly become a tiring process owing to the lack of software support and documentation from Nvidia, so this should be anticipated beforehand. Once set up, however, the Jetson should operate quite reliably. (meme)


ZED 2i camera

The ZED series of cameras by Stereolabs are highly accurate stereo cameras with built-in Visual-Inertial Odometry, 3-D mapping, and object (including human) detection, among other features. Additionally, the cameras have a very well documented ROS2 wrapper, making rapid deployment of these features possible.
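As an illustration, the wrapper can be brought up from a custom ROS2 launch file. Below is a minimal sketch; it assumes the zed_wrapper package's zed_camera.launch.py entry point and its camera_model argument, which exist in recent versions of the wrapper but may differ in yours.

import os

from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource


def generate_launch_description():
    # Locate the installed zed_wrapper package.
    zed_wrapper_dir = get_package_share_directory('zed_wrapper')

    # Include the wrapper's own launch file for a ZED 2i. The features
    # described below (VIO, 3-D mapping, object detection) are toggled
    # by editing the wrapper's config/common.yaml before launching.
    zed_launch = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(
            os.path.join(zed_wrapper_dir, 'launch', 'zed_camera.launch.py')),
        launch_arguments={'camera_model': 'zed2i'}.items(),
    )

    return LaunchDescription([zed_launch])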


Visual SLAM

The ZED camera’s superior localization and mapping capabilities provide Visual SLAM straight out of the box. Multiple parameters and flags in the wrapper’s config/common.yaml file can be adjusted to get many customized behaviours:


Visual-Inertial Odometry

This feature is highly accurate and does not accumulate noticeable drift. However, since the camera’s depth perception does not work in the 0-40 cm range, objects passing in and out of this range can seriously disorient the camera’s perception. The odometry is also robust to the vibrations caused by the Unitree’s walking.


On the left is Stereolabs' demonstration, and on the right is my test of the ZED camera's positional tracking.


3-D Map Building

The 3-D map building tool is also very powerful and does not accumulate noticeable drift over large distances. However, it can register geometry at false locations because of virtual objects appearing in reflective surfaces. It is also not robust to dynamic obstacles, which is addressed in the map filter.

Here’s a video of the 3-D map building integrated with the 2-D filtered map, while manually operating the Unitree:


Human Detection

Human detection is part of the object detection feature, which can even track the motion of crowds of people. This is used to trigger a behaviour change upon encountering a human during the search and rescue operation.
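As a rough sketch of how such a trigger can be written as a ROS2 node: this assumes the wrapper's zed_interfaces/msg/ObjectsStamped message and its default object-detection topic, both of which vary between wrapper versions, so treat the names as placeholders.

import rclpy
from rclpy.node import Node
from zed_interfaces.msg import ObjectsStamped  # 'zed_msgs' in newer wrappers


class PersonTrigger(Node):
    def __init__(self):
        super().__init__('person_trigger')
        self.sub = self.create_subscription(
            ObjectsStamped, '/zed/zed_node/obj_det/objects',
            self.objects_cb, 10)

    def objects_cb(self, msg):
        for obj in msg.objects:
            # The wrapper labels detected humans as 'Person'; confidence
            # is reported on a 0-99 scale.
            if obj.label == 'Person' and obj.confidence > 50.0:
                self.get_logger().info(
                    f'Person detected at {obj.position}; switching behaviour')
                # e.g. cancel the exploration goal and approach the person


def main():
    rclpy.init()
    rclpy.spin(PersonTrigger())


if __name__ == '__main__':
    main()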


2-D Map filtering

The 3-D map needs to be projected onto a 2-D grid to be utilized by Nav2. While RTAB-Map is the most popular tool for projecting 3-D point clouds from LIDAR scans onto 2-D occupancy grids, the unique nuances associated with the ZED camera’s 3-D mapping (reflective surfaces, dynamic obstacles, etc.) warrant a fair amount of post-processing, making it more sensible to implement a custom node for this. For example, a human walking in front of the camera can register enough outliers to create the illusion of various pseudo-obstacles, which can seriously bias the Nav2 costmap.


Fig. 3. Example of how dynamic obstacles can result in outliers in an otherwise well formed 3-D map.


The algorithm I have implemented for this is:

  1. Iterate over the point cloud and increment the cells of a 2-D occupancy grid for each point lying within a range of height values above the floor.

    1. If the number of points in a grid cell is above a certain threshold, classify it as an obstacle.
    2. Otherwise, if there are enough points close to and below the floor, or enough points close to the ceiling, classify it as a free cell.
    3. Otherwise, classify it as an unexplored cell.
  2. Erode the map a few times in order to get rid of outliers. False obstacles are more harmful to the costmap than false free spaces.

  3. Pad the boundaries of the map with obstacles for safety.

  4. Clear the initial heading of the Unitree, assuming that it will be deployed facing no immediate obstacles.

This algorithm is robust to smaller dynamic obstacles but not to large ones. A code sketch of the projection and filtering steps is given below.
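Here is a minimal numpy/scipy sketch of steps 1-3; the grid dimensions, height band, and thresholds are illustrative values, not the ones used on the robot, and step 4's heading clearing is omitted.

import numpy as np
from scipy import ndimage

RESOLUTION = 0.05     # m per grid cell (illustrative)
OBSTACLE_THRESH = 10  # points needed to mark a cell as an obstacle
FREE_THRESH = 5       # floor/ceiling points needed to mark a cell as free


def filter_cloud(points, shape=(200, 200), origin=(-5.0, -5.0)):
    """points: (N, 3) array of (x, y, z) in the map frame, z up."""
    # Step 1: count points per cell, split by height band.
    ix = ((points[:, 0] - origin[0]) / RESOLUTION).astype(int)
    iy = ((points[:, 1] - origin[1]) / RESOLUTION).astype(int)
    ok = (ix >= 0) & (ix < shape[0]) & (iy >= 0) & (iy < shape[1])
    ix, iy, z = ix[ok], iy[ok], points[ok, 2]

    in_band = (z > 0.1) & (z < 1.5)  # heights that count as obstacles
    obstacle_hits = np.zeros(shape, int)
    free_hits = np.zeros(shape, int)
    np.add.at(obstacle_hits, (ix[in_band], iy[in_band]), 1)
    np.add.at(free_hits, (ix[~in_band], iy[~in_band]), 1)  # floor/ceiling

    grid = np.full(shape, -1, np.int8)            # -1: unexplored
    grid[free_hits >= FREE_THRESH] = 0            # 0: free
    grid[obstacle_hits >= OBSTACLE_THRESH] = 100  # 100: obstacle (wins ties)

    # Step 2: erode obstacles to remove outliers; a false free cell is
    # less harmful to the costmap than a false obstacle.
    kept = ndimage.binary_erosion(grid == 100, iterations=2)
    grid[(grid == 100) & ~kept] = 0

    # Step 3: pad the map boundaries with obstacles for safety.
    grid[0, :] = grid[-1, :] = grid[:, 0] = grid[:, -1] = 100
    return grid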


Fig. 4. A processed 2-D map suitable for Nav2.


I have used the unitree_nav package by Nick Morales, Marno Nel, and Katie Hughes. It hosts a convenient service that allows setting and updating Nav2 goal poses. Since this package is integrated with a RoboSense LIDAR, custom launch files have been made to integrate the ZED camera instead.
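For reference, here is a minimal sketch of updating a goal pose through Nav2's standard NavigateToPose action, which is what such a service ultimately drives (see unitree_nav itself for the exact service interface):

import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node
from nav2_msgs.action import NavigateToPose


class GoalSender(Node):
    def __init__(self):
        super().__init__('goal_sender')
        self.client = ActionClient(self, NavigateToPose, 'navigate_to_pose')

    def send_goal(self, x, y):
        # Build a goal pose in the map frame; the identity quaternion
        # keeps the heading aligned with the map's +x axis.
        goal = NavigateToPose.Goal()
        goal.pose.header.frame_id = 'map'
        goal.pose.pose.position.x = x
        goal.pose.pose.position.y = y
        goal.pose.pose.orientation.w = 1.0
        self.client.wait_for_server()
        return self.client.send_goal_async(goal)


def main():
    rclpy.init()
    node = GoalSender()
    node.send_goal(2.0, 1.0)
    rclpy.spin(node)


if __name__ == '__main__':
    main()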


Unitree Go1 fully integrated with the Nav Stack.


Frontier Exploration

There are certain salient considerations that were kept in mind while writing the frontier exploration algorithm:

  • The ZED camera’s depth perception does not work for distances smaller than 30 cm. The position of the base_footprint must be adjusted so that the ZED camera does not get too close to an obstacle while traveling to or arriving at a frontier.
  • A stereo camera has a limited viewing angle. When initialized in a new environment, any frontiers behind the camera will immediately be the closest ones to travel to, which might not lead to the desired behaviour.
  • The Unitree is a holonomic mobile base and therefore has a decoupled view direction and heading. Used correctly, this can be a major advantage in efficiently exploring an area.
  • The map may have small enclaves of unexplored regions, which can act as outliers for the exploration algorithm.

My frontier exploration algorithm works as follows (a code sketch is given after the list):

  1. Dilate the free space of the map to cover any enclaves.
  2. Locate the position and direction of each frontier by evaluating the unexplored cells neighboring free cells.
  3. Update the goal pose by picking the closest frontier at every time step. This means that traveling exactly to a particular frontier is less of a priority than constantly moving and exploring.
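A minimal numpy/scipy sketch of these three steps, using the same cell conventions as the map filter above (-1 unexplored, 0 free, 100 obstacle); the dilation amount and distance metric are illustrative:

import numpy as np
from scipy import ndimage


def find_frontiers(grid):
    """Return (N, 2) indices of free cells bordering unexplored space."""
    # Step 1: dilate free space so small unexplored enclaves are swallowed.
    free = ndimage.binary_dilation(grid == 0, iterations=2) & (grid != 100)
    unknown = (grid == -1) & ~free
    # Step 2: a frontier is a free cell with an unexplored neighbour.
    # (Estimating each frontier's direction is omitted here.)
    next_to_unknown = ndimage.binary_dilation(unknown)
    return np.argwhere(free & next_to_unknown)


def next_goal(grid, robot_cell):
    # Step 3: re-pick the closest frontier every time step, so constant
    # motion is prioritized over reaching any one particular frontier.
    frontiers = find_frontiers(grid)
    if len(frontiers) == 0:
        return None  # nothing left to explore
    dists = np.linalg.norm(frontiers - np.asarray(robot_cell), axis=1)
    return frontiers[np.argmin(dists)]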


Fig. 5. Frontiers are identified and the closest frontier is chosen as the goal pose.


Deploying the Algorithm Outdoors


Frontier exploration successfully working in a tight space.


While the exploration worked in a smaller indoor environment, it wasn’t suitable for deployment in a sparse outdoor environment. This is because the current exploration algorithm picks the closest reachable frontier, which is more explorative and less exploitative, akin to a breadth-first search. This leads to the robot spending most of its time exploring its immediate surroundings rather than committing to a path and covering larger ground. On the other hand, a highly exploitative algorithm akin to a depth-first search, biased toward covering more ground in less time, is not guaranteed to explore the space thoroughly, which might leave swathes of unexplored regions.


Acknowledgments

Thanks Stella, for starring in my video.

Thanks Srikanth, for lending me your Jackal when the dog broke.