Dexterous Manipulation through Virtual Reality

Python, ROS2/ROS, Computer Vision, Manipulation, Motion Planning, Gazebo, Shadow Hands, Emika Franka Robot Arm, Intel RealSense, ABB Gofa Arm



Overview

The goal of this project was to design a teleoperation system that leverages object models to make the user’s experience more intuitive. The idea is that a human operator stacks virtual rings in a simulated environment while receiving instantaneous haptic feedback, made possible by the local object models and low-latency simulation environment. The user’s impact on the world is then replicated by an avatar robot, i.e. the robot does the same thing with the rings that the human user did. While we did not end up bridging the operator station and avatar station, we developed several key packages that will contribute to this end goal. These packages are outlined below; more detailed information about how to run each package can be found in its README.



What Problem Does This Solve?

Typically, a human operator and robot avatar pair relies on live visual and tactile feedback: the operator adjusts their motion based on what they see and feel through the avatar. This kind of setup performs poorly over high-latency networks, which would be the case if the avatar were on a different continent, or perhaps even in space. Our approach tries to solve this problem by:

  1. Simulating a virtual environment for the human operator, and
  2. Building some intelligence into the robot avatar so it can adapt to the real-world environment.

Team Members

This project was a team effort, completed as the final project of the ME 495 Embedded Systems course at Northwestern. I worked on this project with:


Packages:

This project consists of the following packages:

Instructions on how to run/launch nodes in each package can be found within the READMEs of the individual packages.


teleop_tasks Package

The teleop_tasks package defines and launches virtual tasks, which can be completed by the user. We took two approaches to this:

  • Programming some crude physics into RViz
  • Using the physics engine that comes with Gazebo

RViz:

The video below shows an example of a user completing a ring stacking task. The user can feel the mass of the rings thanks to the Franka robots attached to their wrists. The HaptX gloves allow the user to feel the rings in their hands by activating the finger brakes and inflating the tactors when the user picks up a ring.


Gazebo:

The video below shows a hand moving around in Gazebo. This hand can interact with the objects in the Gazebo world, and the hand’s state is dictated by the hand of the human operator.


teleop_haptics Package

The teleop_haptics package can be used to provide haptic feedback to the user. There are two main forms of haptic feedback:

  • Kinesthetic Feedback: Provided by a Franka robot attached to the user’s wrist, the kinesthetic feedback allows a user to experience virtual masses when picking up objects.
  • Cutaneous Feedback: Provided by the HaptX gloves, the cutaneous feedback allows a user to feel objects on their fingertips.
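To give a sense of how the kinesthetic side might work, here is a minimal sketch of rendering a virtual mass as a wrench command for the wrist-mounted Franka. The function name, the damping term, and the specific gains are illustrative assumptions, not the project’s actual implementation; the real system presumably sends an equivalent wrench through the `SetWrench` service.

```python
GRAVITY = 9.81  # m/s^2

def virtual_mass_wrench(mass, velocity, damping=2.0):
    """Sketch: the wrench the wrist-mounted robot could render so the user
    feels a virtual mass of `mass` kg (hypothetical helper, not project code).

    A constant gravity term pulls the hand down, and a small damping term,
    proportional to the hand's velocity in m/s, makes the mass feel inert
    rather than weightless when moved.
    """
    fx = -damping * velocity[0]
    fy = -damping * velocity[1]
    fz = -mass * GRAVITY - damping * velocity[2]
    force = (fx, fy, fz)
    torque = (0.0, 0.0, 0.0)  # no rotational inertia rendered in this sketch
    return force, torque
```

With the hand at rest, a 0.5 kg ring produces a purely downward force of about 4.9 N, which is what makes the virtual rings feel like they have weight.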


teleop_sensing Package

The teleop_sensing package uses computer vision to calculate the locations of objects present in the avatar robot’s workspace.

A demo of the package’s ability to detect the positions of the rings is shown below:
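The core math behind this kind of detection can be sketched as follows: threshold the camera image on each ring’s color to get a binary mask, take the mask’s centroid in pixel coordinates, then back-project that pixel into a 3-D point using the depth value and the camera’s pinhole intrinsics (which the Intel RealSense provides). The function names here are illustrative assumptions; the actual package may use OpenCV and the RealSense SDK for these steps.

```python
import numpy as np

def ring_pixel_centroid(mask):
    """Centroid (u, v) of a binary color mask, or None if the mask is empty.
    `mask` is an HxW boolean array, e.g. from HSV thresholding on one
    ring's color."""
    vs, us = np.nonzero(mask)
    if len(us) == 0:
        return None
    return float(us.mean()), float(vs.mean())

def deproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth in metres into a 3-D point in
    the camera frame, using the pinhole model with intrinsics
    (fx, fy) focal lengths and (cx, cy) principal point."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return x, y, depth
```

A pixel at the principal point, for example, deprojects straight down the camera’s optical axis.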


teleop_visualization Package

The teleop_visualization package allows you to visualize the avatar’s workspace in RViz.

Real Life RViz


teleop_avatar Package

My primary contribution to the project was the teleop_avatar package.
The teleop_avatar package controls the avatar robot to pick up objects and move them. To pick up an object, you can simply call a service and specify the object’s id. There is also a service that picks up a ring and places it on the peg, again given just the object’s id.
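A pick-and-place service like this typically steps through a fixed sequence of Cartesian waypoints. The sketch below shows one plausible sequence; the poses, the hover offset, and the function names are illustrative assumptions, not the package’s actual code (which would drive the real arm through a motion planner).

```python
# Hypothetical sketch of the waypoint sequence a "pick ring, place on peg"
# service might step through. Poses are (x, y, z) positions in the robot's
# base frame; the hover offset is an assumed value.

HOVER = 0.1  # metres to hover above grasp/place points

def above(pose, dz=HOVER):
    x, y, z = pose
    return (x, y, z + dz)

def pick_and_place_waypoints(ring_pose, peg_pose):
    """Return the Cartesian waypoints for picking up a ring and stacking it
    on the peg: approach from above, descend, lift, carry, lower, retreat."""
    return [
        above(ring_pose),   # pre-grasp hover over the ring
        ring_pose,          # descend to grasp (gripper closes here)
        above(ring_pose),   # lift straight up
        above(peg_pose),    # carry over to hover above the peg
        peg_pose,           # lower the ring onto the peg (gripper opens)
        above(peg_pose),    # retreat upward
    ]
```

Stacking all the rings is then just a matter of calling this sequence once per ring id, with each ring’s pose supplied by the sensing pipeline.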

Here is an example of the teleop_avatar package picking up rings and stacking them on a peg:


teleop_interfaces Package

The teleop_interfaces package contains all the custom messages and services used by the teleoperation system.

Custom Messages:

  • FingerWrenches
  • ObjectState

Custom Services:

  • Grasp
  • ExecuteTrajectory
  • SetWrench


teleop Package

The teleop package is a work in progress, but it is meant to bridge the operator and avatar stations and orchestrate the entire teleoperation experience. It will launch all the nodes necessary for the user to complete the task in a simulated environment, as well as all the nodes needed to run the avatar station. When the user begins moving objects in the simulated world, the package will sense this and begin publishing the objects’ transforms to the avatar station. Once the user is done moving an object, the nodes within this package will prompt the avatar station to execute the same trajectory with the real-life object.
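Deciding that the user is “done moving” an object amounts to a settledness check on its recent transforms. A minimal sketch of such a check is below; the window size, tolerance, and function name are assumptions for illustration, not the package’s actual logic.

```python
import math

def object_settled(recent_positions, window=10, tol=0.005):
    """Sketch: return True once an object has stopped moving, i.e. the
    last `window` sampled positions all lie within `tol` metres of the
    most recent one. Positions are (x, y, z) tuples sampled at a fixed
    rate; window and tolerance are illustrative assumptions."""
    if len(recent_positions) < window:
        return False
    last = recent_positions[-1]
    return all(math.dist(p, last) <= tol for p in recent_positions[-window:])
```

Once this returns True, the orchestrating node could stop relaying transforms and instead prompt the avatar station to replay the recorded trajectory on the real object.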