Inspiration

The first missions to Mars will likely be composed of small teams of astronauts supported by many robots, to reduce the risk of loss of human life. To lead robots effectively on these missions, the humans must be able to communicate with them naturally. We thought the most intuitive way is through natural language and gestures, so we decided to implement a framework for communicating with robots through those mediums.

What it does

Through hand gestures and vocal commands, the robot can be controlled wirelessly. The demo robot understands the following hand gestures (a sketch of the recognition logic follows the list):

  • Left-hand palm facing forward -> turn left
  • Right-hand palm facing forward -> turn right
  • Hand forming a fist -> move forward
  • Palm facing user -> move backwards
  • 1 finger up -> 1 LED lights up
  • 2 fingers up -> 2 LEDs light up
  • 3 fingers up -> 3 LEDs light up
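
As a rough illustration, the gesture recognition boils down to a few checks on each Leap Motion frame. The sketch below is a simplified version of that logic; it assumes the Leap Motion v2 Python bindings (the `Leap` module), and the thresholds and the `classify_gesture` helper are illustrative rather than our exact code:

```python
# Simplified gesture classifier, assuming the Leap Motion v2 Python
# bindings (`Leap` module); thresholds are illustrative.
import Leap

def classify_gesture(frame):
    """Map one Leap Motion frame to a robot command string, or None."""
    if frame.hands.is_empty:
        return None
    hand = frame.hands[0]
    extended = sum(1 for finger in hand.fingers if finger.is_extended)

    # Hand forming a fist -> move forward.
    if hand.grab_strength > 0.9:
        return "forward"
    # Open palm: the palm normal's z sign says which way it faces
    # (+z points back toward the user in Leap's coordinate system).
    if extended >= 4:
        if hand.palm_normal.z > 0.5:                 # palm facing the user
            return "backwards"
        return "left" if hand.is_left else "right"   # palm facing forward
    # One to three extended fingers -> light up that many LEDs.
    if 1 <= extended <= 3:
        return str(extended)
    return None

# Usage: poll the controller and queue whatever it recognizes, e.g.
# command = classify_gesture(Leap.Controller().frame())
```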

It also understands the following vocal commands (a sketch of the keyword mapping follows the list):

  • "Left" -> turn left
  • "Right" -> turn right
  • "Forward" -> move forward
  • "Backwards" -> move backwards
  • "One" -> 1 LED lights up
  • "Two" -> 2 LEDs light up
  • "Three" -> 3 LEDs light up
  • "Up" -> robot arm ascends
  • "Down" -> robot arm descends

How we built it

We used a Leap Motion sensor to recognize hand gestures and the Google Cloud speech-to-text API to parse voice commands in real time, each in its own Python script. Both scripts write their commands to a .txt file that is read by a third Python script whose role is to send those commands to the Arduino via Wi-Fi. The Arduino sets up a local server, the computer running the Python scripts sends the commands to that server, and the Arduino executes them. The robot is built around an Arduino Uno board and communicates with the computer via Wi-Fi using the ESP8266 Serial WiFi Module. The Arduino drives a servo that can rotate 180 degrees to raise or lower the robotic arm, and two DC motors connected to two wheels to move around.
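
The third script is essentially a file-tailing TCP client. Here is a minimal sketch of it, assuming the ESP8266 listens as a TCP server; the IP address, port, escape byte, and `commands.txt` filename are illustrative assumptions:

```python
# Relay: tail the shared command file and forward each new line to the
# TCP server the ESP8266 sets up. The address, port, and escape byte
# are assumptions for illustration.
import socket
import time

ROBOT_ADDR = ("192.168.4.1", 333)   # assumed ESP8266 server address
ESCAPE = b"\x1b"                    # marks where the real command starts

def follow(path):
    """Yield lines appended to `path`, like `tail -f` (file must exist)."""
    with open(path, "r") as f:
        f.seek(0, 2)                # start at the end of the file
        while True:
            line = f.readline()
            if line:
                yield line.strip()
            else:
                time.sleep(0.05)

def relay(path="commands.txt"):
    sock = socket.create_connection(ROBOT_ADDR)
    for command in follow(path):
        if command:
            # Prefix the escape byte so the Arduino can find the command
            # among the Wi-Fi module's header bytes (see the next section).
            sock.sendall(ESCAPE + command.encode("ascii") + b"\n")

if __name__ == "__main__":
    relay()
```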

Challenges we ran into

It was difficult to control the robot wirelessly, because the commands arrive at the Arduino byte by byte, wrapped in the Wi-Fi module's header information, rather than as a clean string. To skip the header, we use an escape byte that, when read, tells the Arduino that the bytes that follow are the command to execute on the robot.
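
In outline, the receiver discards bytes until it sees the escape byte, then collects everything up to the terminator. The sketch below shows that scan in Python for clarity; on the robot, the equivalent loop lives in the Arduino sketch in C++, reading the ESP8266's serial output one byte at a time, and the `+IPD,<len>:` header in the example is the AT firmware's usual framing:

```python
# Receiver-side framing logic, shown in Python for clarity (the real
# version is the C++ loop in the Arduino sketch).
ESCAPE = 0x1B            # same escape byte the relay script prepends
NEWLINE = ord("\n")

def extract_command(byte_stream):
    """Skip header bytes until ESCAPE, then return the command bytes."""
    for b in byte_stream:
        if b == ESCAPE:
            break
    else:
        return None      # no escape byte seen: nothing but header noise
    command = bytearray()
    for b in byte_stream:
        if b == NEWLINE:
            break
        command.append(b)
    return bytes(command)

# The Wi-Fi module wraps the payload in header text before our bytes:
stream = iter(b"+IPD,6:\x1bleft\n")
print(extract_command(stream))   # b'left'
```

Because the relay script prepends the same byte, both ends agree on the framing without the Arduino ever having to parse the header itself.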

The Wi-Fi module requires a lot of power, and when the robot's motors start running there isn't enough left, so the module disconnects. The solution was to give it its own dedicated 3.3 V power source.

Accomplishments that we're proud of

We were able to implement a framework that is very intuitive to humans, since hand gestures and speech are the most natural ways people communicate. This proof-of-concept hack demonstrates that one day we could use robots to explore space without complex user interfaces: robots would follow astronauts on their missions and provide immediate assistance, as if the astronauts were speaking to another person.

What we learned

We learned how to transmit data from machine-learning-powered APIs to an Arduino via Wi-Fi.

What's next for Human Robot Symbiotic Framework for Space Exploration

This proof of concept only executed a simple routine after each gesture or vocal command. A space robot could execute much more advanced algorithms, such as screwing in a bolt when the astronaut says "screw that in" while pointing at the hole.
