Inspiration

A popular belief holds that necessity is the mother of invention; we feel the father is probably laziness. Buddy was born from hearing about IBM Watson's speech processing capabilities and from our own laziness when it came to simple tasks (e.g., fetching and delivering items). We envisioned a personal assistant, a helping hand to aid us in our daily routines.

What it does

Buddy listens for instructions through a PS3 Eye's microphone array. When a user gives a fetch-and-deliver command such as "Buddy, give Greg a Pepsi", the robot looks around for Greg and hands him a can of Pepsi. This is a very limited demonstration of the robot: we didn't finish everything we set out to do, and with more time the robot would also have located a requested object first and then delivered it to a desired location.
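As a rough illustration, a fetch-and-deliver command like the one above can be reduced to a (recipient, object) pair with simple pattern matching. This is a hedged sketch, not our actual parser; the wake word and grammar here are assumptions:

```python
import re

# Assumed grammar for illustration only: "buddy, give <recipient> a <object>"
COMMAND_PATTERN = re.compile(
    r"^buddy,?\s+give\s+(?P<recipient>\w+)\s+an?\s+(?P<object>[\w ]+)$",
    re.IGNORECASE,
)

def parse_command(transcript: str):
    """Extract (recipient, object) from a fetch-and-deliver command."""
    match = COMMAND_PATTERN.match(transcript.strip())
    if match is None:
        return None  # not a command Buddy understands
    return match.group("recipient"), match.group("object")

print(parse_command("buddy, give Greg a pepsi"))  # ('Greg', 'pepsi')
```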

How I built it

The robot platform itself was built before the hackathon. We used the hackathon to build a Watson speech processing interface, a method of object recognition, and inverse kinematics for the arm so it could grab a detected object. Unfortunately, the neural network approach we pursued for object detection did not lead to much progress, so we fell back on fiducial markers called Chilitags, which we used to identify and approach humans. Although we derived a kinematic model for the robot arm, we didn't have time to test and finish the implementation, so we decided to start the robot off already holding a can of soda.
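For reference, the kind of kinematic model involved can be illustrated with the textbook two-link planar case: the joint angles that place the gripper at a target point (x, y) follow from the law of cosines. The sketch below uses placeholder link lengths and is a generic solver, not Buddy's actual arm geometry:

```python
import math

def two_link_ik(x, y, l1=0.3, l2=0.25):
    """Inverse kinematics for a two-link planar arm.

    Link lengths l1, l2 (meters) are placeholders, not Buddy's real
    dimensions. Returns (shoulder, elbow) angles in radians for one of
    the two mirror solutions, or None if the target is unreachable.
    """
    r2 = x * x + y * y
    # Law of cosines gives the elbow angle directly.
    cos_elbow = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(cos_elbow) > 1:
        return None  # target outside the arm's workspace
    elbow = math.acos(cos_elbow)
    # Shoulder angle: aim at the target, then correct for the bent elbow.
    shoulder = math.atan2(y, x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow)
    )
    return shoulder, elbow

print(two_link_ik(0.4, 0.2))  # one valid joint configuration
```

The mirrored ("elbow-flipped") solution comes from negating the elbow angle and recomputing the shoulder angle.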

Challenges I ran into

Caffe, the deep learning framework we hoped would let us classify objects, resisted every attempt at training. Time was another big challenge: we ran out of it before we could work on grabbing objects or programming a more complex AI.

Accomplishments that I'm proud of

We implemented speech processing on a Raspberry Pi and relayed those commands to a robot. We also learned how to use Caffe to classify objects.
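For anyone curious, the speech side looks roughly like this with IBM's current Python SDK (ibm_watson); our hackathon code used an older Watson SDK, so treat this as a hedged sketch with placeholder credentials and file names:

```python
from ibm_watson import SpeechToTextV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholder credentials; substitute values from your own IBM Cloud instance.
authenticator = IAMAuthenticator("YOUR_API_KEY")
stt = SpeechToTextV1(authenticator=authenticator)
stt.set_service_url("https://api.us-south.speech-to-text.watson.cloud.ibm.com")

# A short WAV clip captured from the PS3 Eye's microphone array.
with open("command.wav", "rb") as audio:
    result = stt.recognize(audio=audio, content_type="audio/wav").get_result()

transcript = result["results"][0]["alternatives"][0]["transcript"]
print(transcript)  # e.g. "buddy give Greg a pepsi"
```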

What I learned

Time management and parallel execution of work were the key takeaways from this project. We found it beneficial to split into subteams with concurrent tasks: as we experienced firsthand, if one team's part doesn't work out, the contributions from the other parts can still add up to a finished project.

What's next for Buddy

Buddy will soon have a complete kinematic model as well as a more robust method of recognizing objects. Next, we will incorporate a more complex AI architecture that can localize and plan more intricate paths. Ultimately, Buddy will be a full personal assistant system as we expand its speech recognition capabilities with NLP.
