What it does
^O^...grub (“O-Grub”) helps feed people with physical disabilities, using ROS motion planning and 3D face detection. It lets the user choose from a food menu with hands-free gestures: tilting the head left or right moves the food selection left or right, and nodding confirms the selection. The robot arm then scoops the selected food and delivers it to the user’s mouth. With facial expression recognition, based on the user’s reaction to the food, the machine also lets the user upvote or downvote a particular menu item, signaling the chef about the user’s taste preferences.
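The head-gesture menu logic described above can be sketched in Python. This is a minimal illustration, not the actual O-Grub code: the angle names, thresholds, and function names are all assumptions standing in for whatever the Kinect face-tracking output provides.

```python
# Hypothetical sketch: map head-pose angles (e.g. roll/pitch from a Kinect
# face-tracking frame) to menu actions. Thresholds are illustrative guesses.

ROLL_THRESHOLD_DEG = 15.0   # how far the head must tilt to move the selection
PITCH_THRESHOLD_DEG = 20.0  # how far the head must nod to confirm

def gesture_from_pose(roll_deg, pitch_deg):
    """Classify a head pose as a menu gesture, or None for a neutral pose."""
    if pitch_deg > PITCH_THRESHOLD_DEG:
        return "confirm"          # nod: lock in the current item
    if roll_deg < -ROLL_THRESHOLD_DEG:
        return "left"             # tilt left: move selection left
    if roll_deg > ROLL_THRESHOLD_DEG:
        return "right"            # tilt right: move selection right
    return None

def apply_gesture(menu, index, gesture):
    """Update the selected index; return (new_index, confirmed_item_or_None)."""
    if gesture == "left":
        return max(index - 1, 0), None
    if gesture == "right":
        return min(index + 1, len(menu) - 1), None
    if gesture == "confirm":
        return index, menu[index]
    return index, None
```

In a real system these functions would sit in a ROS node loop, consuming pose messages and publishing the confirmed item to the arm's motion planner.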
Challenges we ran into
- Struggles with ROS motion planning
- Finding the right components for our arm
- Building the robot arm structure and the “spoon” from raw materials such as sheet metal, penetrating lubricant, and wood
Accomplishments that we're proud of
- Software: used Microsoft Kinect and EmoPy to detect facial position and expression
- Hardware: finished the whole structure with 6 servos and developed the spoon’s structure, which helps scoop food properly
What's next for O-Grub
- To add multi-user support via facial recognition
- To build a full-scale physical version
- To change the spoon’s materials so it can be replaced easily