Inspiration

We wanted to build something that would help people with limited mobility or eyesight, and to make it as simple and intuitive as possible while still performing complex tasks. To this end, we designed a system that lets users locate items around their home that they might not normally be able to see.

What it does

Our fleet of robots lets the user speak the name of the object they are looking for; the robots then set off autonomously to track down the item. They report back to the user once they have found it, and the user can watch every step along the way via a live video stream. The user can also take manual control of the robots at any time.
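
To give a feel for the voice side, here is a minimal sketch of how a spoken query could be captured in the browser and handed to the server. It uses the Web Speech API; the server address and the "find-object" event name are placeholders for illustration, not our exact setup.

```js
// UI-side sketch: capture a spoken object name and send it to the server.
// The server URL and "find-object" event name are illustrative.
import { io } from "socket.io-client";

const socket = io("http://localhost:3000");

const SpeechRecognition =
  window.SpeechRecognition || window.webkitSpeechRecognition;
const recognition = new SpeechRecognition();
recognition.lang = "en-US";

recognition.onresult = (event) => {
  // Take the top transcript of the first result, e.g. "my keys".
  const query = event.results[0][0].transcript;
  socket.emit("find-object", { query });
};

// Hook this up to a "Start search" button in the UI.
export function startListening() {
  recognition.start();
}
```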

How we built it

The robots were built using laser-cut plates, a Raspberry Pi, DC motors, and a dual-voltage power system. On the software side, we used FireEye, a TCP/IP video-streaming library, to send video and data from the Raspberry Pi to our Node.js server. The server performed natural language processing to determine what the user was trying to find, and image processing to identify the object when the camera picked it up. The front end was built with React.js, with Socket.io handling communication between the server and the UI.
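
As a rough sketch of the video path, the server accepts frames from the Pi over TCP and relays them to the browser over Socket.io. The length-prefixed JPEG framing, ports, and event name below are simplified stand-ins for illustration, not the exact FireEye protocol.

```js
// Server-side sketch: receive JPEG frames from the robot over raw TCP and
// relay them to the browser via Socket.io. Framing, ports, and event names
// are simplified assumptions, not the exact FireEye protocol.
const net = require("net");
const { Server } = require("socket.io");

const io = new Server(8080, { cors: { origin: "*" } });

net
  .createServer((robot) => {
    let buffer = Buffer.alloc(0);

    robot.on("data", (chunk) => {
      buffer = Buffer.concat([buffer, chunk]);

      // Each frame: 4-byte big-endian length, then the JPEG bytes.
      while (buffer.length >= 4) {
        const frameLength = buffer.readUInt32BE(0);
        if (buffer.length < 4 + frameLength) break;

        const frame = buffer.subarray(4, 4 + frameLength);
        buffer = buffer.subarray(4 + frameLength);

        // Forward the frame to every connected UI client as base64.
        io.emit("video-frame", frame.toString("base64"));
      }
    });
  })
  .listen(9000, () => console.log("Waiting for the robot stream on :9000"));
```

On the browser side, each "video-frame" payload can be dropped straight into an image source as a base64 data URL to produce the live view.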

Challenges we ran into

We ran into many challenges. Many. Our first problems lay in getting a consistent video stream from the robot to our server, and things only grew more difficult from there. We faced challenges communicating data from our server to the robot, and from our server to the front-end UI. We also had very little experience designing user interfaces and ran into many implementation problems. Additionally, this was our first project coded in Node.js, which we learned is substantially different from Python. (Looking back, Python probably should have been the way to go...)

Accomplishments that we're proud of

We are particularly proud of the overall tech stack we ended up using. There were many technologies that we had to get working individually, and then get communicating with each other, before our system became functional. We learned about TCP and WebSockets, how to code within hardware constraints, and how to perform cloud-based image processing.
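
As an illustration of the cloud image-processing step, the sketch below asks a vision API whether the requested object appears in a frame. Google Cloud Vision's Node.js client is used here purely as an example backend, and the matching logic is deliberately naive.

```js
// Sketch: check whether a camera frame contains the object the user asked for.
// Google Cloud Vision is an illustrative backend; thresholds are arbitrary.
const vision = require("@google-cloud/vision");
const client = new vision.ImageAnnotatorClient();

async function frameContains(frameBuffer, targetName) {
  const [result] = await client.labelDetection({
    image: { content: frameBuffer.toString("base64") },
  });

  // Report a hit if any confident label matches the spoken object name.
  return result.labelAnnotations.some(
    (label) =>
      label.score > 0.7 &&
      label.description.toLowerCase().includes(targetName.toLowerCase())
  );
}
```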

What we learned

We learned a substantial amount overall, mostly about socket programming and how to share stateful data across multiple components. We also learned how to work within the constraints of network speed and Raspberry Pi processing power, which led us to multi-thread our programs so they run more efficiently.
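
One way to keep multiple components looking at the same stateful data is to have the server own a single status object and re-broadcast it on every change, so the UI always sees the current snapshot. The sketch below shows the idea; the fields and event names are illustrative, not our exact schema.

```js
// Sketch of a server-owned shared state that is re-broadcast on every change.
// Field names and the "state" event are illustrative.
const { Server } = require("socket.io");
const io = new Server(8080, { cors: { origin: "*" } });

const state = {
  target: null,   // object the user asked for
  mode: "idle",   // "idle" | "searching" | "found" | "manual"
  foundBy: null,  // which robot spotted the target
};

function updateState(changes) {
  Object.assign(state, changes);
  io.emit("state", state); // every connected client sees the same snapshot
}

io.on("connection", (client) => {
  client.emit("state", state); // bring late joiners up to date immediately
});

// e.g. updateState({ target: "keys", mode: "searching" });
```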

What's next for MLE

We would like to add a robot arm so the robots can retrieve and interact with the objects they are searching for. We would also like to make the robots larger so they can navigate more effectively. We also plan to increase the overall speed of the system and to reduce network and streaming latency.
