We were inspired to create a product that could make people's lives easier, and we felt that helping the blind would be a great way to make an impact. We thought about the different struggles that people go through every day.
What it does
Our idea was to make a smart "lazy Susan" for the blind: users place objects at set locations on the table, AlwaysAI identifies each object, and when the user calls for an object by voice, the table rotates to bring that object back to the user at the same set location. The voice recognition would be handled by a web application that communicates with the DragonBoard.
In the end, however, our project was only able to rotate the table, which still lets users retrieve their items in a more convenient way.
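The slot-selection idea described above can be sketched in a few lines. This is an illustrative sketch, not our actual code: the names (`NUM_SLOTS`, `rotation_degrees`) and the eight-slot layout are assumptions.

```python
# Hypothetical model of the "lazy Susan" logic: each object sits in a
# fixed slot on the turntable, and the table rotates the named object's
# slot to the user's pickup position.

NUM_SLOTS = 8
DEGREES_PER_SLOT = 360 / NUM_SLOTS

def rotation_degrees(current_slot_at_user: int, target_slot: int) -> float:
    """Shortest signed rotation (in degrees) that brings target_slot
    to the user's pickup position."""
    steps = (target_slot - current_slot_at_user) % NUM_SLOTS
    if steps > NUM_SLOTS / 2:  # rotate the shorter way around
        steps -= NUM_SLOTS
    return steps * DEGREES_PER_SLOT
```

For example, with eight slots, asking for the object two slots away yields a 90-degree turn, while an object seven slots ahead is reached faster by turning 45 degrees backwards.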
How I built it
We built the project on a Qualcomm DragonBoard, which we used to drive the servo and generate the rotation pattern for the table. We also spent time working with the AlwaysAI API to perform object recognition.
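Driving a hobby servo from a single-board computer typically means generating a 50 Hz PWM signal whose pulse width encodes the angle. The sketch below shows that idea under standard assumptions (1.0 ms pulse = 0°, 2.0 ms = 180°); the `write_pin` callback and constants are illustrative, not our exact DragonBoard code.

```python
# Minimal software-PWM sketch for a standard hobby servo.
# On real hardware, write_pin(1)/write_pin(0) would toggle a GPIO line.
import time

PERIOD_S = 0.02       # 50 Hz servo frame
MIN_PULSE_S = 0.001   # pulse width at 0 degrees
MAX_PULSE_S = 0.002   # pulse width at 180 degrees

def pulse_width(angle_deg: float) -> float:
    """Map an angle in [0, 180] to a pulse width in seconds."""
    angle_deg = max(0.0, min(180.0, angle_deg))
    return MIN_PULSE_S + (angle_deg / 180.0) * (MAX_PULSE_S - MIN_PULSE_S)

def send_pulses(write_pin, angle_deg: float, frames: int = 50) -> None:
    """Bit-bang `frames` PWM frames to hold the servo at angle_deg."""
    high = pulse_width(angle_deg)
    for _ in range(frames):
        write_pin(1)
        time.sleep(high)
        write_pin(0)
        time.sleep(PERIOD_S - high)
```

Holding the pulse train for around 50 frames (one second) gives the servo time to reach the commanded angle before the line goes idle.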
Challenges I ran into
Our biggest challenge was setting up the DragonBoard's environment. The board did not ship with many of the libraries we needed, so the first 16 hours of the hackathon went into setup: installing Docker, Python, Java, Node.js, GPIO accessors, and more. We were ultimately unable to access the analog inputs, which blocked features such as connecting the camera to the AlwaysAI object recognition and linking that pipeline to the web application.
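On a Debian-based board image, that setup looks roughly like the following. This is a sketch, not our exact command history; package names (especially for GPIO access) vary by image and may need building from source.

```shell
# Update package lists and install the main toolchain pieces
sudo apt-get update
sudo apt-get install -y docker.io python3 python3-pip default-jdk nodejs npm

# Userspace GPIO access (assumed package name; 96Boards images differ,
# and some require building the GPIO library from source)
sudo apt-get install -y libsoc-dev
```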
Accomplishments that I'm proud of
We were proud to get the DragonBoard set up the way we did. Although several features were missing, we at least got it working with a camera and a servo, which is an accomplishment we're proud of.
What I learned
We learned that a hardware hack requires a lot of setup, which can make or break your project. However, we also learned how to work with Linux on the DragonBoard and how to set up a servo to rotate the table.
What's next for rotaTABLE
In the future, we would love to connect the camera to the AlwaysAI application, detect each object's name, and summon an object when it is called for through the web application. We would also like to push the table's state to a database so the web application can display it live on the laptop. Finally, we would upgrade the plate to something more stable than cardboard.