Inspiration
We were inspired to create TrayMax when we realized we all experienced the same inconvenience: constantly walking back and forth between our workspace or workshop to retrieve a tool or part we forgot on our last trip.
What it does
TrayMax is an engineer’s storage assistant that utilizes object detection to follow its owner and open containers based on hand gestures.
How you built it
For the software, we used a combination of Python and C++ to interact with both the NVIDIA Jetson Orin Nano Developer Kit (a small, powerful computer with AI processing) and the ESP32. We fine-tuned a YOLO object detection model and used the MediaPipe Hands package to detect people and their positions, along with the specific articulation of their hands for gesture handling. For the hardware, we created the 3D model in Fusion 360 and printed the chassis on an Ultimaker 3. To save time, we used empty 3D printing filament spools as the base of the shelves and laser cut wood pieces that were otherwise too big to fit on the print bed. The NVIDIA Jetson Orin Nano Developer Kit powered our computer vision pipeline, and two 3V motors drove the device.
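As a rough illustration of the gesture-handling step, MediaPipe Hands returns 21 normalized (x, y) landmarks per hand, and a simple heuristic can distinguish an open palm from a fist by checking whether each fingertip sits above its PIP joint. This sketch is our own simplification, not the exact logic used on the robot, and the landmark indices follow MediaPipe's published layout:

```python
# Hypothetical sketch: classifying "open palm" vs. "fist" from
# MediaPipe Hands landmarks -- 21 (x, y) points normalized to [0, 1],
# with y increasing downward in image coordinates.

FINGER_TIPS = [8, 12, 16, 20]   # index, middle, ring, pinky fingertips
FINGER_PIPS = [6, 10, 14, 18]   # the corresponding PIP joints

def classify_gesture(landmarks):
    """Return 'open' if most fingers are extended, else 'fist'.

    landmarks: list of 21 (x, y) tuples, as produced by MediaPipe Hands.
    A finger counts as extended when its tip is above (smaller y than)
    its PIP joint -- a rough heuristic that assumes an upright hand.
    """
    extended = sum(
        1 for tip, pip in zip(FINGER_TIPS, FINGER_PIPS)
        if landmarks[tip][1] < landmarks[pip][1]
    )
    return "open" if extended >= 3 else "fist"
```

On the robot, the classified gesture would then be mapped to an action such as opening a container; thumb handling is omitted here because the thumb extends sideways and needs a different check.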
Challenges you ran into
Getting the hardware to interact with the software proved challenging, since we could not get the Jetson to both run object detection and control the motors. Because of this, we split the responsibilities: the Jetson handles object detection, and an ESP32 drives the motors.
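A split like this typically means the Jetson sends simple motor commands to the ESP32 over a serial link. The framing below ("M,&lt;left&gt;,&lt;right&gt;\n") and the port name are assumptions for illustration, not our actual protocol:

```python
# Hypothetical sketch of the Jetson -> ESP32 link: the Jetson runs the
# vision pipeline and sends plain-text motor commands over serial; the
# ESP32 parses them and drives the two motors.

def make_motor_command(left, right):
    """Encode a differential-drive command as ASCII bytes.

    Speeds are clamped to [-255, 255] (a typical PWM range); the
    "M,<left>,<right>\n" framing is an illustrative choice.
    """
    clamp = lambda v: max(-255, min(255, int(v)))
    return f"M,{clamp(left)},{clamp(right)}\n".encode("ascii")

# On the Jetson side this would be written out with pyserial, e.g.:
#   import serial
#   port = serial.Serial("/dev/ttyUSB0", 115200)  # device path is an assumption
#   port.write(make_motor_command(120, 120))      # drive forward
```

Keeping the wire format to human-readable text makes it easy to debug from a serial monitor while the ESP32 firmware is being developed.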
Accomplishments you are proud of
We are proud of our webcam integration feature that allowed the robot to “see” and recognize humans and hand gestures. We are also proud that we built the storage system out of old plastic filament spools. This means that each TrayMax would be reusing several filament spools and saving them from being thrown out.
What you learned
We learned that preparation is key and that we should know how to interact with our tools before we bring them. We spent a majority of our time figuring out how to interface with the hardware; some prior research on common components could have saved us a lot of time and energy that could have been focused elsewhere.
Next steps for the project
Our next steps are to continue adding software features to the robot (voice commands, pathfinding, etc.) and to physically scale it up to be more suitable as a storage system. We also want to add a servo to each storage container so that it can be opened automatically with hand gestures.