Inspiration
What if there were sensors that observed the world for us when we couldn't? That question led us toward assistive technology for visual impairment. We set out to build a sensor hub that helps blind users by channeling information about their surroundings through their other senses.
What it does
Our sensors work together as a system the wearer can rely on to navigate a space.
How we built it
We based our system on the Arduino Uno Q, a really interesting board that combines an Arduino microcontroller with a Linux system on a single device.
Challenges we ran into
We faced many challenges, including hardware constraints, part shortages, and software and network failures, but we pushed through all of them with really creative solutions!
The most technical parts of the project were the circuit design and the software we wrote to orchestrate the circuitry.
Accomplishments that we're proud of
Our harness looks awesome! We got all of our sensors working at once, and we swapped in our own CV model (YOLOv8-nano) in place of the Arduino App Lab's standard model for video object detection. We also managed to consolidate everything as planned in our design, and ended up with something scalable to improve upon later!
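To give a flavor of how detections from a model like YOLOv8-nano could become guidance for the wearer (the labels, thresholds, and phrasing here are illustrative assumptions, not our exact pipeline), one simple approach maps each detection's normalized horizontal position to a coarse direction:

```python
# Hypothetical sketch: turn object detections, given as (label, x_center)
# pairs where x_center is the normalized [0, 1] horizontal center of the
# bounding box, into short spoken-style cues. Thresholds are assumed.

def direction(x_center: float) -> str:
    """Map a normalized horizontal position to a coarse direction."""
    if x_center < 0.33:
        return "on your left"
    if x_center > 0.67:
        return "on your right"
    return "ahead"

def detections_to_cues(detections: list[tuple[str, float]]) -> list[str]:
    """Convert (label, x_center) detections into audible-cue strings."""
    return [f"{label} {direction(x)}" for label, x in detections]

print(detections_to_cues([("person", 0.2), ("chair", 0.5)]))
# → ['person on your left', 'chair ahead']
```

The same idea extends to vertical position or estimated distance once a depth signal is available.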
What we learned
We learned a lot about edge machine learning, sensor orchestration, telemetry, the interaction between Linux and microcontroller systems, training YOLO models, programming the Arduino Uno Q board and wielding the Arduino App Lab for fast microcontroller development, how to achieve sleep deprivation, and how to build something really cool!
What's next for ARIA
The machine learning model could be trained to recognize many more things in the environment, such as stairs or ramps (fine-tuning and retraining the model is a hassle to do in 36 hours). With the right hardware, the system could deliver audio and mp3 cues through better output devices so the visually impaired receive clearer audible instructions. We could also employ specialized models for depth perception, and add LiDAR sensors for a full, 360-degree sensory description of the environment. Finally, we could improve the design of the harness itself, making it more compact and comfortable to wear for longer periods.