BootSense AI
Inspiration
Most computers communicate their status only through visual cues like logos, loading bars, and update screens. For blind users, this creates uncertainty and dependence on sighted assistance. BootSense AI was inspired by the need to turn these silent visual signals into clear spoken feedback.
What it does
BootSense AI looks at a laptop screen and tells the user, through voice, whether the device is powered off, booting, updating, ready to use, or stuck. It gives blind users immediate awareness of what their computer is doing.
How we built it
We used a camera connected to a Raspberry Pi to capture the laptop screen. Computer vision and OCR analyze the screen to detect visual patterns and system messages. A Trae AI voice agent converts the detected state into natural spoken responses.
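In rough terms, the detection loop looks like the sketch below. This is a simplification, assuming OpenCV and Tesseract via pytesseract; the keyword table, thresholds, and camera index are illustrative placeholders rather than our exact rules, and the hand-off to the Trae voice agent is stubbed out with a print:

```python
import cv2
import pytesseract

# Screen-text fragments that hint at each coarse state (illustrative, not exhaustive).
STATE_KEYWORDS = {
    "updating": ["working on updates", "installing", "do not turn off"],
    "booting": ["starting", "loading"],
    "ready to use": ["sign in", "password", "desktop"],
}

def classify_state(frame) -> str:
    """Map one camera frame of the laptop screen to a coarse device state."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if gray.mean() < 10:  # nearly black screen: likely powered off
        return "powered off"
    text = pytesseract.image_to_string(gray).lower()
    for state, keywords in STATE_KEYWORDS.items():
        if any(k in text for k in keywords):
            return state
    return "in an unknown state"

cap = cv2.VideoCapture(0)  # camera pointed at the laptop screen
ok, frame = cap.read()
cap.release()
if ok:
    # In the prototype this sentence goes to the Trae voice agent; here we just print it.
    print(f"Your laptop is {classify_state(frame)}.")
```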
Challenges we ran into
Laptop screens differ widely across devices, operating systems, and update flows, which made reliable state detection challenging. Another difficulty was avoiding false announcements, which could mislead rather than help users. We addressed this by announcing only high-confidence screen states.
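A minimal version of that "high-confidence only" rule might look like the following: speak a state only when OCR confidence is high and the same state persists across consecutive frames. The window size and the 60-point threshold here are assumptions for illustration, not our tuned values:

```python
from collections import deque

import pytesseract

def mean_ocr_confidence(gray) -> float:
    """Average Tesseract word confidence for a grayscale frame (0-100)."""
    data = pytesseract.image_to_data(gray, output_type=pytesseract.Output.DICT)
    scores = [float(c) for c in data["conf"] if float(c) >= 0]  # -1 marks non-text boxes
    return sum(scores) / len(scores) if scores else 0.0

recent_states = deque(maxlen=5)  # sliding window of recent classifications

def should_announce(state: str, gray) -> bool:
    """Suppress announcements until the detection is both stable and confident."""
    recent_states.append(state)
    stable = len(recent_states) == recent_states.maxlen and len(set(recent_states)) == 1
    return stable and mean_ocr_confidence(gray) > 60.0
```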
Accomplishments that we're proud of
- Built a working prototype in a single day
- Successfully detected multiple laptop states using vision
- Delivered clear, understandable voice feedback
- Created a truly hands-free accessibility tool
What we learned
We learned that small accessibility solutions can have a big impact. Clear communication matters more than complex features, and designing for blind users requires precision and empathy.
What's next for BootSense AI
We plan to support more operating systems, improve detection accuracy, and integrate the system directly into smart glasses for real-world daily use.
Built With
- edge
- ocr
- opencv
- python
- raspberry-pi
- trae