Inspiration
My team and I were fed up with watching people struggle to fix simple problems. Repairs that are often cheap and easy feel intimidating without prior knowledge. From small electronics to everyday devices, modern technology has become opaque and disposable, leading to unnecessary e-waste and lost self-reliance. MIDAS was born from the idea that broken tech does not need to be replaced. It just needs the right guidance and a golden touch.
What it does
Meet MIDAS, an AI- and AR-powered repair assistant that helps users fix broken devices through visual and voice-guided instructions. The user opens the MIDAS web app, grants camera and microphone access, and describes what is broken in their own words, giving the system contextual information. MIDAS then uses computer vision to analyze the device, identify key components, and diagnose the most likely fault. Using AR overlays and spoken step-by-step instructions, MIDAS guides the user through the repair in real time. Our MVP demonstrates this workflow on a broken computer mouse, guiding the user through identifying and fixing a common issue such as an unresponsive or misaligned click switch.
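To make the voice-context step concrete, here is a minimal sketch using the open-source whisper package. The model size, file name, and helper function are illustrative rather than our exact code, and in the app the audio clip arrives from the browser's recorder rather than a local file:

```python
# Minimal sketch: turn the user's spoken fault description into text
# that can be passed to the diagnosis step. Assumes the open-source
# "whisper" package; the path and model size are illustrative.
import whisper

model = whisper.load_model("base")

def transcribe_context(audio_path: str) -> str:
    """Transcribe the user's description of what is broken."""
    result = model.transcribe(audio_path)
    return result["text"].strip()

# e.g. "the left click sometimes double-clicks on its own"
print(transcribe_context("user_description.wav"))
```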
How we built it
We built MIDAS as a mobile-first web application, using React and Vite for the frontend and FastAPI for the backend. The computer vision pipeline uses YOLOv11 and OpenCV to detect components and fault regions in the camera feed. We integrated OpenAI GPT to dynamically generate repair instructions from the detected components and the user's voice context. Speech input is handled through Whisper and the Web Speech API, allowing hands-free interaction during repairs. Repair sessions and metadata are stored in Supabase on PostgreSQL. The AR experience overlays repair guidance directly on the device, creating an intuitive step-by-step repair flow.
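Here is a rough sketch of the detect-then-explain flow on the backend, assuming the Ultralytics YOLO package and the official OpenAI Python client. The endpoint name, weights file, prompt wording, and model choice are illustrative, not our exact production code:

```python
# Sketch of a FastAPI endpoint that runs detection on a camera frame
# and asks an LLM for repair steps grounded in what was detected.
import cv2
import numpy as np
from fastapi import FastAPI, File, Form, UploadFile
from openai import OpenAI
from ultralytics import YOLO

app = FastAPI()
model = YOLO("mouse_components.pt")  # hypothetical fine-tuned YOLOv11 weights
llm = OpenAI()                       # reads OPENAI_API_KEY from the environment

@app.post("/diagnose")
async def diagnose(frame: UploadFile = File(...), user_context: str = Form("")):
    # Decode the uploaded camera frame into a BGR image for inference.
    data = np.frombuffer(await frame.read(), np.uint8)
    image = cv2.imdecode(data, cv2.IMREAD_COLOR)

    # Detect components; the labeled boxes also drive the AR overlay.
    result = model(image)[0]
    detections = [
        {
            "label": result.names[int(box.cls)],
            "confidence": float(box.conf),
            "box": [float(v) for v in box.xyxy[0]],
        }
        for box in result.boxes
    ]

    # Ask the LLM for steps grounded in what the camera actually sees.
    prompt = (
        f"The user says: {user_context}\n"
        f"Detected components: {[d['label'] for d in detections]}\n"
        "Give numbered, beginner-friendly repair steps."
    )
    completion = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return {
        "detections": detections,
        "steps": completion.choices[0].message.content,
    }
```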
Challenges we ran into
One major challenge was aligning real-time computer vision with a web-based AR experience while keeping latency low. Training and fine-tuning object detection models for small hardware components was also difficult due to limited datasets. Another challenge was writing repair instructions that were both technically accurate and understandable to users with no prior repair experience. Balancing realism with hackathon stability required careful scoping and iteration.
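One common way to cut perceived latency is to downscale frames before inference and only run detection every few frames, reusing the last boxes for the overlay in between. The sketch below assumes a local OpenCV capture loop for simplicity (in the real app frames come from the browser), and the interval and width constants are illustrative:

```python
# Sketch: throttle detection to every Nth downscaled frame and reuse
# the last boxes for the overlay in between. Constants are illustrative.
import cv2
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
DETECT_EVERY = 5      # run the detector on every 5th frame
INFER_WIDTH = 640     # downscale before inference to cut per-frame cost

cap = cv2.VideoCapture(0)
last_boxes = []
frame_idx = 0

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % DETECT_EVERY == 0:
        scale = INFER_WIDTH / frame.shape[1]
        small = cv2.resize(frame, None, fx=scale, fy=scale)
        result = model(small, verbose=False)[0]
        # Map boxes back to full-resolution coordinates for drawing.
        last_boxes = [(box.xyxy[0] / scale).tolist() for box in result.boxes]
    for x1, y1, x2, y2 in last_boxes:
        cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 215, 255), 2)
    cv2.imshow("overlay preview", frame)
    frame_idx += 1
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```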
Accomplishments that we are proud of
We built a complete end-to-end pipeline that goes from scan to diagnosis to repair and verification. We successfully integrated computer vision, large language models, speech input, and AR into a single experience. We delivered a polished, demo-friendly MVP that demonstrates real-world impact within hackathon constraints.
What we learned
1) How to design and leverage AI workflows to maximize efficiency and reliability
2) How to combine computer vision, language models, and augmented reality (AR) into a cohesive user experience
3) The importance of tight scoping to ship a stable and impactful demo
What is next for MIDAS
1) Expanding to a wider variety of use cases, including keyboards, controllers, phones, and other consumer electronics
2) Exploring non-tech repairs such as basic automotive and household maintenance
3) Adding crowdsourced repair-success data to improve future diagnoses
4) Fully voice-guided repair sessions and beginner versus expert modes