Inspiration
We took inspiration from Back to the Future's vision of what future technology might look like, imagining how a common household item, the mirror, might evolve within the next century.
What it does
By detecting the user in real time, our system overlays different outfits (popular in the year 3024) onto the person, allowing them to visualize their options for their outfit of the day. It also displays the current date (10/27/3024) and headlines for that day.
How we built it
Our hardware consists of a two-way mirror with a monitor hidden behind it, along with a webcam to capture real-time footage.
Our software consists of the following:
YOLO and Roboflow for object detection and mapping out the user's body.
OpenCV and Python to process the webcam input and align the digital outfits (a minimal sketch of this loop follows the list).
HTML/CSS/JavaScript for the visual interface.
DALL-E to generate the futuristic clothing items.
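To make this concrete, here is a minimal sketch of the core loop. It assumes the Ultralytics YOLO API with off-the-shelf pose weights as a stand-in for our Roboflow-trained model; the outfit file, shoulder-anchoring logic, and window name are illustrative placeholders, not our exact implementation.

```python
import cv2
import numpy as np
from ultralytics import YOLO  # assumption: Ultralytics pose weights stand in for our trained model

def overlay(frame, art, cx, cy):
    """Alpha-blend a BGRA outfit image centered at (cx, cy), clipped to the frame."""
    h, w = art.shape[:2]
    x0, y0 = cx - w // 2, cy - h // 2
    fx0, fy0 = max(x0, 0), max(y0, 0)
    fx1, fy1 = min(x0 + w, frame.shape[1]), min(y0 + h, frame.shape[0])
    if fx0 >= fx1 or fy0 >= fy1:
        return frame  # outfit entirely off-screen
    crop = art[fy0 - y0:fy1 - y0, fx0 - x0:fx1 - x0]
    alpha = crop[:, :, 3:4].astype(np.float32) / 255.0  # per-pixel transparency
    roi = frame[fy0:fy1, fx0:fx1].astype(np.float32)
    frame[fy0:fy1, fx0:fx1] = (alpha * crop[:, :, :3] + (1 - alpha) * roi).astype(np.uint8)
    return frame

model = YOLO("yolov8n-pose.pt")                          # placeholder weights
outfit = cv2.imread("outfit.png", cv2.IMREAD_UNCHANGED)  # hypothetical DALL-E asset with alpha

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    result = model(frame, verbose=False)[0]
    if result.keypoints is not None and len(result.keypoints.xy) > 0:
        kp = result.keypoints.xy[0].cpu().numpy()  # COCO order: 5 = left shoulder, 6 = right
        cx = int((kp[5][0] + kp[6][0]) / 2)
        cy = int((kp[5][1] + kp[6][1]) / 2)
        frame = overlay(frame, outfit, cx, cy)     # anchor the outfit between the shoulders
    cv2.imshow("CompileToStyle", frame)
    if cv2.waitKey(1) == 27:                       # Esc exits
        break
cap.release()
cv2.destroyAllWindows()
```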
Challenges we ran into
There were a LOT of alignment issues in the beginning, which had us repeatedly retraining our model on newer data to improve its accuracy. Lighting was also a problem, which we mitigated by training on models (different hackers who volunteered) placed under a variety of lighting conditions. The biggest hurdle we overcame was connecting our backend to the frontend and figuring out the best way to do it while maintaining acceptable latency.
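Our exact backend-to-frontend wiring isn't reproduced here, but a minimal sketch of the kind of low-latency pattern involved is streaming processed frames as MJPEG over HTTP with Flask; the route name and port are hypothetical.

```python
import cv2
from flask import Flask, Response

app = Flask(__name__)
cap = cv2.VideoCapture(0)

def frames():
    """Yield JPEG-encoded frames as a multipart stream a browser can render."""
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # ...detection and outfit overlay would run on `frame` here...
        ok, jpg = cv2.imencode(".jpg", frame)
        if not ok:
            continue
        yield (b"--frame\r\n"
               b"Content-Type: image/jpeg\r\n\r\n" + jpg.tobytes() + b"\r\n")

@app.route("/video_feed")  # hypothetical endpoint consumed by the HTML interface
def video_feed():
    return Response(frames(), mimetype="multipart/x-mixed-replace; boundary=frame")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

On the frontend side, a plain `<img src="/video_feed">` tag is enough to render such a stream, which keeps buffering (and therefore latency) to roughly a single frame.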
Accomplishments that we're proud of
Successfully training our model using only our own data was a massive achievement, as we had a small pool to work with. Accurately mapping out the body and overlaying the images onto it was a big win as well. But overall, the fact that we completed this from scratch in under 24 hours was our biggest cause for celebration. :)
What we learned
We all had minimal prior experience with object detection, so this project took our understanding of it to the next level. The creativity behind the project and our collaborative nature made this feel less like a bore and more like a fun passion project, and we'd all like to continue working and learning together in the future. We've seen firsthand how massive the potential of AR interfaces is, as well as the challenges and time it takes to build a successful tool.
What's next for CompileToStyle
We'd like to expand the outfit options (pants, headwear, jewelry, etc.) and add gesture controls that let the user interact with the mirror, such as cycling through outfits with the swipe of a hand (sketched below).
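As a sketch of one way the swipe gesture could work: track a wrist keypoint's horizontal position over recent frames and fire an event when it travels far enough within the window. The window size, travel threshold, and the wrist-coordinate feed are placeholders, not tuned values.

```python
from collections import deque

class SwipeDetector:
    """Detect a horizontal swipe from a per-frame stream of wrist x-coordinates (pixels)."""

    def __init__(self, window=10, min_travel=150):
        self.xs = deque(maxlen=window)  # most recent wrist positions
        self.min_travel = min_travel    # pixels the wrist must travel within the window

    def update(self, wrist_x):
        """Feed one x-coordinate per frame; returns 'left', 'right', or None."""
        self.xs.append(wrist_x)
        if len(self.xs) < self.xs.maxlen:
            return None
        travel = self.xs[-1] - self.xs[0]
        if abs(travel) >= self.min_travel:
            self.xs.clear()             # debounce so one gesture fires one event
            return "right" if travel > 0 else "left"
        return None

# Hypothetical usage inside the main loop, with kp from the pose model:
#   direction = detector.update(kp[10][0])  # COCO keypoint 10 = right wrist
#   if direction:
#       current_outfit = (current_outfit + 1) % len(outfits)
```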