Inspiration
We wanted to do something with computer vision and pair it with a hardware component like a vehicle. Our group also wanted to bring some laughter and levity to such an intense competition, so we decided to fly a drone around to other contestants and give each of them a unique score based on their personal features.
What it does
The Mogulator is a web app that lets the user remotely control a drone. When the user clicks the "mog" button, the app starts capturing the live video feed from the drone, detects facial landmarks on people in the drone's line of sight, and gives each person a score based on certain criteria. It also detects when a person in the frame is flexing their jawline, which further increases their score.
How we built it
We streamed the drone's video feed to a Python server running locally on a laptop, which uses MTCNN to find facial landmarks and calculate the mog score. Factors in the score include canthal tilt, facial symmetry, and facial harmony, among others. The video is then streamed to a React webpage that includes buttons for controlling the drone.
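The scoring step can be sketched roughly like this. MTCNN's `detect_faces()` returns a `keypoints` dict with `left_eye`, `right_eye`, `nose`, `mouth_left`, and `mouth_right` coordinates; the weights and thresholds below are illustrative placeholders, not our actual formula.

```python
from math import atan2, degrees

def canthal_tilt(keypoints):
    """Angle of the eye line in degrees; positive when the right eye
    sits higher in the image (image y grows downward)."""
    (lx, ly), (rx, ry) = keypoints["left_eye"], keypoints["right_eye"]
    return degrees(atan2(ly - ry, rx - lx))

def symmetry(keypoints):
    """Crude left/right symmetry: 1.0 when the eyes and mouth corners
    are equidistant from the nose on the x axis, lower otherwise."""
    nose_x = keypoints["nose"][0]
    penalty = 0.0
    for left, right in [("left_eye", "right_eye"), ("mouth_left", "mouth_right")]:
        d_left = abs(keypoints[left][0] - nose_x)
        d_right = abs(keypoints[right][0] - nose_x)
        penalty += abs(d_left - d_right) / max(d_left + d_right, 1)
    return 1.0 - penalty / 2

def mog_score(keypoints):
    """Combine the factors into a 0-100 score (illustrative weights)."""
    tilt_component = max(0.0, min(canthal_tilt(keypoints), 10.0)) / 10.0
    score = 60.0 * symmetry(keypoints) + 40.0 * tilt_component
    return round(max(0.0, min(score, 100.0)), 1)

# Example keypoints in the format MTCNN emits for one detected face:
face = {
    "left_eye": (100, 120), "right_eye": (160, 116),
    "nose": (130, 145), "mouth_left": (108, 170), "mouth_right": (152, 170),
}
print(mog_score(face))
```

In the real pipeline these functions would run on every frame pulled from the drone feed before the annotated frame is forwarded to the frontend.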
Challenges we ran into
Figuring out how to resolve conflicting port usage when integrating our drone code with our website, and object-detection issues with the mogging.
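One way to debug a port clash like the one above is to probe for a free port before binding the video server, instead of hard-coding one that the frontend dev server may already hold. This is a hypothetical helper, not the fix we actually shipped:

```python
import socket

def first_free_port(start: int, tries: int = 10) -> int:
    """Return the first port in [start, start + tries) that we can bind."""
    for port in range(start, start + tries):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("127.0.0.1", port))
                return port  # bind succeeded, so the port is free
            except OSError:
                continue  # already taken (e.g. by the dev server); try the next
    raise RuntimeError("no free port found")

# e.g. pick a port for the Flask video server near the usual 5000
port = first_free_port(5000)
```

The chosen port can then be passed to `app.run(port=port)` and surfaced to the frontend so both processes stay out of each other's way.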
Accomplishments that we're proud of
We got better at using React.js to design our frontend, and discovered how to use Flask to hook Python scripts into our website. We also gained experience using OpenCV, working with a drone, and training our own machine learning model.
What we learned
We learned frontend and backend development, computer vision, and how to train AI models.
What's next for Mogulator
We want to make the scoring more accurate and consider more variables. It would also be great to improve the drone itself, perhaps even building our own next time.