[Screenshots: the main interface · output of the "Expressions" mode · the "Objects" mode in use · songs found by the "Objects" mode]
Our main goal was to use facial recognition to detect a user's emotions and play music to match them.
What it does
There are two modes: facial expression detection and object detection. The former detects the emotions you're feeling and describes them; the latter detects an object in the photo and plays a song about it. The app also features text-to-speech to read the results of the analysis out to the user. There's also a "Don't press me" button, but that's our little, spooky secret ;);););)
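The two modes can be sketched as a simple dispatch on the analysis results. This is a minimal, hypothetical sketch; the function and mode names below are illustrative, not the app's actual identifiers.

```python
def describe_emotions(emotions):
    """Turn detected emotion labels into a sentence for text-to-speech."""
    if not emotions:
        return "I couldn't read your expression."
    return "You look " + " and ".join(emotions) + "."

def handle_photo(mode, detections):
    """Dispatch on the selected mode (names are illustrative)."""
    if mode == "expressions":
        # "Expressions" mode describes the emotions it found
        return describe_emotions(detections)
    if mode == "objects":
        # "Objects" mode picks a song topic from the first detected object
        topic = detections[0] if detections else "music"
        return f"Playing a song about {topic}."
    raise ValueError(f"unknown mode: {mode}")
```

The returned sentence would then be handed to whatever text-to-speech engine the app uses.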
How we built it
We built it in Python, integrating Google Cloud Vision's object and face detection.
Challenges we ran into
- Spotify's Python API being outdated, so we searched for songs on YouTube instead.
- Building an interface with tkinter.
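The YouTube fallback could look something like the sketch below. We don't know the app's actual search code, so the function names are illustrative; it assumes the `yt-dlp` library, whose `ytsearchN:` prefix searches YouTube and returns matching entries.

```python
def song_query(topic, results=1):
    """Build a yt-dlp search target for a song about the detected object."""
    return f"ytsearch{results}:song about {topic}"

def find_song_url(topic):
    """Search YouTube for a matching song (requires `pip install yt-dlp`)."""
    import yt_dlp  # imported lazily so the pure helper above works without it
    opts = {"quiet": True, "noplaylist": True}
    with yt_dlp.YoutubeDL(opts) as ydl:
        info = ydl.extract_info(song_query(topic), download=False)
        return info["entries"][0]["webpage_url"]
```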
Accomplishments that we're proud of
- Had loads of fun!
- Working out how to use Google Cloud
- Figuring out a bit of tkinter
What we learned
- API usage (Google Cloud Vision)
What's next for Scannerama
- Integrating it with Spotify or another streaming service.