Inspiration

Imagine going through your everyday conversations – and not being able to read body language, social cues, or emotions.

This is a reality for people with autism.

Unfortunately, this often leads to people with high-functioning autism being socially excluded, from early education to the workforce and beyond – not because they do not understand emotion, but because they need to be told explicitly how someone is feeling in order to respond accordingly.

Our team set out to provide that information discreetly and in real time: a way to check the dominant emotion of a conversation at a glance.

What it does

Ideally, emote would use technology like Snap Inc.'s Spectacles or Google Glass to feed a stream of images through our emotion interpreter in real time, resulting in a notification on a smartwatch or phone.

The notification would be bold and visual: an emoji paired with a word, ranging from anger to happiness. Though our interpreter recognizes both positive and negative emotions, we want to focus on sending notifications for negative emotions (such as contempt, fear, or sadness) that could damage relationships at home, at school, or in the workplace if they aren't addressed.

Each user would have the freedom to tailor their experience with emote. For example, you could choose to turn off push notifications during conversations, but still check a time log of your conversations at any point in the day. (Think: a diary of each day’s social interactions and the emotions behind them.)

How we built it

What is a pirate’s favorite programming language? R.

That’s also what we wrote our backend in!

We used the Microsoft Face API to identify the dominant emotion in an image, then used a separate library to send that emotion as an email notification. We exposed the facial analysis function as a RESTful API endpoint and built a frontend that hits it with an HTTP GET request, passing the image URL as a query parameter.
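
Below is a minimal sketch of how a backend like this could be wired up in R with the plumber and httr packages. It is not our exact code: the Face API region, the FACE_API_KEY environment variable, and the /emotion route are placeholder names, and the email notification step is left out.

```r
# emote.R – hypothetical plumber API (route name, region, and env var are placeholders)
library(httr)

#* Return the dominant emotion detected in an image
#* @param url the image URL to analyze
#* @get /emotion
function(url) {
  # Ask the Microsoft Face API for emotion scores on any detected faces.
  resp <- POST(
    "https://westus.api.cognitive.microsoft.com/face/v1.0/detect",
    query = list(returnFaceAttributes = "emotion"),
    add_headers("Ocp-Apim-Subscription-Key" = Sys.getenv("FACE_API_KEY")),
    body = list(url = url),
    encode = "json"
  )
  faces <- content(resp)

  # No face found: nothing to report.
  if (length(faces) == 0) {
    return(list(emotion = "none"))
  }

  # Pick the emotion with the highest confidence score on the first face.
  scores <- unlist(faces[[1]]$faceAttributes$emotion)
  list(emotion = names(which.max(scores)))
}
```

Serving the file with `plumber::plumb("emote.R")$run(port = 8000)`, the frontend's GET request would look something like `http://localhost:8000/emotion?url=<image-url>`.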

The pictures we used are not our own; they are in the public domain.

Challenges we ran into

Our team had never used image recognition software, worked with a Pebble, or built a web app before.

This hackathon was certainly one of firsts.

Another challenge was the combination of time constraints and hardware – we'd love to stream pictures from Spectacles or Google Glass, but that hardware wasn't available to us. We also wanted to run sentiment analysis on text recordings of conversations, but the Pebble lacks many of the components needed for that step.

What's next for emote

We've gotten a lot of interest from our awesome friends and family over the past 36 hours, and we would love to continue our work and make this project a reality. It is especially meaningful for Elle and Tyler, who have relatives with high-functioning autism and cerebral palsy.

If you're interested in our project, send us a message! We'd love any advice, perspective, or collaboration you can send our way.
