Inspiration

Imagine that, all of a sudden, you lost the ability to move your hands and feet. You are limited by your body, constrained in your mind, grounded by your physical incapabilities. This, unfortunately, is the case for millions of people around the world who suffer from impaired motor function and muscle deficiencies. eMotion seeks to revolutionize the interaction between humans and computers by detecting one's facial expressions. Even if you can no longer move a mouse or type on a keyboard, you can still control your facial expressions, and that is all you need to interact with the world the way an able-bodied person would.

What it does

This application lets users control a computer with their facial expressions. As a basic example of this kind of interface, our application lets the user move their computer's mouse pointer using facial expressions. We use Azure's emotion detection service to determine which expression the user is making. The result is a convenient interface that can be applied to many real-world problems.
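
The core loop is conceptually simple: grab a webcam frame with OpenCV, ask Azure which expression it sees, and translate the result into a cursor movement with pyautogui. Below is a minimal sketch of that loop; the endpoint, the placeholder subscription key, and the specific emotion-to-movement mapping are illustrative assumptions, not our exact configuration.

```python
# Minimal sketch of the expression-to-cursor loop, assuming the (now retired)
# Azure Emotion API v1.0 REST endpoint and a placeholder subscription key.
# The emotion-to-movement mapping below is illustrative only.
import cv2
import requests
import pyautogui

EMOTION_URL = "https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize"  # assumed region/endpoint
SUBSCRIPTION_KEY = "<your-azure-key>"  # placeholder

# Map a dominant emotion to a cursor displacement in pixels (illustrative).
MOVES = {
    "happiness": (0, -20),   # smile -> move up
    "sadness":   (0,  20),   # frown -> move down
    "surprise":  (20,  0),   # raised brows -> move right
    "anger":     (-20, 0),   # furrowed brows -> move left
}

def dominant_emotion(jpeg_bytes):
    """POST one frame to the Emotion API and return the highest-scoring emotion."""
    headers = {
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Content-Type": "application/octet-stream",
    }
    resp = requests.post(EMOTION_URL, headers=headers, data=jpeg_bytes)
    resp.raise_for_status()
    faces = resp.json()
    if not faces:
        return None
    scores = faces[0]["scores"]          # e.g. {"happiness": 0.93, "anger": 0.01, ...}
    return max(scores, key=scores.get)

cap = cv2.VideoCapture(0)
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        ok, jpeg = cv2.imencode(".jpg", frame)
        if not ok:
            continue
        emotion = dominant_emotion(jpeg.tobytes())
        if emotion in MOVES:
            dx, dy = MOVES[emotion]
            pyautogui.moveRel(dx, dy)    # nudge the cursor relative to its current position
finally:
    cap.release()
```

Using pyautogui.moveRel() means each detected expression produces a small, repeatable step rather than a jump, which keeps the pointer controllable even when detections arrive at an uneven rate.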

How we built it

We first attempted (many times) to settle on an idea that all of our members would be happy pursuing. It took us about a whole day to choose a project; needless to say, there was a lot of pivoting. The technology we were all interested in using was OpenCV. We thought it was an exciting piece of software and wanted to learn it. Our first OpenCV idea was a trained model that could detect violence, which could serve as a security tool that alerts the authorities when violence occurs. Ultimately, we found out that this idea had already been done, but we were still committed to OpenCV. We kept brainstorming and came up with the idea of facial expressions as an interface: simply put, let users use their face as a controller. We wanted to build a simple example that demonstrated this kind of interface.

Challenges we ran into

Our focus on creating a real-time application meant that we could not use the existing video sentiment analysis service provided by the Emotion API; instead, we had to continuously upload frame after frame to Microsoft Azure for its API to process. As we refined our code, we realized that making constant POST requests to the Microsoft Azure Emotion API was incredibly slow, mainly because 1) the server is far away on the west coast, and 2) Azure takes time to respond, especially when we are "bombarding" it with a steady stream of frames. Unfortunately, this was not something we solved at this hackathon. In the future, we plan to build a local, machine-learning-based sentiment analysis algorithm that removes this latency, thereby drastically improving the speed and user-friendliness of eMotion.
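
To make the bottleneck concrete, here is a rough, self-contained sketch of how the latency shows up, again assuming the same retired Emotion API endpoint and a placeholder key; it simply times one frame's round trip.

```python
# Rough sketch: time the network round trip for a single frame sent to the
# (assumed) Azure Emotion API endpoint, using a placeholder subscription key.
import time
import cv2
import requests

EMOTION_URL = "https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize"  # assumed endpoint
HEADERS = {
    "Ocp-Apim-Subscription-Key": "<your-azure-key>",   # placeholder
    "Content-Type": "application/octet-stream",
}

# Capture one frame from the default webcam and encode it as JPEG.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
ok, jpeg = cv2.imencode(".jpg", frame)

# Time a single POST request; every frame in the live loop pays this full cost.
start = time.perf_counter()
resp = requests.post(EMOTION_URL, headers=HEADERS, data=jpeg.tobytes())
elapsed = time.perf_counter() - start

print(f"one frame round trip: {elapsed:.2f}s (HTTP {resp.status_code})")
```

Because each frame pays the full network round trip, the cloud-only approach cannot keep up with smooth, real-time cursor control, which is what motivates the on-device model we want to build next.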

Accomplishments that we're proud of

This was the first hackathon where all of us worked with entirely new teammates. Although some of us knew each other before coming here, none of us had ever worked together on a team, let alone at a hackathon. We came into this hackathon with different interests and ideas, and resolving those differences took longer than we expected. After numerous brainstorming sessions and debates, we finally settled on one idea and coordinated so that each team member could contribute their specialties to the team. Being able to organize the team efficiently and bring everyone together is something each and every one of us is really proud of.

What we learned

We explored a plethora of Python modules, such as OpenCV, pyautogui, and Tkinter. OpenCV was something most of our team had no experience with, but we all thought it was an interesting piece of software. We spent a lot of time reading the API documentation and trying to learn as fast as we could. We also had to get familiar with Microsoft's Azure cloud service. Overall, we learned a lot, including non-technical lessons such as managing our time effectively. And most importantly, we all had fun along the way. :)

What's next for eMotion

eMotion is just the first step toward a better future for patients with severe loss of motor function. Patients suffering from incomplete locked-in syndrome or spinal cord injuries can use eMotion as a basis for interacting with the outside world without any significant bodily movement.

Furthermore, we seek to improve upon current rehabilitation strategies for patients with facial paresis by creating a real-time, engaging video game based on eMotion. This game, similar to "Dance Dance Revolution" in arcades, will present a series of prompts that lead the user through a variety of facial expressions while music plays in the background. This offers a fun and novel way not only for patients to enjoy the treatment process but also to facilitate neural plasticity and help re-form neuronal connections. Similar (though admittedly less fun) treatments have previously been shown to be effective in several published articles:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2884696/pdf/sps18047.pdf
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4433000/

Built With

python, opencv, pyautogui, tkinter, microsoft-azure (Emotion API)