Disclaimer:

Due to issues with our backend, the ML model could not run in the application. However, the computer vision component was fully working on our end. The following answers are therefore more concept-based:

What is EvoSense?

At present, we have created an app to showcase our camera technology, since an app was the most practical way to demonstrate it. Our ultimate goal, however, is to incorporate this technology into glasses for the visually impaired. The camera technology, called EvoSense, is designed to bolster communication and emotional comprehension. It acts as a link between non-verbal cues and a better understanding of emotions, granting individuals a fresh perspective on the emotional dynamics of their environment.

Inspiration

One of the most significant barriers that visually impaired individuals face is the inability to detect emotional and physical cues, leading to a lack of social interaction. Here at EvoSense, we are working to change that. By utilizing AI, machine learning, and computer vision, we have created a user-friendly solution to empower visually impaired individuals and give them a layer of interaction they so greatly need.

What it does

Our application assists people with visual disabilities in decoding the body language and hand gestures of the person in front of them. So far, it can detect four emotions (neutral, happy, sad, and surprised) and two hand gestures (waving hello/goodbye and offering a handshake). The identified cue is then read aloud so the visually impaired person can hear it through earphones.
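As a rough illustration of the last step, a detected label can be turned into speech with an off-the-shelf text-to-speech library. The snippet below is a minimal sketch only; it assumes the pyttsx3 library and hypothetical label names, neither of which is specified in this write-up.

```python
# Illustrative sketch: speak a predicted label aloud using pyttsx3.
# The library choice and the label/phrase names are assumptions, not
# the project's actual audio stack.
import pyttsx3

PHRASES = {
    "happy": "The person looks happy.",
    "sad": "The person looks sad.",
    "surprised": "The person looks surprised.",
    "neutral": "The person looks neutral.",
    "wave": "The person is waving hello or goodbye.",
    "handshake": "The person wants to shake hands.",
}

def speak_label(label: str) -> None:
    """Convert a classifier label into an audible phrase."""
    engine = pyttsx3.init()
    engine.say(PHRASES.get(label, label))
    engine.runAndWait()
```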

How we built it

ML model: The MediaPipe Holistic model and OpenCV were used to build the facial and pose recognition pipeline. Using NumPy and the csv module, we collected the landmark coordinates for poses and facial expressions. This data was then processed with pandas, classified with scikit-learn's Logistic Regression algorithm, and the trained model was exported with the pickle module.
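The sketch below outlines that pipeline under stated assumptions: MediaPipe Holistic and OpenCV extract pose and face landmarks from webcam frames, the rows are written to a CSV, and a scikit-learn Logistic Regression classifier is trained and pickled. File names such as coords.csv and body_language.pkl, and the exact feature layout, are illustrative rather than the project's actual files.

```python
# Hypothetical sketch of the described pipeline: landmark collection with
# MediaPipe Holistic + OpenCV, training with pandas + scikit-learn, export
# with pickle. File names and feature layout are assumptions.
import csv
import pickle

import cv2
import mediapipe as mp
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

mp_holistic = mp.solutions.holistic


def extract_row(results):
    """Flatten pose and face landmarks into one feature row."""
    pose = results.pose_landmarks.landmark
    face = results.face_landmarks.landmark
    row = []
    for lm in list(pose) + list(face):
        row += [lm.x, lm.y, lm.z, lm.visibility]
    return row


def collect_samples(label, num_frames=200, out_path="coords.csv"):
    """Capture webcam frames, run Holistic, and append labelled rows to a CSV."""
    cap = cv2.VideoCapture(0)
    with mp_holistic.Holistic() as holistic, open(out_path, "a", newline="") as f:
        writer = csv.writer(f)
        for _ in range(num_frames):
            ok, frame = cap.read()
            if not ok:
                break
            results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.pose_landmarks and results.face_landmarks:
                writer.writerow([label] + extract_row(results))
    cap.release()


def train_and_export(csv_path="coords.csv", model_path="body_language.pkl"):
    """Fit a Logistic Regression classifier on the landmark CSV and pickle it."""
    df = pd.read_csv(csv_path, header=None)
    X, y = df.iloc[:, 1:], df.iloc[:, 0]
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X, y)
    with open(model_path, "wb") as f:
        pickle.dump(model, f)
```

At inference time, the same landmark extraction would feed `model.predict()` on a single row, and the resulting label would be handed to the text-to-speech step described above.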

Challenges we ran into

The main challenge we ran into during the creation of our web app was integrating the ML model. This included not only getting the Holistic model running in the app but also converting its outputs into audio that the user can hear.

Accomplishments that we're proud of

We are proud that we were able to create a user-friendly application to empower the visually impaired through better social interactions. We celebrate the positive impact our product can have on users' emotional experiences, social engagement, and quality of life.

What's next for EvoSense

As we look forward, our ambitions soar. We plan to integrate this technology into smart glasses that seamlessly embody EvoSense's power. These glasses will discreetly house advanced cameras in the front, capturing a world full of emotions. Through finely tuned audio cues transmitted via the ear speaker, users will directly sense emotions. Imagine the visually impaired gaining not just independence but an intuitive link to emotions, fostering profound connections and reshaping their interaction with the world.

EvoSense is not just a project; it's a vision for a more inclusive future. By leveraging technology, we aim to empower the visually impaired community, facilitating meaningful connections and enriching lives.
