Inspiration
Every year, millions of people suffer concussions without even realizing it. We wanted to tackle this problem with a simple tool that anyone can use, so that these people get proper care.
What it does
Harnessing computer vision and NLP, we developed a tool that can administer a concussion test in under 2 minutes, anywhere, using just a smartphone. The test consists of multiple cognitive tasks that together give a complete assessment of the user's condition.
How we built it
The application is a React Native app that installs on both iOS and Android devices. The app sends footage of the user's eyes to our servers, where a neural network model analyzes the user's pupils; its output is then passed through a computer vision algorithm written in OpenCV. The test also includes a reaction game built in React Native to measure reflexes, a speech-analysis step that uses a Google Cloud Platform API, and a questionnaire about the user's concussion symptoms.
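As a rough illustration of the speech step, the sketch below assumes Google Cloud's Speech-to-Text Python client; the audio encoding, sample rate, and the clarity-scoring heuristic are illustrative placeholders rather than our production values.

```python
# Sketch of the speech-analysis step (simplified).
# Assumes the Google Cloud Speech-to-Text Python client; the encoding,
# sample rate, and scoring heuristic are illustrative placeholders.
from google.cloud import speech


def transcribe_speech_sample(audio_bytes: bytes) -> speech.RecognizeResponse:
    """Send a short recorded phrase to Cloud Speech-to-Text."""
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
        enable_word_confidence=True,  # per-word confidence helps flag slurred speech
    )
    audio = speech.RecognitionAudio(content=audio_bytes)
    return client.recognize(config=config, audio=audio)


def score_speech_clarity(response: speech.RecognizeResponse) -> float:
    """Average word confidence as a crude clarity score (placeholder metric)."""
    confidences = [
        word.confidence
        for result in response.results
        for word in result.alternatives[0].words
    ]
    return sum(confidences) / len(confidences) if confidences else 0.0
```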
Challenges we ran into
Getting a proper segmentation of the pupil was very tricky because of how small it is. We also hit a few minor bugs with our Google Cloud integration.
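For context, a minimal sketch of the thresholding-and-contour style of approach we iterated on is shown below; the threshold value and size limits are illustrative placeholders, not our tuned parameters.

```python
# Minimal sketch of pupil segmentation on a cropped eye image.
# Threshold and size limits are illustrative, not our tuned parameters.
import cv2
import numpy as np


def segment_pupil(eye_bgr: np.ndarray):
    """Return (center, radius) of the darkest roughly-circular blob, or None."""
    gray = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (7, 7), 0)

    # The pupil is the darkest region of the eye, so invert-threshold it.
    _, mask = cv2.threshold(blurred, 40, 255, cv2.THRESH_BINARY_INV)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None

    # Keep the largest dark blob and fit a circle to it.
    largest = max(contours, key=cv2.contourArea)
    (x, y), radius = cv2.minEnclosingCircle(largest)
    if radius < 3:  # too small to be a pupil at this crop resolution
        return None
    return (int(x), int(y)), int(radius)
```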
Accomplishments that we're proud of
We are very proud of our hand-built OpenCV algorithm, and we love how intuitive our UI turned out.
What we learned
It was Antonio's first time using React Native and Alex's first time using Torch.
What's next for BrainBud
By obtaining a larger dataset of pupil images, we could improve the accuracy of our pupil-analysis algorithm.
Built With
- gcp
- gin
- go
- javascript
- opencv
- python
- rabbitmq
- react-native
- redis
- tensorflow
- torch