Inspiration

Millions of people worldwide lose vital abilities to strokes. When it comes to preventing the dire consequences of a stroke, every second counts and saves roughly 30,000 brain cells. The biggest issue is that a person often does not recognize that they have just had a stroke, assumes the confusion and muscle weakness will soon pass, and therefore never calls an ambulance. With StrokeDetect we want to provide a safe and reliable way to analyse whether the user has had a stroke.

What it does

We use the FAST framework, which is widely accepted in the medical community, to recognize common stroke symptoms: F = Face (facial asymmetry or paralysis), A = Arms (whether the patient can hold up both arms with their eyes closed), and S = Speech (detecting mumbling and pronunciation issues).

How we built it

We used the multimodal capabilities of Google Gemini to analyse the three videos recorded during the FAST walkthrough. Gemini is carefully instructed to assess selected frames of each video, together with the audio from the speech part, and to reach a final conclusion about a potential stroke. Our backend is connected to our video-recording frontend via FastAPI.
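The step that combines the three per-video assessments into one conclusion can be sketched as follows. This is a minimal illustration, not our production code: the names (`FastVerdicts`, `combine_verdicts`) are hypothetical, and the per-video scores stand in for the verdicts that Gemini actually produces from the frames and audio.

```python
from dataclasses import dataclass

@dataclass
class FastVerdicts:
    """Hypothetical per-video scores in [0, 1], higher = stronger stroke sign."""
    face: float    # facial asymmetry/paralysis score from the face video
    arms: float    # arm-drift score from the arms video
    speech: float  # slurred-speech score from the speech audio

def combine_verdicts(v: FastVerdicts, threshold: float = 0.5) -> str:
    """Flag a possible stroke if any single FAST sign is strong enough."""
    flags = [name for name, score in
             (("Face", v.face), ("Arms", v.arms), ("Speech", v.speech))
             if score >= threshold]
    if flags:
        return "Possible stroke - call an ambulance (" + ", ".join(flags) + ")"
    return "No FAST signs detected"
```

For example, a strong facial-asymmetry score alone is enough to trigger the warning: `combine_verdicts(FastVerdicts(face=0.8, arms=0.1, speech=0.2))` flags "Face". Any-of semantics is deliberate: in the FAST framework a single positive sign already warrants calling emergency services.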

Challenges we ran into

Communication between the frontend and the backend was a big challenge during the development phase. Furthermore, tuning the prompts for Gemini proved crucial to achieving consistent results.

Final words

We are proud to present a fully functional MVP at our desk on Sunday and look forward to your inspiring questions!
