Inspiration

Mental health is one of the biggest challenges students face when they move to a new environment away from their loved ones. Many silently struggle with stress, anxiety, and burnout without realising how their surroundings and daily habits affect them. Existing solutions rely either on self-reported data, which can be unreliable and inconsistent, or on specialist therapists, who can be difficult to access.

We were inspired to create NeuroSense as a system that can objectively monitor mental well-being using real-time data, reducing the need for users to assess themselves.

What it does

NeuroSense is an AI-powered mental health assistant that monitors a user's environment and behaviour to estimate their cognitive load. It detects the external environment (e.g. lecture, meeting, study group) and estimates cognitive load depending on the task (low, medium, or high). In short, it provides a real-time understanding of the user's situation and can be extended to offer personalised recommendations.
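The scene-plus-task idea above can be sketched as a simple heuristic. This is an illustrative toy, not NeuroSense's actual model: the scene categories, load values, and thresholds below are all assumptions made up for demonstration.

```python
# Hypothetical heuristic: map a detected scene and task difficulty to a
# coarse cognitive load level. All values here are illustrative assumptions.

# Assumed base load contributed by the detected environment.
SCENE_LOAD = {"lecture": 2, "meeting": 2, "study group": 1, "break": 0}

# Assumed extra load contributed by the current task difficulty.
TASK_LOAD = {"low": 0, "medium": 1, "high": 2}


def estimate_cognitive_load(scene: str, task: str) -> str:
    """Combine scene and task signals into a low/medium/high label."""
    score = SCENE_LOAD.get(scene, 1) + TASK_LOAD.get(task, 1)
    if score <= 1:
        return "low"
    if score <= 2:
        return "medium"
    return "high"


print(estimate_cognitive_load("lecture", "high"))     # prints "high"
print(estimate_cognitive_load("study group", "low"))  # prints "low"
```

In the real system the scene label would come from the vision pipeline rather than being passed in directly.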

How we built it

We built it with YOLO (Ultralytics) for real-time person detection, and Python with OpenCV for video processing. CLIP classifies the environment for scene recognition. OpenRouter generates wellbeing and journal guidance (with a fallback), and a Streamlit app manages user input modes and displays dashboards, insights, and AI feedback. The system also includes an interpreter that combines EEG, physiological, behavioural, and external environment data to predict mental state.
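The interpreter's signal-fusion step can be sketched as a weighted combination of normalised inputs. The weights, input ranges, and state labels below are assumptions for illustration only, not the values NeuroSense uses.

```python
# Illustrative sketch of the interpreter: fuse normalised EEG, physiological,
# behavioural, and environment signals into a mental-state label.
# Weights and thresholds are made-up assumptions for demonstration.

def interpret_state(eeg_stress: float, heart_rate: float,
                    fidget_rate: float, scene_busyness: float) -> str:
    """Fuse signals (each normalised to [0, 1]) into a state label."""
    weights = {"eeg": 0.4, "physio": 0.3, "behaviour": 0.2, "environment": 0.1}
    score = (weights["eeg"] * eeg_stress
             + weights["physio"] * heart_rate
             + weights["behaviour"] * fidget_rate
             + weights["environment"] * scene_busyness)
    if score < 0.33:
        return "calm"
    if score < 0.66:
        return "focused"
    return "overloaded"


print(interpret_state(0.9, 0.8, 0.7, 0.6))  # prints "overloaded"
print(interpret_state(0.1, 0.1, 0.1, 0.1))  # prints "calm"
```

A weighted linear fusion like this is easy to keep stable and debug during a hackathon; a trained model could replace it later without changing the interface.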

Challenges we ran into

We ran into dependency conflicts (NumPy, PyTorch, and MediaPipe compatibility), and the system runs multiple heavy ML models simultaneously, which strained performance. It was hard to ensure a stable environment label was produced from live video processing, and CLIP did not always detect the environment the user was currently in. We also had to coordinate a fast-moving prototype across UI, sensing, and AI at the same time, and it was difficult to decide how much to polish the interpreter versus how much to stabilize it. Finally, the handoffs between Streamlit, the live camera logic, and LLM prompting were messy.

Accomplishments that we're proud of

We successfully integrated multiple AI models into one system that provides a real-time cognitive load estimator. The architecture is scalable and can incorporate more sensors in the future. We also prioritised the interpreter's stability over polish, to make sure it is robust.

What we learned

We learned how to combine computer vision and AI models into a single pipeline, how important it is to ensure version compatibility across dependencies, and how to deploy web apps and use API keys to provide suggestions.

What's next for NeuroSense

There are several directions to take the project further. One is a wig that records EEG signals, with electrodes connected using flexible PCB assemblies, so it can be worn in everyday use. A computer camera is difficult to use practically, so we also envision glasses with a built-in 12MP camera for 3K video capture and a Snapdragon AR1 Gen 1 platform, whose dual ISPs support premium video capture and next-gen AI capabilities while remaining lightweight and battery-efficient. Finally, we can improve accuracy with better models and training.

Built With

python, opencv, yolo, clip, streamlit, openrouter
