Inspiration

A few of us had friends who were clearly struggling — stressed, anxious, quietly going through something — but never spoke up about it. That made us wonder: what if handwriting could reveal what a person won't say out loud?

Research confirmed it. Psychological conditions actually change how people write — heavier pressure, irregular spacing, drifting baselines. These are real neuromotor signals. We decided to build an AI that could read them.


What it does

You upload a handwriting sample. The system tells you whether the writer is showing signs of Stress, Anxiety, or Depression — or writing within the Normal range — and explains why in plain language.

It runs through 4 intelligent agents:

  • Perception Agent — cleans and preprocesses the image
  • Feature Structuring Agent — extracts letter size, spacing, slant, pressure, baseline
  • Cognitive Reasoning Agent — runs the Hybrid DNN and fuses results (see the sketch after this list): $$P_{Final} = 0.7 \cdot P_{DNN} + 0.3 \cdot P_{Feature}$$
  • Decision & Explanation Agent — outputs the condition with a human-readable reason
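
To make the fusion step concrete, here's a minimal sketch of the weighted late fusion — the function and variable names are ours for illustration, not the actual pipeline code:

```python
import numpy as np

def fuse_predictions(p_dnn, p_feature, alpha=0.7):
    """Weighted late fusion: P_final = alpha * P_DNN + (1 - alpha) * P_Feature."""
    # A convex combination of two probability vectors is itself a valid probability vector.
    return alpha * np.asarray(p_dnn) + (1 - alpha) * np.asarray(p_feature)

# Example with the four classes: Stress, Anxiety, Depression, Normal
classes = ["Stress", "Anxiety", "Depression", "Normal"]
p = fuse_predictions([0.55, 0.25, 0.15, 0.05], [0.40, 0.35, 0.15, 0.10])
print(classes[int(np.argmax(p))])  # -> Stress
```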

Accuracy: 93–95%. Prediction time: under 2 seconds.


How we built it

We collected 413,701 handwriting images across 4 classes, with full consent and anonymisation. We then built a preprocessing pipeline using OpenCV, trained a Hybrid DNN (Adam optimizer, 30% dropout, categorical cross-entropy loss), and wrapped everything in a 4-agent Agentic AI pipeline.
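
For reference, that training configuration maps onto Keras roughly like this — a simplified sketch with an assumed architecture and input size; only the optimizer, dropout rate, loss, and class count come from our actual setup:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Simplified stand-in for the Hybrid DNN: a small CNN over the preprocessed
# handwriting image, compiled with the training settings mentioned above.
model = keras.Sequential([
    keras.Input(shape=(128, 128, 1)),        # grayscale crop; the exact size is an assumption
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),                     # 30% dropout
    layers.Dense(4, activation="softmax"),   # Stress / Anxiety / Depression / Normal
])

model.compile(
    optimizer=keras.optimizers.Adam(),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
```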

We tuned the fusion weight α through cross-validation — testing 0.5, 0.6, 0.7, and 0.8 — and landed on α = 0.7, which hit the peak accuracy of 94.6%.
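
The sweep itself was simple — conceptually along these lines, with the validation arrays as placeholders:

```python
import numpy as np

def pick_alpha(p_dnn_val, p_feat_val, y_val, candidates=(0.5, 0.6, 0.7, 0.8)):
    """Try each candidate fusion weight on held-out data and keep the most accurate one.

    p_dnn_val, p_feat_val: (n_samples, 4) probability arrays from the two models.
    y_val: integer class labels for the same validation samples.
    """
    best_alpha, best_acc = None, -1.0
    for alpha in candidates:
        fused = alpha * p_dnn_val + (1 - alpha) * p_feat_val
        acc = float(np.mean(np.argmax(fused, axis=1) == y_val))
        if acc > best_acc:
            best_alpha, best_acc = alpha, acc
    return best_alpha, best_acc
```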

We deployed it as a real-time Streamlit web app with image upload and live camera support.
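
The app layer is deliberately thin — roughly this shape, with predict_condition standing in for the 4-agent pipeline:

```python
import streamlit as st
from PIL import Image

def predict_condition(image):
    # Placeholder for the real pipeline: perception -> feature structuring
    # -> cognitive reasoning (fusion) -> decision & explanation.
    return "Normal", "Even pressure, steady baseline, regular spacing."

st.title("Handwriting Pattern Analysis")

uploaded = st.file_uploader("Upload a handwriting sample", type=["png", "jpg", "jpeg"])
captured = st.camera_input("...or take a photo")

source = uploaded or captured
if source is not None:
    image = Image.open(source)
    st.image(image, caption="Input sample")
    label, explanation = predict_condition(image)
    st.subheader(f"Prediction: {label}")
    st.write(explanation)
```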

Stack: Python · TensorFlow/Keras · OpenCV · Streamlit · Scikit-learn


Challenges we ran into

  • Dataset cleaning — 413k images sounds great until you're manually fixing mislabeled and corrupted batches for weeks
  • Stroke detection bug — our system once detected 78 strokes in a single sentence (should be 10–15). One mishandled character gap was breaking everything
  • Anxiety vs Stress confusion — they share too many handwriting traits. The multi-agent layer was what finally separated them properly
  • Fusion weight — equal weighting (α=0.5) actually hurt accuracy. The DNN features need more trust than hand-engineered ones

Accomplishments that we're proud of

  • 93–95% accuracy — up from ~70% where we started
  • F1 > 0.90 across all four classes individually, not just average
  • Built a full 4-agent Agentic AI pipeline with built-in explainability — not just a black box result
  • Collected and curated 413,701 images as a student team, with proper ethics and consent
  • Real-time prediction in under 2 seconds on CPU

What we learned

The fusion weight tuning taught us that combining two models is never as simple as averaging them — you have to test it properly.

We also learned that interpretability and accuracy are not opposites. Building explainability into the pipeline actually made debugging easier and the system more stable.

Most importantly — working on something in the mental health space forces you to slow down. Real people are on the other end of your model's predictions. That responsibility changes how you build.


What's next for Handwriting Pattern Analysis for Mental Health Detection

The biggest gap right now is that we work with static images — we capture the shape of handwriting but not the process.

The next step is switching to stylus tablet input (.svc format) so we can capture stroke velocity, pressure over time, and air-time between words. We'll swap the CNN for an LSTM or Vision Transformer to model writing as a sequence.
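
On the modelling side, that switch is easy to sketch — the per-point feature layout here is an assumption about how we'd parse the .svc samples, not settled design:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Each stylus sample becomes a sequence of per-point features,
# e.g. (x, y, pressure, pen_down, time_delta) parsed from the .svc file.
seq_model = keras.Sequential([
    keras.Input(shape=(None, 5)),    # variable-length stroke sequence, 5 features per point
    layers.Masking(mask_value=0.0),  # ignore zero-padded timesteps
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(32),
    layers.Dense(4, activation="softmax"),
])
seq_model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```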

After that — a clinical validation study with psychologists comparing our results against DASS-21 assessments, and eventually a mobile app where someone can write a sentence on their phone and get a quiet, non-intrusive wellbeing check.

That last one is what we had in mind from day one.
