Inspiration

MAITRI was inspired by real crewed missions where human stress played a critical role, such as the reported Skylab 4 work stoppage and other spaceflight incidents. We extended the idea to soldiers in remote areas, submarine crews, and disaster zones, where isolation and a lack of support can let mental-health breakdowns go unnoticed.

What it does

MAITRI is a multimodal AI assistant that assesses emotional and physical well-being from combined audio and video input. It provides real-time support, adaptive conversation, and early stress detection, and it keeps working in fully offline environments.

How we built it

We built the AI models in Python with TensorFlow and PyTorch, used OpenCV for facial-expression analysis and the Web Audio API for voice capture, and integrated conversational APIs for context-aware dialogue and memory.
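To illustrate how signals from the two modalities could come together, here is a minimal late-fusion sketch. The names (`EmotionReading`, `fuse_scores`, `stress_alert`) and the weights and threshold are illustrative assumptions, not MAITRI's actual API; in the real system the per-modality scores would come from the OpenCV and audio models mentioned above.

```python
from dataclasses import dataclass

@dataclass
class EmotionReading:
    # Normalised stress scores in [0, 1], one per modality
    face_score: float   # e.g. from a facial-expression model on OpenCV frames
    voice_score: float  # e.g. from pitch/energy features of captured audio

def fuse_scores(reading: EmotionReading, face_weight: float = 0.6) -> float:
    """Weighted late fusion of the two modality scores (weight is an assumption)."""
    return face_weight * reading.face_score + (1 - face_weight) * reading.voice_score

def stress_alert(reading: EmotionReading, threshold: float = 0.7) -> bool:
    """Flag elevated stress when the fused score crosses a chosen threshold."""
    return fuse_scores(reading) >= threshold

# Calm face but strained voice: fused score 0.6*0.3 + 0.4*0.9 = 0.54, below threshold
print(stress_alert(EmotionReading(face_score=0.3, voice_score=0.9)))  # False
```

Late fusion like this keeps each modality's model independent, which suits low-power offline deployment: either branch can be swapped or disabled without retraining the other.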

Challenges we ran into

Our major challenges were ensuring accurate emotion detection, building a reliable offline system, optimizing for low-power hardware, and designing an interaction experience that feels human.

Accomplishments that we're proud of

We built a working prototype that combines emotion detection, offline capability, and real-time AI support in a single system applicable across the space, defence, and civilian sectors.

What we learned

We learned that human factors are as critical as technology, and building empathetic AI requires both technical and psychological understanding.

What's next for Maitri.Ai

We aim to improve model accuracy, add wearable integration, enhance personalization, and deploy MAITRI in real-world environments like defence sectors, remote healthcare, and corporate wellness.

Built With

python, tensorflow, pytorch, opencv