Inspiration
In the U.S. today, the largest provider of mental health support may not be a therapist, app, or clinic - it might be artificial intelligence. Millions of people now turn to AI chatbots like ChatGPT, Claude, or Gemini to talk through their thoughts and emotions. Despite growing awareness around mental health, access to care hasn’t caught up. Therapy remains expensive, time-consuming, and often out of reach. In a recent Sentio survey, 48.7% of users who report mental health challenges said they use AI for therapeutic support. The need is clear: people are looking for connection, empathy, and guidance that’s available anytime, anywhere. That’s why we created AILA, a wellness companion that merges the emotional intelligence of immersive therapy with the accessibility of AI. AILA uses voice interaction, emotion detection, and adaptive environments to make support feel personal, responsive, and human-centered. It’s designed not just to talk, but to listen: to help users manage stress, anxiety, and mood in a way that feels real, safe, and continuous. AILA isn’t meant to replace therapy, just to redefine access to it.
What it does
AILA is your personalized wellness companion that listens without judgment. Whether you’re dealing with work stress, relationship struggles, or simply need someone to talk to, AILA offers a safe, empathetic space to open up. Available 24/7, AILA listens attentively, responds with understanding, and offers insights, encouragement, and gentle guidance tailored to you. Conversations are fully anonymous and unbiased, giving you the freedom to express yourself honestly. You can even customize how AILA speaks, switching between empathetic and supportive, friendly and conversational, or analytical and objective tones, depending on what you need in the moment. While AILA doesn’t diagnose conditions or replace professional care, it serves as a powerful complement to traditional therapy. It’s a supportive tool for reflection, emotional regulation, and clarity between sessions. Whether you’re unwinding after a long day or processing late-night thoughts when no one else is around, AILA is always there to listen.
How we built it
We built a prototype using Three.js and React and tested our solution on the PICO 4. For design, we leveraged Figma, Blender, and Meshy AI to prototype our dashboards and 3D models. On the front end, we used React 19 as the UI framework, TypeScript for type-safe development, and Vite as the build tool and dev server. We initially targeted visionOS with WebSpatial, but moved to WebXR. For rendering, we incorporated Three.js and React Three Fiber. Lastly, the voice pipeline combines the OpenAI Realtime API over WebSocket, the Web Audio API, Voice Activity Detection (VAD), and the MediaStream API for microphone capture.
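As a rough illustration of that voice pipeline, here is a minimal sketch of microphone capture and streaming, assuming a hypothetical backend relay at `/realtime` that forwards audio to the OpenAI Realtime API; the relay path, sample rate, and buffer size are placeholders, not our exact setup:

```typescript
// Minimal sketch: capture the microphone with the MediaStream API, tap raw
// samples with the Web Audio API, and stream 16-bit PCM chunks over a
// WebSocket to a backend relay ("/realtime" is a placeholder) that talks to
// the OpenAI Realtime API.
async function startVoiceSession(relayUrl: string = "/realtime") {
  const socket = new WebSocket(relayUrl);

  // MediaStream API: request microphone access from the browser.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });

  // Web Audio API: read raw samples from the microphone stream.
  // (ScriptProcessorNode is deprecated; an AudioWorklet is the production choice.)
  const audioCtx = new AudioContext({ sampleRate: 24000 });
  const source = audioCtx.createMediaStreamSource(stream);
  const processor = audioCtx.createScriptProcessor(4096, 1, 1);

  processor.onaudioprocess = (event) => {
    const samples = event.inputBuffer.getChannelData(0); // Float32 in [-1, 1]
    // Convert to 16-bit PCM before sending upstream.
    const pcm = new Int16Array(samples.length);
    for (let i = 0; i < samples.length; i++) {
      pcm[i] = Math.max(-1, Math.min(1, samples[i])) * 0x7fff;
    }
    if (socket.readyState === WebSocket.OPEN) {
      socket.send(pcm.buffer);
    }
  };

  source.connect(processor);
  processor.connect(audioCtx.destination);
}
```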
Challenges we ran into
We faced difficulties integrating our backend to run efficiently in the cloud while maintaining compatibility with visionOS. WebSpatial integration also caused instability due to internal rendering bugs, which took time to debug. Optimizing real-time ChatGPT responses without driving up API costs was another challenge, as was stabilizing our voice detection model, which initially caused ChatGPT to miss user input. Once we fixed these issues, AILA’s performance improved significantly, becoming faster, smoother, and more cost-efficient overall.
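To give a sense of the tuning involved, here is a minimal sketch of an energy-threshold VAD with a hold time so quiet or trailing speech isn’t cut off before reaching the model; the threshold and frame counts are illustrative, not our production values:

```typescript
// Minimal sketch of an energy-threshold VAD with a silence hold time, the
// kind of logic we had to tune so soft speech wasn't dropped. Parameters
// here are hypothetical defaults, not our tuned values.
class SimpleVAD {
  private speaking = false;
  private silenceFrames = 0;

  constructor(
    private readonly threshold = 0.015, // RMS level treated as speech
    private readonly holdFrames = 15    // silent frames allowed before closing
  ) {}

  /** Returns true while the frame should still be forwarded to the model. */
  process(frame: Float32Array): boolean {
    let sum = 0;
    for (let i = 0; i < frame.length; i++) sum += frame[i] * frame[i];
    const rms = Math.sqrt(sum / frame.length);

    if (rms > this.threshold) {
      this.speaking = true;
      this.silenceFrames = 0;
    } else if (this.speaking && ++this.silenceFrames > this.holdFrames) {
      this.speaking = false; // enough trailing silence: stop forwarding
    }
    return this.speaking;
  }
}
```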
We also ran into trouble because the program ran correctly on our own computers but would not load on the demo laptop, so we had to pivot to different hardware and software to have a working prototype by the deadline.
Accomplishments that we're proud of
We’re new to VR development, so it was an exciting challenge to learn VR and experiment with 3D modeling over the weekend!
What we learned
We learned the complexity of combining emotion, interaction, and immersion in real-time systems. At first, we aimed to build a full 3D world, but we realized that focusing on conversation quality and emotional authenticity delivered a much deeper experience. This pivot helped us refine AILA into something humanly believable, not just visually impressive. We also discovered the challenges of cross-platform AR design and how tools like Three.js can bridge the gap between web-based and headset-based prototypes.
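As a small sketch of that bridge, the same React Three Fiber scene can render in a desktop browser and on a headset via WebXR. This assumes the @react-three/xr package, and the scene contents are placeholders rather than our actual environment:

```typescript
import { Canvas } from "@react-three/fiber";
import { XR, createXRStore } from "@react-three/xr";

// One XR store manages the WebXR session; the same scene works on desktop
// and, after "Enter VR", on a headset such as the PICO 4.
const store = createXRStore();

export function AilaScene() {
  return (
    <>
      <button onClick={() => store.enterVR()}>Enter VR</button>
      <Canvas>
        <XR store={store}>
          <ambientLight intensity={0.6} />
          {/* Placeholder calming object; our real scene uses Blender/Meshy models. */}
          <mesh position={[0, 1.2, -2]}>
            <sphereGeometry args={[0.3, 32, 32]} />
            <meshStandardMaterial color="#7fb3d5" />
          </mesh>
        </XR>
      </Canvas>
    </>
  );
}
```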
What's next for AILA
We hope to enhance our current demo with improved language capabilities and multilingual support. We also intend to integrate biometric data from other compatible products, such as heart rate, facial recognition, and motion data, to provide a more personalized, adaptive, and secure user experience. Finally, we hope to update the frontend to reflect our original design.
Built With
- blender
- figma
- gpt-4o
- gpt-4o-realtime
- meshy
- pico
- react19
- three.js
- typescript
- vite