Inspiration
Our project began with a simple idea: learning a new language should feel natural, intuitive, and accessible. Of the languages we considered, sign language was one of the most difficult to internalize without consistent practice and feedback. American Sign Language (ASL) relies heavily on visual comparison and visual feedback, which is surprisingly hard to get when your hands are occupied performing signs. Traditional tools often force learners to pause, check their screen, rewind, or mirror an instructor, breaking the natural flow of motor learning. We wanted to explore how VR could deliver hands-free guidance and real-time feedback, keeping both hands free to practice and preserving the motor/visual learning context in a way that mirrors natural signing.
What it does
We built an interactive VR experience where users learn and practice signs inside a fully immersive environment. Using built-in computer vision, the system evaluates hand motion and gesture accuracy in real time. An LLM then provides feedback that helps users correct subtle errors and contextualizes certain signs. We added a dedicated error-detection layer to highlight issues the user might not consciously notice, making it possible to compare one's movements against the correct gesture without disrupting the learning process. To make the experience more adaptive, we captured EEG readings with an OpenBCI headset and processed them through a Python pipeline to interpret cognitive responses. That data streams back into Unity, where the platform adjusts pacing, difficulty, and feedback style based on whether the user appears confused, overloaded, or confidently engaged.
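To give a sense of what "interpreting cognitive responses" can look like, here is a minimal sketch of an EEG band-power heuristic in Python. The band boundaries, the theta/alpha/beta comparison rule, and the three state labels are illustrative assumptions for this sketch, not the exact logic our pipeline uses; real systems typically add filtering and per-user calibration.

```python
# Illustrative sketch: classify a single-channel EEG window by band power.
# Bands, thresholds, and labels are assumptions, not the project's exact values.
import numpy as np

FS = 250  # OpenBCI Cyton's default sample rate (Hz)

def band_power(signal, fs, low, high):
    """Mean spectral power of `signal` within the [low, high] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

def cognitive_state(window, fs=FS):
    """Label a window 'engaged', 'confused', or 'overloaded' (toy heuristic)."""
    theta = band_power(window, fs, 4, 8)    # theta often tracks workload
    alpha = band_power(window, fs, 8, 13)
    beta = band_power(window, fs, 13, 30)   # beta as a rough proxy for focus
    if theta > alpha and theta > beta:
        return "overloaded"
    if beta > alpha:
        return "engaged"
    return "confused"
```

A game loop can call `cognitive_state` on each fresh one- or two-second window and forward only the resulting label, which keeps the data sent into Unity tiny.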
How we built it
Building this required Unity, the Meta XR interaction system, OpenBCI's GUI data stream, and custom Python analysis code, guided throughout by our visual design in Figma.
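One simple way to bridge a Python analysis process and Unity is UDP with a small JSON schema, which Unity can consume with a `UdpClient`. The host, port, and message fields below are illustrative assumptions rather than our exact wire format:

```python
# Illustrative sketch: push cognitive-state updates from Python into Unity
# over UDP as JSON. Endpoint and schema are assumptions for this example.
import json
import socket

UNITY_HOST, UNITY_PORT = "127.0.0.1", 5005  # assumed Unity-side listener

def make_packet(state, confidence, timestamp):
    """Serialize one cognitive-state update as a UTF-8 JSON byte string."""
    return json.dumps({
        "state": state,            # e.g. "engaged" / "confused" / "overloaded"
        "confidence": confidence,  # 0.0 - 1.0
        "t": timestamp,            # seconds on a shared clock, for alignment
    }).encode("utf-8")

def send_state(sock, state, confidence, timestamp):
    """Fire-and-forget a single update toward the Unity endpoint."""
    sock.sendto(make_packet(state, confidence, timestamp),
                (UNITY_HOST, UNITY_PORT))

# Usage:
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# send_state(sock, "engaged", 0.8, 12.5)
```

UDP suits this kind of signal well: updates are frequent and individually disposable, so a dropped packet costs little and never stalls the render loop.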
Challenges we ran into
Getting all these parts to communicate smoothly was one of our biggest challenges. We ran into frequent issues with VR headset connectivity, Unity build failures, and timing mismatches between EEG data and in-game events. Much of our time was spent debugging low-level integration problems, but solving them taught us a huge amount about real-time XR pipelines and multimodal system design. Although technical constraints prevented us from fully integrating the spring environment into XR, we are still extremely proud of the work we did.
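The timing mismatches mentioned above are a classic multimodal problem: EEG samples and in-game events arrive on different clocks and at different rates. One standard post-hoc fix, sketched here with hypothetical names, is to put both on a shared timebase and match each event to the nearest EEG timestamp:

```python
# Illustrative sketch: align in-game events with EEG samples by matching each
# event to the closest EEG timestamp (both assumed to share one clock).
import bisect

def align_events(eeg_times, event_times):
    """Return, for each event time, the index of the nearest EEG timestamp.

    `eeg_times` must be sorted ascending (seconds on the shared clock).
    """
    indices = []
    for t in event_times:
        i = bisect.bisect_left(eeg_times, t)
        if i == 0:
            indices.append(0)
        elif i == len(eeg_times):
            indices.append(len(eeg_times) - 1)
        else:
            # pick whichever neighboring sample is closer in time
            indices.append(i if eeg_times[i] - t < t - eeg_times[i - 1] else i - 1)
    return indices
```

With the alignment indices in hand, each gesture attempt can be paired with the EEG window recorded around it, even when the two streams were logged independently.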
Accomplishments that we're proud of
Despite the difficulties, we're extremely proud of this prototype. The final experience blends VR interaction design, computer-vision gesture evaluation, LLM-based coaching, and EEG signal processing into one tool. Furthermore, each scene was intentionally designed with comfort in mind. We explored warm colors and a spring-themed backdrop to create a welcoming space for beginners.
What we learned
Through this project, we saw firsthand how powerful VR can be as an accessibility tool, especially for educational experiences that are difficult to recreate in real life. VR removes barriers such as constant instructor availability and the pressure of performing in front of others, while giving learners a safe, immersive environment in which to practice and explore.
What's next for XR Hackathon
Going forward, we want to generalize this approach to a wider range of learning tasks. The combination of immersive environments, brain monitoring, real-time motor evaluation, and adaptive AI feedback could help people who struggle in traditional classrooms or who benefit from more individualized, embodied learning experiences. This project is only a first step toward more personalized, responsive educational tools that bring together neurotechnology, XR, and AI.