Inspiration
We were inspired by the need to make sign language learning more accessible, especially for the 96% of Deaf individuals born to hearing parents (Mitchell & Karchmer, Sign Language Studies, 2004) and the many others seeking to bridge communication gaps. Traditional tools often lack interactivity and real-time feedback, leading to frustration and slow progress.
We also drew inspiration from Duolingo’s gamified learning system. With Signari, users build vocabulary and confidence through progressive lessons, challenges, and practical scenarios, making learning both effective and enjoyable.
For all those looking to break down communication barriers, Signari combines immersive XR technology with educational design to change how the world learns sign language.
What it does
Signari is a VR-based educational tool that teaches American Sign Language (ASL) through animated hand models and immersive experiences. Users can see signs demonstrated virtually and learn their meanings, with plans to provide feedback on their own hand motions through hand tracking.
How we built it
We downloaded a virtual hand model from Poly Pizza and then used Blender to rig and animate hands demonstrating ASL signs. These animations were exported and integrated into 8th Wall (Niantic Studio) to create an immersive XR experience, allowing users to see and learn signs in a mixed reality setting. We also used AI tools and tutorials to troubleshoot technical issues and speed up our development workflow.
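As a rough sketch of the integration step (the file paths, clip name, and scene attributes below are illustrative placeholders, not our exact project code — 8th Wall's classic workflow is A-Frame-based, while Niantic Studio has its own editor), a Blender-exported animated glTF hand can be placed in an A-Frame scene and its baked sign animation played with the `animation-mixer` component from aframe-extras:

```html
<!-- Illustrative sketch only: loading a Blender-exported animated hand
     in an A-Frame scene. Asset path and clip name are placeholders. -->
<a-scene>
  <a-assets>
    <!-- glTF exported from Blender with the sign animation baked in -->
    <a-asset-item id="hand-model" src="assets/asl-hand.glb"></a-asset-item>
  </a-assets>

  <!-- animation-mixer (from aframe-extras) plays the exported clip on loop -->
  <a-entity
    gltf-model="#hand-model"
    animation-mixer="clip: SignHello; loop: repeat"
    position="0 1.5 -0.5"></a-entity>
</a-scene>
```

Exporting from Blender as .glb keeps the rig, mesh, and animation in one file, which is why the scene markup stays this small.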
Challenges we ran into
- Setting up development environments with SDKs and NDKs took significant time.
- Hand animation was complex and time-consuming.
- We initially planned to use Unity, but compatibility issues led us to switch to 8th Wall.
- Full hand tracking couldn’t be implemented due to premium restrictions in Niantic Studio.
Accomplishments that we're proud of
- Successfully created and animated ASL hand signs in Blender.
- Integrated animations into 8th Wall, allowing for an immersive experience.
- Overcame multiple technical challenges and pivoted platforms when necessary.
- Built a foundation for a tool that has real-world impact.
What we learned
We learned how to model and animate in Blender, work with 8th Wall for XR experiences, and solve real development challenges under time constraints. We also gained insight into accessibility-focused design and the importance of user-friendly learning environments.
What's next for Signari: Immersive Sign Language Learning in VR
- Integrate hand tracking and machine learning for real-time feedback.
- Expand the sign library to include more words and full sentences.
- Design a kid-friendly version with simplified content.
- Add support for multiple sign languages.
- Explore real-time sign recognition for live conversations.
Reference: Mitchell, R. E., & Karchmer, M. A. (2004). Chasing the mythical ten percent: Parental hearing status of Deaf and hard of hearing students in the United States. Sign Language Studies, 4(2), 138–163.