Inspiration
AI has become well known for its ability to handle natural written language. Recently, I learned that AI has not had the same impact on sign language. Sign language has layers of complexity that written text does not: it makes simultaneous use of your hands, your body language, and even sound. To replicate the step-change improvements we have seen in written-language translation, generation, and education, we will have to look for better-suited technology.
Separately, the capabilities of Meta’s recent headsets are extremely impressive. For a long time, I feel, immersion has been the primary focus of VR development: spatial computers have concentrated on projecting another world onto yours, immersing you in amazing worlds and environments. While that is impressive, I think it undersells something else truly unique about the form factor. A headset innately has something that no other computing form factor does: it can see the world through your eyes. No other device we own can take this very personal perspective. This “egocentric” form of computing is genuinely unique, and it is the element of immersive technology I find most exciting.
So what do we get if we combine these two ideas? I propose that a Meta Quest provides a natural way to teach sign language digitally, by letting you directly see the shapes your hands should make.
What it does
BSL Teacher is a proof-of-concept app that uses the AR and hand-tracking capabilities of the Meta Quest 3 to teach you basic sign language.
• When you load in, you can see the spelling guide and choose a tab to learn a letter (by poking it with your hands, or with the controller’s ray). Hand tracking then lets the app judge whether you have made the correct shape.
• You can open and close this main guide with a palm-up or palm-down hand gesture (or the controller’s hand triggers) – a minimal sketch of one way to detect this appears below.
• With the guide closed, you can practise the last letter the guide was on, without any hints.
• From the settings menu (opened with a left-hand pinch gesture), you can change a few settings, which are remembered for later sessions.
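One way a palm-up / palm-down toggle could be detected is by comparing the palm’s normal with world-up. This is a hypothetical Unity C# sketch, not necessarily how the shipped app does it: the `palm` transform, the threshold value, and the open/close mapping are all assumptions.

```csharp
using UnityEngine;

// Hedged sketch: toggle the spelling guide with a palm-up / palm-down gesture.
// Assumes `palm` is a transform from the hand-tracking rig whose up vector
// points out of the palm; the real app may detect the gesture differently.
public class PalmToggle : MonoBehaviour
{
    [SerializeField] private Transform palm;         // hand anchor from the tracking rig (assumption)
    [SerializeField] private GameObject guide;       // the main spelling-guide menu
    [SerializeField] private float threshold = 0.8f; // how squarely the palm must face up or down

    void Update()
    {
        // Dot product of the palm normal with world-up: +1 = palm up, -1 = palm down.
        float facing = Vector3.Dot(palm.up, Vector3.up);

        if (facing > threshold) guide.SetActive(true);         // palm up: open the guide
        else if (facing < -threshold) guide.SetActive(false);  // palm down: close it
    }
}
```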
Though simple, this is an experience that can only be delivered on a headset – and, given the features this prototype relies on, perhaps only on the Meta Quest 3. Traditional forms of computing – phones or laptops – cannot capture the unique perspective the Quest 3 can. Because of this, it can teach you to sign from the most natural perspective there is: your own.
How we built it
The app is built entirely using Meta’s Presence Platform within Unity. The key features used in development are:
- Hand Tracking: Meta’s robust hand tracking allowed us to build a way of judging whether someone has shaped their hand correctly for a sign (a rough sketch of the idea follows).
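As a rough illustration of the idea – not the app’s actual detector, which may well use the pose recognition built into Meta’s SDKs – here is a hedged Unity C# sketch that compares tracked fingertip-to-palm distances against distances authored for a target letter. The field names and the tolerance value are assumptions.

```csharp
using UnityEngine;

// Hypothetical pose check: each tracked fingertip's distance from the palm is
// compared against the distances recorded for a target letter. The fingertip
// and palm transforms would come from the hand-tracking skeleton, and the
// target distances would be authored per letter.
public class SignPoseCheck : MonoBehaviour
{
    [SerializeField] private Transform palm;
    [SerializeField] private Transform[] fingertips;   // thumb..little, from the skeleton
    [SerializeField] private float[] targetDistances;  // authored for the current letter
    [SerializeField] private float tolerance = 0.015f; // metres of slack per fingertip

    public bool MatchesTarget()
    {
        for (int i = 0; i < fingertips.Length; i++)
        {
            float d = Vector3.Distance(fingertips[i].position, palm.position);
            if (Mathf.Abs(d - targetDistances[i]) > tolerance)
                return false; // this finger is too far from the target shape
        }
        return true; // every finger is within tolerance of the target letter
    }
}
```

A caller could poll `MatchesTarget()` each frame and trigger the success feedback once it first returns true.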
I spent a lot of time considering how to make the digital AR objects blend as seamlessly as possible into the “real” physical world. In the end, this was done using a few techniques:
- Depth API: This allows us to add occlusion to our main menus. It is a subtle (but, I think, very important) effect, allowing the digital world to blend more seamlessly into your physical surroundings.
- Haptics Studio: Feedback from the AR objects is important to making you believe they are there. We used haptics to give feedback when you use the controllers (see the sketch after this list).
- Multimodal Hand/Controller: Giving people a choice in how they interact with the app, especially if they are new to spatial computing, is important. To support this, the menus can be accessed with either your hands or the controllers (some of this was built using the Interaction SDK).
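For illustration, here is a minimal fallback sketch of controller feedback using `OVRInput` from the Meta XR SDK. The actual app uses clips designed in Haptics Studio, which are played back through the Haptics SDK rather than by driving the motors directly; the pulse parameters here are assumptions.

```csharp
using System.Collections;
using UnityEngine;

// Hedged sketch of controller feedback: a short vibration pulse when a menu
// is used, driven directly through OVRInput rather than a Haptics Studio clip.
public class MenuHaptics : MonoBehaviour
{
    public void PulseRightController()
    {
        StartCoroutine(Pulse(OVRInput.Controller.RTouch, 0.5f, 0.6f, 0.1f));
    }

    private IEnumerator Pulse(OVRInput.Controller controller,
                              float frequency, float amplitude, float duration)
    {
        OVRInput.SetControllerVibration(frequency, amplitude, controller); // start vibrating
        yield return new WaitForSeconds(duration);
        OVRInput.SetControllerVibration(0f, 0f, controller);               // stop
    }
}
```

`PulseRightController()` would be hooked up to a menu’s button-press event so the controller buzzes briefly on each interaction.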
I did user testing with a range of people I know, which really helped me iterate towards something more usable. In direct response to that feedback, I added features like the info tab and the ability to close the guide and still practise your signing.
Challenges we ran into
There were a few main challenges:
- The Depth API is very simple to set up initially; however, implementing occlusion on the menus proved tough for me. Getting text to render clearly while using an occlusion material is still not perfect. Regardless, I was committed to including occlusion, and after research and experimentation I arrived at a setup I am happy with.
- Given its newness, there are some unresolved bugs in the Multimodal Hand/Controller API that I had to work around (for example, when it is enabled, Unity sometimes mistakes pinches for the A button on the controller). I overcame this by experimenting with different control schemes until I found one that felt natural.
- Fitting the amount of information needed to help someone learn a sign into each tab was hard. I was able to find people to test it for me, which really helped me streamline the menu system.
Accomplishments that we're proud of
While it is a small app demonstrating an idea, there are still multiple parts I am proud of:
- I am proud of the level of polish I managed in the time: haptics, audio, and the ability to save your settings through Unity’s PlayerPrefs system are all set up (a minimal sketch of the settings persistence follows this list).
- I really like the feedback when you make the shape correctly with your hands – I find it quite satisfying when the “pop” tells you that you have successfully made the sign!
- I find being able to use occlusion on objects and menus very cool – I love how much more convincing it makes the illusion that the objects actually exist in your real space.
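For readers curious how the settings persistence works, here is a small sketch using Unity’s PlayerPrefs. The key names and the particular settings shown (volume, haptics toggle) are illustrative assumptions; the shipped app may store different values.

```csharp
using UnityEngine;

// Sketch of how settings can persist between sessions with Unity's PlayerPrefs.
public static class Settings
{
    public static void Save(float volume, bool hapticsEnabled)
    {
        PlayerPrefs.SetFloat("volume", volume);
        PlayerPrefs.SetInt("hapticsEnabled", hapticsEnabled ? 1 : 0); // PlayerPrefs has no bool type
        PlayerPrefs.Save(); // write to disk immediately
    }

    public static float LoadVolume() => PlayerPrefs.GetFloat("volume", 1f);               // default: full volume
    public static bool LoadHapticsEnabled() => PlayerPrefs.GetInt("hapticsEnabled", 1) == 1; // default: on
}
```

PlayerPrefs writes to device-local storage, so values saved this way survive app restarts on the Quest without any extra serialization code.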
What we learned
I had not developed an app that uses the Depth API or hand gestures before. I had to learn these parts of the Presence Platform to build the multi-hand gestures that BSL requires.
More generally, I learned a lot about developing for AR: the importance of feedback in making digital objects feel “real”, and how much occlusion contributes to the sense that the menus really exist in your space.
What's next for BSL Teacher
I would love to continue developing BSL Teacher, with an initial goal of covering the complete alphabet and releasing on App Lab. While I think this app demonstrates the potential, I am committed to a full release. The idea of sign language in VR isn’t new, but (as far as I am aware) there is no standalone app for it on the Quest, let alone one that makes use of AR. Given the unique way the Meta Quest 3 can help someone learn sign language, and the popularity of language apps like Noun Town on the store, I feel a fully developed version of BSL Teacher would be very well received. To do this, though, I would need your support!
There are several small updates that would round out this initial prototype, such as adding the entire alphabet and a left-handed mode. Once that base is set, however, there is huge potential for expansion. The most obvious is a Duolingo-style teaching path, allowing a user to learn full words and sentences, and eventually become fluent.
Across the world, over 70 million people communicate with sign language, and hundreds of sign languages are used besides British Sign Language. A long-term goal would be to start including these. If development continued, a main priority would be reaching out to the British Sign Language organisation, to ensure continuous input from people with lived experience throughout development.
Overall, if I were to win an award from this hackathon, my goal would be to use it to kickstart full development of the app. Specifically, it would fund (i) a better prototype and (ii) more robust user testing. Awards of hardware, money, or an application to the MR Incubation Program would give me the space and the support from Meta I would need to make this happen. I am passionate about mixed reality, and would love the chance to build the app out in full.