Inspiration
My goal from the beginning was to create something with hand tracking and the Interaction SDK. As my focus in previous hackathons was gaming, I decided on an educational approach this time: something that helps people learn and provides a real benefit. After some short research I decided to try a sign language learning application, which didn't exist yet. Having something new, challenging and useful made the decision to go for it pretty easy. :)
What it does
A.S.L.A.N., the "American Sign Language Learning And Navigation" experience, helps people learn the American Sign Language alphabet. Inspired by the GorillaZilla project, it presents the content in learning chunks ("waves") distributed across the room, which really helps with memorizing the gestures. The American Sign Language alphabet consists of one-handed gestures, which can be recognized with the Interaction SDK and Quest 3 hand tracking.
How I built it
After first checking that all gestures could actually be recognized, building the foundation was rather easy using the Unity Meta Quest Building Blocks and GorillaZilla. I am using the RoomModel, Passthrough, a simple grid-distribution logic for the individual learning gestures, Synthetic Hands and the Interaction SDK for gesture recognition. The Meta Quest Unity packages provide the Hand Grab Pose Recorder, which I used to record the gestures one by one. The recorded data was combined with the Oculus Ghost Hand to get a proper visual representation of the gesture being performed. Finally I added a lion statue (ASLAN means lion in Turkish) and simple indicators for finding the learning content in the room, and the app was finished.
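For a rough idea of how the recognition side can be wired up, here is a minimal sketch. It assumes the Interaction SDK's `IActiveState` interface and `Interface` attribute (both part of the `Oculus.Interaction` namespace), driven by a `ShapeRecognizerActiveState` configured in the editor; the `LetterStation` class and its fields are my own illustrative names, not part of the project or the SDK, and details vary by SDK version.

```csharp
using Oculus.Interaction;
using UnityEngine;

// Sketch only: tracks one letter's pose via the Interaction SDK's
// IActiveState (e.g. a ShapeRecognizerActiveState for the letter "A",
// assigned in the inspector). Names here are illustrative.
public class LetterStation : MonoBehaviour
{
    [SerializeField, Interface(typeof(IActiveState))]
    private MonoBehaviour _poseActiveState;

    [SerializeField] private float _holdSeconds = 1.0f; // how long the pose must be held

    private IActiveState ActiveState => _poseActiveState as IActiveState;
    private float _heldFor;

    public bool Learned { get; private set; }

    private void Update()
    {
        if (Learned) return;

        // IActiveState.Active is true while the tracked hand matches the shape.
        _heldFor = ActiveState.Active ? _heldFor + Time.deltaTime : 0f;
        if (_heldFor >= _holdSeconds)
        {
            Learned = true;
            // advance the wave / play success feedback here
        }
    }
}
```

One such component per letter, distributed over the room's grid positions, would cover the basic loop described above.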
Challenges I ran into
Recording and creating the gestures was rather difficult, as the Hand Grab Pose Recorder is normally used for grabbing an object and recording those grab poses. So I had to place a "fake object" for grabbing and record all the poses (letters) of the sign language alphabet against it. There is one big downside to this: the recorded hand pose data is only visible when the object is grabbed or when the data is focused in the editor. Luckily the Oculus Ghost Hands have a HandGhost component, which accepts a HandGrabPose object (under Optionals - Hand Grab Pose in the Unity editor) and changes the hand mesh according to that pose info.
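Since the HandGhost's pose slot is assigned in the inspector as described, a runtime script only needs to switch which ghost hand is visible. A minimal sketch, assuming one ghost GameObject per recorded letter pose (the `GhostHandGallery` name and its fields are illustrative, not from the actual project):

```csharp
using UnityEngine;

// Sketch: one ghost hand GameObject per recorded letter pose. Each ghost's
// HandGhost component has its "Optionals - Hand Grab Pose" slot set to the
// recorded pose in the editor; this script only toggles which ghost is shown.
public class GhostHandGallery : MonoBehaviour
{
    [SerializeField] private GameObject[] _letterGhosts; // A..Z, in alphabet order

    private int _current = -1;

    public void Show(int letterIndex)
    {
        if (_current >= 0) _letterGhosts[_current].SetActive(false);
        _current = letterIndex;
        _letterGhosts[_current].SetActive(true);
    }
}
```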
Accomplishments that I'm proud of
Establishing the whole workflow: creating custom hand poses, persisting them in objects, building custom recognition on top of that data, creating ShapeRecognizer ScriptableObjects for every letter, and merging it all into a simple but fun learning experience. Creating something that provides value, teaches something AND shows a cool use case for passthrough and hand tracking is another thing I'm very proud of. Make it look easy ;)
What I learned
A lot about custom hand poses, how to use them, and general usage patterns for the RoomModel. Although I thought I already knew quite a bit about the Meta All-In-One SDK, I learned a lot I hadn't touched so far.
What's next for A.S.L.A.N.
Learning success is already tracked locally, and with every new run the letters you've practiced less often are shown more frequently for better retention. Still, A.S.L.A.N. is missing progress reports and more information about what has and hasn't been learned yet. Adding simple vocabulary and more fun to the experience itself, maybe with little riddles or levels (like in Tiny Tower), would definitely be fun for everyone. I'm thinking about making A.S.L.A.N. open source too, but we'll see :)
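The retention idea above, showing less-practiced letters more often, could be sketched as a simple weighted pick. This is only an illustration of the approach, not the project's actual code; `LetterScheduler` and its parameters are hypothetical names.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch: letters with fewer correct answers so far get a higher weight,
// so they are more likely to appear in the next wave. Names are illustrative.
public static class LetterScheduler
{
    public static List<char> NextWave(Dictionary<char, int> correctCounts,
                                      int waveSize, Random rng)
    {
        int best = correctCounts.Values.Max();
        // Weight = how far a letter lags behind the best-known one (+1 so
        // every letter keeps a nonzero chance); randomize within the weights.
        return correctCounts
            .OrderByDescending(kv => (best - kv.Value + 1) * rng.NextDouble())
            .Take(waveSize)
            .Select(kv => kv.Key)
            .ToList();
    }
}
```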