Note: I didn't finish this, but I'll leave up my Devpost description since I can't revoke my submission from Lovelace Hacks and CivHacks. The above video was one of my test animations for the sign "what" (or a shrug, very fitting), which is publicly available. I had to link a video in order to make Lovelace Hacks' 7am PST Devpost submission deadline.

This was a good exercise in creating a project by myself. It's good to work on a team, and it's also good to learn how to work by yourself, so I'm still thankful for this experience; I learned a ton. :D I got to deploy on the HoloLens I have access to, though I don't have anything to show properly.

Nonetheless, I hope others find my attempt useful to learn from if they also want to compete as a solo team. I know I'm going to try again soon. <3

  • Ryan

Welcome to Ursign -- a Mixed Reality app for translating vocal speech to American Sign Language, brought to you by Hans, the Ursine Interpreter.

Features

Vocal Speech-to-ASL Translation: Utilizing the HoloLens/MRTK APIs, Ursign listens to vocal speech and translates it into American Sign Language through its ursine interpreter buddy.

How Ursign Was Built

Inspired by SignGlass and HoloHear, Ursign was first conceptualized as a project for learning to develop on the HoloLens 2. It evolved into a mini passion project, and coupled with Table 1's success with Catiator at TreeHacks 2021, it inspired me to learn more about American Sign Language, the Deaf community, and different modes of communication. I myself am hearing, though my hearing is significantly damaged, and I expect to need more than my vocal cords to communicate in the future. This seemed like a good time to pair that curiosity with my eagerness to use a HoloLens. I'm also interested in exploring more ways to provide tools to differently-abled people who are interested in using them for assistance.

Ursign was created by me as a solo project. I am interested in extended reality (XR), including mixed reality (MR). I went through quick iterative design passes before diving into the actual 3D and Unity implementation.

Character Design-Fueled UX

General Inspiration

I drew a lot of inspiration from HoloHear (https://healthiar.com/holohear-translates-the-spoken-word-to-asl-in-mixed-reality). Ursign is very similar to HoloHear, but because HoloHear was also a hackathon project, I felt it was missing a lot of the UX needed to make it truly usable. And though the choice makes sense logically, I also felt that the robotic 3D model used to sign is not very inviting and is hard to see (per the demo video), so I wanted to try tackling these issues. SignGlass is a great alternative that makes use of a live interpreter; from what I read, its primary use is for students attending lectures.

The issues I saw with HoloHear mainly centered on its robot interpreter character:

  • The robot's hands are the same color as the majority of its model
  • The robot itself is very small upon load-in, and no resizing was demonstrated
  • The robot looks like it may have a face rig for the facial expressions that signing requires, but I did not see it demonstrated (possibly because the model is so small)

I tried to address these issues when designing the character, nicknamed Hans:

Quickly designing Hans, the Ursine Interpreter (silhouette sketches + quick hand drawings)

Quickly designing Hans, the Ursine Interpreter (Quick Turnaround)

Hans is a golden bear as a tribute to my first alma mater, UC Berkeley. <3

Technical Design and Implementation

Ursign was built with Unity and Microsoft's Mixed Reality Toolkit (MRTK). Development focused on using detectable audio phrases to trigger animation states on Hans, the Ursine Interpreter. Hans is visible in mixed reality on a HoloLens -- they hang out in the user's view, listening for phrases to translate into American Sign Language until the user is done with Ursign.
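
The core wiring is simple enough to sketch. Below is a minimal example of what phrase-to-animation triggering could look like; it is not the actual Ursign source. It uses Unity's built-in KeywordRecognizer (UnityEngine.Windows.Speech, which works on HoloLens) to keep the sketch self-contained, whereas the real project routes speech through MRTK. The class name, phrase list, and Animator trigger names are all hypothetical.

```csharp
using System.Collections.Generic;
using System.Linq;
using UnityEngine;
using UnityEngine.Windows.Speech;

// Hypothetical sketch: listens for a fixed set of spoken phrases and fires
// the matching Animator trigger on the Hans model.
public class PhraseToSignAnimator : MonoBehaviour
{
    [SerializeField] private Animator hansAnimator; // Animator on the Hans rig

    // Map each detectable phrase to an Animator trigger for the matching sign clip.
    private readonly Dictionary<string, string> phraseToTrigger = new Dictionary<string, string>
    {
        { "what",      "Sign_What" },
        { "hello",     "Sign_Hello" },
        { "thank you", "Sign_ThankYou" },
    };

    private KeywordRecognizer recognizer;

    private void Start()
    {
        // KeywordRecognizer matches against a fixed keyword list; no full dictation needed.
        recognizer = new KeywordRecognizer(phraseToTrigger.Keys.ToArray());
        recognizer.OnPhraseRecognized += OnPhraseRecognized;
        recognizer.Start();
    }

    private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        // args.text is the recognized phrase; play the corresponding sign animation.
        if (phraseToTrigger.TryGetValue(args.text, out string trigger))
        {
            hansAnimator.SetTrigger(trigger);
        }
    }

    private void OnDestroy()
    {
        if (recognizer != null)
        {
            if (recognizer.IsRunning) { recognizer.Stop(); }
            recognizer.Dispose();
        }
    }
}
```

The shape is the same either way: a recognized phrase selects an animation state on Hans, and the Animator plays the corresponding sign clip.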

Technologies / Hardware

  • Mixed Reality: HoloLens 2, Unity, C#, MRTK
  • 3D Modeling/Animation: Autodesk Maya, Unity, Mixamo
  • UI/UX/Graphics: Adobe Photoshop

Challenges I Ran Into

  • Designing a bear with actual hands and not making it look (that) creepy
  • Having to re-download Unity and MRTK onto my laptop due to missing lib files :(
  • Long build times
  • Outdated documentation
  • Using VSCode to sideload
  • Mentally screaming

Accomplishments I'm Proud Of

  • Developing on a HoloLens, despite not finishing!
  • Creating a mesh clean enough to work with an autorigger on the first pass
  • Decent time management for a solo project until I passed out
  • Participating in a hackathon by myself

What I Learned

  • More about VSCode
  • Blocky cartoon hands make it more difficult to sign
  • Character design is another form of UX
  • Bob with hands will haunt me

What's Next for Ursign

  • Face/lip recognition to support lip reading
  • Support for Signing Exact English (SEE) and Pidgin Signed English (PSE)
  • Different interpreter characters to choose from

Project Credits/Citations

Special Thanks

  • Nancy Zuo for her feedback on character design and letting me draw from her Devpost setups
  • Mitchell Kuppersmith for Bob with hands
  • Rahul Khanna for giving me the OK to use the HoloLens!
  • Table 1 for existing