Inspiration

We were inspired by augmented reality to create an experience that lets people who are deaf, hard of hearing, or speech impaired communicate via sound and text.

What it does

SignAR is the next generation of sign language application, giving non-verbal communicators new tools to communicate. Core features include gesture/sign-language-to-voice/text translation and the ability to customize vocabulary and phrases (hot keys), so users can verbally communicate with friends, family, coworkers and loved ones.

How we built it

  • We built this experience in Unity, using a Leap Motion controller for hand tracking and an Oculus Rift hacked with a camera to enable an AR function, along with several laptops and a Bluetooth speaker, as no directional audio speaker was available.

  • Usability and readability were our main UX and UI concerns. For UX, we developed a user persona and sketched low-fidelity prototypes with a functional flow. We focused on visual indicators and feedback for hearing-impaired users and on more comfortable face-to-face communication. To differentiate the user from the conversation partner, we used complementary colours. We used the user's eye level as the reference for UI position, so the augmented reality UI harmonizes with reality and does not disturb the user's conversation. We leveraged Adobe Photoshop and Illustrator for the UI and branding elements.
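In our Unity build, gesture recognition runs in C# against the Leap Motion SDK, but the core matching step — comparing a tracked hand pose against a small library of known signs — can be sketched language-agnostically. The sketch below uses nearest-neighbour matching on made-up template vectors and labels (placeholders, not our actual vocabulary data):

```python
import numpy as np

# Hypothetical sketch: classify a hand pose by nearest-neighbour matching
# against a small library of stored template poses. In the real build the
# Leap Motion SDK supplies joint positions; here they are faked as short
# normalized feature vectors.
TEMPLATES = [
    ("hello",     np.array([0.9, 0.1, 0.8, 0.2, 0.7])),
    ("thank_you", np.array([0.2, 0.8, 0.1, 0.9, 0.3])),
    ("yes",       np.array([0.5, 0.5, 0.5, 0.5, 0.5])),
]

def classify_pose(features, max_distance=0.5):
    """Return the label of the closest template, or None if nothing is close."""
    best_label, best_dist = None, float("inf")
    for label, template in TEMPLATES:
        dist = np.linalg.norm(features - template)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist <= max_distance else None

# A pose close to the "hello" template matches it.
print(classify_pose(np.array([0.88, 0.12, 0.79, 0.22, 0.68])))  # → hello
```

The distance threshold keeps unrecognized poses from being spoken aloud; a learned classifier could replace this lookup as the vocabulary grows.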

Challenges I ran into

  • We ran into challenges around teaching the experience to detect human hands and to interpret the meaning behind the gestures those hands make.
    None of the team previously knew American Sign Language, and the learning curve was fairly steep, but we learned enough to start building a gestural vocabulary. Machine learning would be a fantastic addition to the experience, helping build out the vocabulary efficiently and create a meaningful tool.
  • This experience would lend itself very well to the HoloLens 2; however, due to hardware constraints one was not available, so we hacked together a hardware solution intended to emulate it.

Accomplishments that I'm proud of

We are proud of numerous aspects of the project, but the top would be the technology created in a short time, the thoughtfulness of the user experience, and the development of the concept by scrambling to build subject-matter expertise in a sensitive area: non-verbal communication. We crowdsourced within the hackathon and leveraged mentors to find a wonderful person named Shannon, an educator who works with non-verbal communicators, whom we interviewed and talked with extensively in order to build a sensible and thoughtful approach to creating this product.

What we learned

We learned about the immense challenges and barriers that exist for non-verbal communicators. 98% of deaf people do not receive education in sign language. 72% of families do not sign with their deaf children, which is heartbreaking. 70% of deaf people don't work or are underemployed, which certainly doesn't bode well for their confidence or happiness. 1 in 4 non-verbal communicators has left a job due to discrimination. All of these facts drive us to create a better way for non-verbal communicators to express themselves in personal and professional contexts.
Technically, we learned how to work around hardware limitations — without a HoloLens 2, we made do with the Oculus Rift. Ideally, the next step for this project would be to make the product hardware agnostic in order to empower meaningful communication for the wearer of any headset.

What's next for SignAR

  • This tool could be a very significant product for non-verbal communicators, both locally and globally. Exploring Watson integration would allow machine learning to power up the vocabulary and scale the potential for meaningful communication around the world.

Future features:

  • Integrate with open-source Watson, leveraging machine learning to rapidly build out a robust vocabulary database.
  • Integrate with an existing open-source voice-to-gesture project from a past MIT hackathon, effectively closing the loop on meaningful communication for non-verbal communicators.
  • Develop directional audio functionality, allowing for a presentation mode or more private conversations to take place.