Inspiration

I remember watching a video in my CMS class that depicted the struggles of the visually and hearing impaired. After watching it, my teammates and I realized the extra steps hearing-impaired people must take to do something most of us take for granted. From needing a mutual understanding of sign language to relying on intrusive devices (e.g., passing a phone translator back and forth in a real-time conversation), conversation has certainly not been easy. So, when we were presented with the opportunity to use the new and innovative Microsoft HoloLens 2, we knew we could use its non-intrusive design to improve conversations for the hearing impaired.

What it does

Our app uses the HoloLens's input and output features to facilitate conversation for the hearing impaired. The user is presented with a menu offering two options: text-to-speech and speech-to-text. For users who are vocally impaired, text-to-speech lets them type a message on the HoloLens and have it read aloud. Selecting speech-to-text opens an audio stream that captures the words spoken by the nearest person and displays them as captions in the HoloLens.
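
Under the hood, the speech-to-text mode boils down to a continuous recognizer feeding a caption label. The sketch below is only an illustration of that idea, assuming the Azure Cognitive Services Speech SDK inside Unity; the `captionText` field, subscription key, and region are hypothetical placeholders, not our actual configuration.

```csharp
using System.Collections.Concurrent;
using Microsoft.CognitiveServices.Speech;
using UnityEngine;
using UnityEngine.UI;

public class CaptionListener : MonoBehaviour
{
    // Hypothetical placeholders: swap in your own key, region, and UI reference.
    [SerializeField] private Text captionText;
    private const string SubscriptionKey = "<your-key>";
    private const string Region = "<your-region>";

    private SpeechRecognizer recognizer;
    // Recognition callbacks fire off the main thread, so queue results for Update().
    private readonly ConcurrentQueue<string> pendingCaptions = new ConcurrentQueue<string>();

    private async void Start()
    {
        var config = SpeechConfig.FromSubscription(SubscriptionKey, Region);
        recognizer = new SpeechRecognizer(config); // default microphone input

        // Each finalized utterance becomes a caption line.
        recognizer.Recognized += (s, e) =>
        {
            if (e.Result.Reason == ResultReason.RecognizedSpeech)
                pendingCaptions.Enqueue(e.Result.Text);
        };

        await recognizer.StartContinuousRecognitionAsync();
    }

    private void Update()
    {
        // Unity UI may only be touched on the main thread.
        while (pendingCaptions.TryDequeue(out var caption))
            captionText.text = caption;
    }

    private async void OnDestroy()
    {
        if (recognizer != null)
            await recognizer.StopContinuousRecognitionAsync();
    }
}
```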

How we built it

First, we broke the idea down into two parts: speech recognition and mixed reality. We then split into two groups, one searching for speech APIs and the other experimenting with the Unity environment. The core functionality of the app is built in Unity with C#. We used the Mixed Reality Toolkit (MRTK) to get comfortable with the environment, then looked online for existing Speech SDKs and APIs to power our product's functionality. After working through plenty of documentation and tutorials, we were able to debug and get a working app running on the mixed reality simulator. Through collaboration with Azure team members and fellow hackers, we were finally able to deploy our app onto the HoloLens and present it!
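
The text-to-speech side is the mirror image of the caption sketch above: the same SDK's synthesizer reads out whatever the user typed. Another rough sketch under the same assumptions, with illustrative class and method names rather than our actual code:

```csharp
using Microsoft.CognitiveServices.Speech;
using UnityEngine;

public class MessageSpeaker : MonoBehaviour
{
    // Hypothetical placeholders for the real credentials the app would use.
    private const string SubscriptionKey = "<your-key>";
    private const string Region = "<your-region>";

    private SpeechSynthesizer synthesizer;

    private void Awake()
    {
        var config = SpeechConfig.FromSubscription(SubscriptionKey, Region);
        // With no AudioConfig argument, the synthesizer plays through the default speaker.
        synthesizer = new SpeechSynthesizer(config);
    }

    // Called with the text the user typed on the HoloLens keyboard.
    public async void SpeakMessage(string message)
    {
        if (string.IsNullOrWhiteSpace(message)) return;
        await synthesizer.SpeakTextAsync(message);
    }

    private void OnDestroy()
    {
        synthesizer?.Dispose();
    }
}
```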

Challenges we ran into

The first problem we ran into was finding an efficient way to work collaboratively. We created a GitHub repository with a Unity .gitignore, only to find that we couldn't push our Speech SDK because the files were too large. We decided to have everyone download the SDK individually, but then discovered there were no ready-made SDKs for macOS. With development limited to our Windows machines, the rest of the team researched how to deploy onto the HoloLens. Encountering countless bugs and crashes, we worked with mentors and fellow hackers to overcome the steep learning curve.

Accomplishments that we're proud of

Our team is particularly proud of hacking on an unfamiliar, brand-new technology. We are also proud of the collaboration and problem-solving that went into creating our app.

What we learned

First, we learned how to create a game plan to stay on track and ship by a deadline. We also learned and applied many different ways to collaborate: besides working together online through GitHub, we collaborated with peers and mentors. We learned how to problem-solve and overcome unforeseen obstacles using the many resources available to us. Lastly, we learned how to develop for a new, cutting-edge technology.

What's next for Clens

For Clens, we want to implement new features! One that we started on but couldn't finish was using a database to store common response phrases that the user can tap instead of having to type. In addition, we want to refine the UI to improve user flow and aesthetics.

Built With

C#, Unity, Mixed Reality Toolkit (MRTK), Azure Speech SDK, Microsoft HoloLens 2
