Inspiration

When we think of Zoom, Twitch, and other live streaming platforms, live captioning is rarely available to users. Presenters tend to keep the live transcript off for most of their presentations, and the platforms that do offer live transcripts only provide captioning within their own services. This makes things much more difficult for users who depend on live captioning as part of their daily routine. On top of that, people who are hard of hearing or who rely on ASL to communicate have no effective way to present other than typing.

What it does

.readme provides a universal live captioning service: anyone on any device or operating system (Windows, Mac, Linux, etc.) can have live captions on their desktop, regardless of which application they need them for. For example, when users log on to Zoom and join a meeting, live captions are overlaid directly on the screen, generated from the device's onboard output audio.
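
The overlay itself can be built as a frameless, always-on-top Electron window. The sketch below is a simplified illustration of that idea; the window options and the `captions.html` renderer page are assumptions for illustration, not our exact production code:

```typescript
// main.ts -- simplified sketch of a caption overlay window in Electron.
// Window options are illustrative; the real app may differ.
import { app, BrowserWindow } from 'electron';

function createOverlay(): BrowserWindow {
  const overlay = new BrowserWindow({
    width: 800,
    height: 120,
    frame: false,      // no title bar, so only the captions show
    transparent: true, // see-through background over any app
    alwaysOnTop: true, // stays above Zoom, Twitch, etc.
    focusable: false,  // keystrokes keep going to the app below
    skipTaskbar: true,
  });
  // Let mouse events fall through to whatever is underneath.
  overlay.setIgnoreMouseEvents(true);
  overlay.loadFile('captions.html'); // hypothetical caption renderer page
  return overlay;
}

app.whenReady().then(createOverlay);
```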

How we built it

We built this application using React, TypeScript, Electron, and Google Cloud's Speech-to-Text service.
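
For reference, this is the general shape of streaming recognition with Google's official Node client, `@google-cloud/speech`. It is a simplified sketch: the audio wiring is illustrative, and our real pipeline had to work around the Electron issues described below.

```typescript
// Simplified sketch of streaming recognition with @google-cloud/speech.
// The audio source and overlay wiring are illustrative.
import speech from '@google-cloud/speech';

const client = new speech.SpeechClient();

const recognizeStream = client
  .streamingRecognize({
    config: {
      encoding: 'LINEAR16',
      sampleRateHertz: 16000,
      languageCode: 'en-US',
    },
    interimResults: true, // partial results keep the captions feeling live
  })
  .on('error', console.error)
  .on('data', (data) => {
    const transcript = data.results[0]?.alternatives[0]?.transcript ?? '';
    // Push the latest transcript to the caption overlay here.
    console.log(transcript);
  });

// Pipe raw PCM audio (e.g. captured from the device's output) into it:
// audioSource.pipe(recognizeStream);
```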

Challenges we ran into

One of the biggest challenges our team faced was the limitations of the APIs and their compatibility with our web frameworks. For instance, Google's Speech-to-Text API threw numerous package errors when run inside Electron. When working with the ASL models, detection of signs was available; however, the models to actually interpret those signs were not pre-trained.

Accomplishments that we're proud of

A big accomplishment we're proud of is our seamless user interface; we wanted the app to be easy to access and easy to use. Another accomplishment we're proud of is how we worked around some of the Google API problems: we redirected the speech-to-text transcripts through another site that we built and shipped those transcripts directly into our live captioning overlay.
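
The exact plumbing of that workaround isn't detailed here, but as one illustrative sketch, a companion page could forward transcripts to the desktop app over a local WebSocket. The transport, port, and message format below are assumptions, not our confirmed design:

```typescript
// Illustrative sketch only: one way a companion site could ship
// transcripts into the Electron app. The WebSocket transport, port,
// and message shape are assumptions, not the project's actual design.
import { WebSocketServer } from 'ws';

const wss = new WebSocketServer({ port: 8765 }); // hypothetical local port

wss.on('connection', (socket) => {
  socket.on('message', (raw) => {
    const { transcript } = JSON.parse(raw.toString());
    // Forward the transcript into the caption overlay,
    // e.g. via Electron IPC to the renderer window.
    console.log('caption:', transcript);
  });
});
```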

What we learned

For most of us, this was our first time using Electron. While we all had JavaScript and React experience, many of us had to learn the basics of Electron and TypeScript to build our app. We also learned that most Google APIs do not have much support for Chromium shells like Electron, so adapting to these shortcomings through creative means taught us plenty of hacky tricks for reaching our end goal.

What's next for DotReadme

The ultimate goal for DotReadme is to be accessible to everyone, including those who rely on ASL to get through their daily lives. What's next for DotReadme is the ability to detect AND translate ASL signs in order to better assist our users. In addition, we want to add an ASL-to-speech service (using Google's Text-to-Speech API) so that the presenter appears to be actually speaking to their audience.
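
As a rough sketch of how that ASL-to-speech step could look with Google's `@google-cloud/text-to-speech` Node client (the voice settings and file output are placeholders, since this feature is still planned):

```typescript
// Sketch of the planned ASL-to-speech step using Google Cloud
// Text-to-Speech. Voice settings and playback are placeholders.
import textToSpeech from '@google-cloud/text-to-speech';
import { writeFile } from 'node:fs/promises';

const client = new textToSpeech.TextToSpeechClient();

async function speak(text: string): Promise<void> {
  const [response] = await client.synthesizeSpeech({
    input: { text }, // e.g. the text recognized from ASL signs
    voice: { languageCode: 'en-US', ssmlGender: 'NEUTRAL' },
    audioConfig: { audioEncoding: 'MP3' },
  });
  // In the app this audio would be played back to the meeting;
  // writing a file just makes the sketch easy to test.
  await writeFile('speech.mp3', response.audioContent as Uint8Array);
}

speak('Hello from DotReadme').catch(console.error);
```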
