Inspiration

The idea for Iris grew out of a shared conviction that people who speak different languages need better ways to connect. While technology continues to connect people across great distances, language remains one of the most significant barriers to that connectivity. Iris addresses this barrier.

What it does

Iris is a full-stack web application that creates translated transcriptions of live audio. A class instructor, keynote speaker, or any other user can create a room, have other users join it, and immediately stream live translated transcriptions to everyone in the room.

How we built it

The frontend of Iris was built with React, React-Router, and JavaScript. The backend was written in Python and interacts with the AssemblyAI and Google Translate APIs. The frontend and backend communicate with each other over WebSockets.
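The room-and-broadcast model described above can be sketched as a small registry the WebSocket backend might keep. This is an illustrative sketch, not the actual Iris code; the class and method names are assumptions.

```python
class RoomRegistry:
    """Tracks which listeners have joined each room so a host's
    transcription can be fanned out to everyone in that room."""

    def __init__(self):
        self.rooms = {}  # room_id -> set of listener connections

    def join(self, room_id, connection):
        """Add a listener to a room, creating the room if needed."""
        self.rooms.setdefault(room_id, set()).add(connection)

    def leave(self, room_id, connection):
        """Remove a listener from a room; ignore unknown rooms."""
        self.rooms.get(room_id, set()).discard(connection)

    def recipients(self, room_id):
        """Return the listeners that should receive the next caption;
        a real server would call connection.send(...) on each."""
        return list(self.rooms.get(room_id, []))
```

In a real server each "connection" would be a live WebSocket object rather than a plain value, but the bookkeeping is the same.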

Challenges we ran into

Because the frontend and backend are written in different languages, converting the recorded audio into a format the AssemblyAI API accepts was challenging and took many hours of testing and debugging. While we made ample use of AssemblyAI's documentation, none of its snippets showcased cross-language interactions. Trial and error eventually prevailed, but not without a fight.
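The conversion step above typically amounts to turning the browser's floating-point audio samples into base64-encoded 16-bit PCM before sending them over the wire. The sketch below assumes Web-Audio-style float samples in [-1.0, 1.0] and a streaming API that accepts base64 PCM; the exact format Iris used may differ.

```python
import base64
import struct

def float32_to_pcm16_b64(samples):
    """Convert float samples in [-1.0, 1.0] (the shape the browser's
    Web Audio API produces) into base64-encoded 16-bit little-endian
    PCM. Values are clipped to the int16 range before packing."""
    ints = [max(-32768, min(32767, int(s * 32767))) for s in samples]
    pcm = struct.pack("<%dh" % len(ints), *ints)
    return base64.b64encode(pcm).decode("ascii")
```

The resulting string can be dropped into a JSON message and sent over the WebSocket to the backend, which forwards it to the transcription service.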

Setting up the Google Translate API was also a challenge. There was a tutorial on setting it up, but it assumed the code ran in the Google Cloud console: its commands only worked with the Google Cloud SDK, and there was no guide for converting them into Python. This was frustrating, as we needed to debug the translation code outside the Google Cloud console.
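Calling the Translation API from plain Python, rather than from the console, can look like the sketch below. It assumes the google-cloud-translate package and application-default credentials; the client is injected as a parameter so the logic can be exercised without network access. The function name is hypothetical, not from the Iris codebase.

```python
def translate_caption(text, target_language, client=None):
    """Translate one transcribed caption into the target language.

    If no client is supplied, fall back to the real Google client,
    which requires credentials:
        from google.cloud import translate_v2
        client = translate_v2.Client()
    """
    if client is None:
        # Real (non-stdlib) dependency: pip install google-cloud-translate
        from google.cloud import translate_v2
        client = translate_v2.Client()
    result = client.translate(text, target_language=target_language)
    return result["translatedText"]
```

Injecting the client also makes it easy to stub the API in tests, which was exactly the kind of offline debugging we were missing during the hackathon.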

Accomplishments that we're proud of

Iris accomplishes what we set out to do. It supports many languages and even allows multiple simultaneous rooms, each displaying translated text. Overall, we are very proud of what we have accomplished with Iris, especially considering the 36-hour timeframe.

What we learned

Over the course of this hackathon, we learned how to use WebSockets to maintain a continuous connection between the frontend and backend. Everyone also learned how to use the AssemblyAI and Google Cloud APIs and their documentation while building Iris.

What's next for Iris

One potential avenue is implementing live video feeds in the future. Additionally, letting hosts upload recordings for translation through an upload menu would further increase accessibility for Iris users.

Built With

React · React-Router · JavaScript · Python · AssemblyAI API · Google Translate API · WebSockets
