Inspiration
Most of our parents are immigrants and non-native English speakers. When they first came to the US, they had a hard time adapting to the culture and the language. Even though they all learned English in school, it's one thing to learn something on paper and a totally different thing to put it into practice at a high level. It's the same reason we can take Spanish or French all four years of high school and still not be able to hold a conversation with native speakers in Mexico or France.
We wanted to build something that breaks down this language barrier by giving people a new way to learn languages, one built on actual conversation instead of memorizing textbooks and vocab lists.
What it does
Adapt is software for smart glasses that translates conversations into your language in real time. It also picks up on slang, common phrases, and key words, and remembers them. You can later open the web dashboard to review every phrase and word you heard throughout the day and use them to learn the language from scratch. Adapt also lets you switch into context mode, which doesn't show the wearer the entire translation of someone's speech, only the common phrases and sayings, so the wearer can practice their understanding of the language with minimal hints from the glasses. This gives users multiple ways to test whether they're really learning the language.
How we built it
We built it using Next.js, TypeScript, JavaScript, the Gemini API, Supabase, and open-source libraries for the translation features.
- We first booted up the smart glasses with an open-source SDK and started mapping hex-code values to actionable events on the glasses.
- After that, we built functions for transcription using an LC3 audio stream from the microphone.
- With the help of Google Gemini, we classified the spoken language, passed the transcript through an open-source translation package, and relayed the result back to the glasses display.
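The transcription → classification → translation → display flow above can be sketched roughly as follows. The function names and injected dependencies are placeholders we made up for illustration, not the project's actual code:

```typescript
// Rough sketch of the utterance pipeline: classify the language (e.g. via
// Gemini), translate if needed, then push the result to the glasses display.
// All dependency names here are hypothetical.
interface PipelineDeps {
  classify: (text: string) => Promise<string>; // returns a language code
  translate: (text: string, from: string, to: string) => Promise<string>;
  display: (text: string) => void; // render on the glasses display
}

async function handleUtterance(
  text: string,
  targetLang: string,
  deps: PipelineDeps
): Promise<string> {
  const sourceLang = await deps.classify(text);
  // Skip translation when the speaker already uses the wearer's language.
  const output =
    sourceLang === targetLang
      ? text
      : await deps.translate(text, sourceLang, targetLang);
  deps.display(output);
  return output;
}
```

Injecting the classifier and translator like this keeps the flow testable without the real SDK or API.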
Once we had this, we set up our Supabase instance, which needed to include:
- User preferences
- Common sayings picked up through conversation, and their descriptors
- User's familiarity with languages, phrases, and their progress in learning these languages
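The tables above might look roughly like the following row types. The column names are our own guesses for illustration, not the project's actual Supabase schema:

```typescript
// Illustrative row shapes for the Supabase tables described above.
// Column names are assumptions, not the real schema.
interface UserPreferences {
  user_id: string;
  target_lang: string;   // language to translate into
  context_mode: boolean; // hints-only mode on the glasses
}

interface CommonSaying {
  id: number;
  user_id: string;
  phrase: string;
  descriptor: string; // short explanation of the saying
  heard_at: string;   // ISO timestamp
}

interface LanguageProgress {
  user_id: string;
  lang: string;
  familiarity: number; // 0-100
}

// Small pure helper: raise familiarity after a successful review, capped at 100.
function bumpFamiliarity(p: LanguageProgress, delta = 5): LanguageProgress {
  return { ...p, familiarity: Math.min(100, p.familiarity + delta) };
}
```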
After our Supabase instance and auth flow were set up, we worked on the AI language assistant, which included fun games the user could play to learn languages.
- These games used a RAG model that pulled information straight from the user's log of conversations they've had.
- Using historical interactions, the AI (powered by the Gemini API) would suggest flashcards to drill unfamiliar or confusing terms.
- The AI would also let the user speak and hold conversations with it, correcting their misuse of certain phrases and then teaching them the proper way to say things.
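The flashcard-selection step could be sketched like this, assuming each logged phrase tracks how often the user got it right (the retrieval/embedding half of the RAG pipeline is elided, and all field names are illustrative):

```typescript
// Pick the phrases the user struggles with most and turn them into
// flashcards. Field names are illustrative assumptions.
interface LoggedPhrase {
  phrase: string;
  translation: string;
  timesSeen: number;
  timesCorrect: number;
}

interface Flashcard {
  front: string;
  back: string;
}

function suggestFlashcards(log: LoggedPhrase[], count: number): Flashcard[] {
  const accuracy = (p: LoggedPhrase) =>
    p.timesCorrect / Math.max(1, p.timesSeen);
  return [...log]
    .sort((a, b) => accuracy(a) - accuracy(b)) // least familiar first
    .slice(0, count)
    .map((p) => ({ front: p.phrase, back: p.translation }));
}
```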
Since our app is all about education, we also built an analytics tab where users can view their progress, visually see how familiar they are with each language, and understand where they might need extra work.
- We believe this will encourage users to keep up their learning streak, as they are motivated to make progress and see firsthand the practical use of their language skills in their next conversation.
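One metric such an analytics tab could compute is the daily learning streak. A minimal sketch, assuming activity dates are stored as ISO `YYYY-MM-DD` strings in UTC:

```typescript
// Count consecutive days of learning activity ending at `today`.
// Dates are ISO YYYY-MM-DD strings in UTC (an assumption for this sketch).
function currentStreak(activeDays: string[], today: string): number {
  const days = new Set(activeDays);
  let streak = 0;
  const d = new Date(today + "T00:00:00Z");
  while (days.has(d.toISOString().slice(0, 10))) {
    streak++;
    d.setUTCDate(d.getUTCDate() - 1); // step back one calendar day
  }
  return streak;
}
```

Using the UTC `Date` methods keeps the day arithmetic independent of the server's local timezone.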
Challenges we ran into
Working with the glasses hardware was a challenge, since we had to boot the glasses on a custom SDK. There were a lot of bugs: slow transcription, incorrect translations, and common phrases in conversation going undetected.
We solved these issues by digging deep into documentation, finding workarounds that people had posted on Discord servers and Reddit threads, and drawing out our architecture on a whiteboard many times before implementing it.
Accomplishments that we're proud of
- We are super proud that we got a working educational tool running on the smart glasses. We hope you can see just how big the impact of such a tool could be. In just over a day, we built a rough version of a tool that could change the way people learn languages and connect.
What we learned
- We enjoyed learning challenging skills from this project. We aren't the most experienced with hardware, but we wanted to learn it eventually, since we believe hardware is the future. We learned how to go from JavaScript code to displaying translations on a smart-glasses display.
What's next for Adapt
- We want to expand to other smart glasses. Our glasses have a display but no camera. We would love to put our software on smart glasses that have a camera, so that people can look at road signs, restaurant menus, etc. in different languages and understand what they mean.
- We also want to polish the product and bring it into the hands of hearing-impaired individuals and immigrants who have a hard time speaking the language of the country they immigrated to.
- Imagine a world where language isn't a barrier anymore and people from all over the world can connect and communicate with each other. With Adapt, you can speak other people's languages and they can speak yours. And the minute you speak someone else's language, they'll never forget it for the rest of their life.
Let's make Adapt the new way people learn languages and get educated about diverse cultures!
Built With
- geminiapi
- javascript
- next
- smartglasses
- supabase
- typescript