💡 Inspiration

Since the dawn of human civilization, speech has played an integral part in our society. Speech allows us to express ourselves and form communities; to speak is to be human. Now, what if this were all taken away? How would you live?

Well, the grim reality is that 1 in 10 Canadians face this issue every day due to speech disorders stemming from injuries, strokes, and other conditions. Individuals with speech disorders may feel embarrassed and isolated, and also experience discrimination in the workplace.

In most cases, the only option is to seek help from a speech therapist. However, it's incredibly difficult to work with a therapist consistently due to barriers of location, time, and money, especially for Canadians in rural areas.

Introducing Viva! A fun, learning-focused mobile app that uses speech recognition technology to give you personalized feedback on how well you pronounce words.

🔍 What it does

Viva's machine learning engine analyzes the way you say words and provides personalized feedback to help improve your pronunciation. Our application empowers individuals to build confidence and live life to the fullest!

⚙️ Our Tech Stack

Mobile

  • Frontend: React Native, Expo, Expo APIs (e.g., Expo Audio, Expo SecureStore)
  • Backend: TypeORM, TypeScript, Express.js

Web

  • Frontend: Vanilla HTML/JS/CSS
  • Backend: Django, SQL

Infrastructure

  • Google Cloud AI
  • Speech-to-Text and Text-to-Speech APIs
  • CockroachDB

🚧 Challenges we ran into

  • Developing an encrypted data store for Speech-to-Text and Text-to-Speech data hosted on Google Cloud in only 24 hours
  • Coordinating with team members who live in different time zones
  • Losing a considerable amount of work on the pitch video when our video editing software crashed without saving
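For the encrypted data store, the core idea is to encrypt each audio clip before it ever touches persistent storage. As a rough illustration (not our exact code; the function names and the choice of AES-256-GCM here are assumptions), this can be sketched with Node's built-in `crypto` module:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

interface EncryptedClip {
  iv: Buffer;   // unique nonce generated per clip
  tag: Buffer;  // GCM authentication tag
  data: Buffer; // ciphertext
}

// Encrypt an audio buffer with AES-256-GCM before persisting it.
export function encryptClip(audio: Buffer, key: Buffer): EncryptedClip {
  const iv = randomBytes(12); // 96-bit nonce, standard for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(audio), cipher.final()]);
  return { iv, tag: cipher.getAuthTag(), data };
}

// Decrypt a stored clip; throws if the ciphertext or tag was tampered with.
export function decryptClip(clip: EncryptedClip, key: Buffer): Buffer {
  const decipher = createDecipheriv("aes-256-gcm", key, clip.iv);
  decipher.setAuthTag(clip.tag);
  return Buffer.concat([decipher.update(clip.data), decipher.final()]);
}
```

A nice property of authenticated encryption like GCM is that the backend can detect tampered clips when reading them back, not just keep them confidential.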

✔️ Accomplishments that we're proud of

  • Time management: building both a web application and a mobile application within the time frame of Hack the North 2021
  • Leveraging Google Cloud AI and its confidence scores to estimate speech quality
  • Hosting an Express API on Heroku that interfaces cleanly with the mobile app
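The confidence-score idea can be sketched roughly like this (hypothetical names and a simplified response shape; Google Cloud Speech-to-Text reports a confidence value per recognized word, which can be aggregated into a single pronunciation score):

```typescript
interface WordInfo {
  word: string;
  confidence: number; // 0.0 – 1.0, as reported per word by Speech-to-Text
}

// Average the per-word confidences against the target phrase, scoring
// zero for any target word the recognizer never heard at all.
export function pronunciationScore(
  target: string[],
  recognized: WordInfo[],
): number {
  const byWord = new Map<string, number>();
  for (const w of recognized) {
    byWord.set(w.word.toLowerCase(), w.confidence);
  }
  const total = target.reduce(
    (sum, word) => sum + (byWord.get(word.toLowerCase()) ?? 0),
    0,
  );
  return total / target.length; // 1.0 = every target word heard clearly
}
```

The score can then be thresholded in the app to decide whether to praise the user or suggest trying the word again.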

📚 What we learned

Throughout Hack the North 2021, we learned a variety of different frameworks, techniques, and APIs to build Viva, including React Native, Google Cloud AI APIs, and CockroachDB. Before the hackathon, most of us had limited experience with React Native and mobile development as a whole, and we learned a lot about the framework over the 24 hours. Furthermore, we had never worked with the Google Cloud AI APIs or CockroachDB before, and we had a blast learning about the different APIs and leveraging them for speech therapy.

🔭 What's next for Viva: Re-imagining Speech Therapy!

While we were able to complete a considerable amount of the project in the given timeframe, there are still places where we can improve. A few notable areas include fully completing all of the learn modes (ReadIt, Flash, Listen) with our original target of 100 questions, improving performance by integrating native code for crucial features such as speech-to-text, and deploying to the Apple App Store and Google Play Store.

💰 Business Viability (Contrary Entrepreneurship Prize)

We believe that our hack can become a successful business given the current demand for learning-focused mobile applications. Many adults love to learn on the go, and with Viva, they can improve their communication and develop their vocabulary! Through a free tier alongside a paid subscription, Viva can blossom into a very successful business.

Please note: As of now, the iOS version of the app does not support the speech-to-text or text-to-speech APIs.
