Visit the Firebase link (the third link) to try our project.

Inspiration

One of the biggest problems on any video call is the grid of black tiles left by participants who have turned their cameras off. To combat this, we set out to create avatars that replace those black screens.

What it does

Participants with their video disabled are given life through their words. Using AssemblyAI to transcribe each speaker's audio and Cohere.ai to classify the transcribed text as positive or negative, we gather real-time data that alters the expression of the speaker's avatar. In doing so, we bring life to an otherwise dark conversation.
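The transcription-to-expression step can be sketched as below. This is a minimal illustration of the idea, not our exact implementation: the `label`/`confidence` shape and the 0.6 threshold are assumptions for the example, not the actual Cohere.ai response format.

```javascript
// Sketch: turn a sentiment classification into an avatar expression.
// The result shape ({ label, confidence }) and the threshold are
// illustrative assumptions, not the exact Cohere.ai response format.
function mapSentimentToExpression(label, confidence) {
  if (confidence < 0.6) return "neutral"; // low confidence: keep a neutral face
  return label === "positive" ? "smiling" : "frowning";
}

// Called whenever a new transcript segment has been classified.
function onClassification(avatar, result) {
  avatar.expression = mapSentimentToExpression(result.label, result.confidence);
  return avatar.expression;
}

const avatar = { expression: "neutral" };
console.log(onClassification(avatar, { label: "positive", confidence: 0.9 })); // "smiling"
```

In the real app, `onClassification` would be wired to the callback that fires when AssemblyAI delivers a transcript segment and Cohere.ai returns its classification.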

How we built it

With sustainable UX design in mind, we settled on a simple but intuitive design for our web app. Our goal was to minimize the time users spend navigating the app and maximize the time they spend in calls.

Challenges we ran into

We found it difficult to integrate the APIs in the ways our use case required. We also ran into challenges deploying the app to Heroku.
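For anyone hitting the same Heroku deployment wall: declaring the web process explicitly was one of the fixes we needed. A minimal Procfile might look like this, assuming a Node entry point named server.js (the filename here is an illustrative assumption):

```
web: node server.js
```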

Accomplishments that we're proud of

  • Creating a functional database with Firebase.
  • Successfully implementing both AssemblyAI and Cohere.ai.
  • Successfully implementing Agora for video calling.

What we learned

We learned how to draw on documentation and community resources to integrate third-party APIs into our project.

What's next for vidlytics

Solidify Firestore security rules (anonymous logins, etc.).
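A starting point for those rules might require an authenticated user (including anonymous sign-ins, which Firebase Auth treats as authenticated) before allowing reads or writes. This is only a sketch; the `rooms` collection name is an illustrative assumption, not necessarily our schema:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // "rooms" is a placeholder collection name for this sketch.
    // request.auth is non-null for any signed-in user, anonymous included.
    match /rooms/{roomId} {
      allow read, write: if request.auth != null;
    }
  }
}
```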
