Inspiration
Growing up, we struggled with speaking to audiences. We weren't able to effectively put our thoughts into words, and we felt there weren't many easily accessible tools to tackle this issue. So, we decided to build our own tool!
What it does
Our AI tool allows you to record a live video on our website and immediately receive scores across several categories, including hand gestures, eye contact, clarity (filler words and stutters), and vocabulary, along with an overall score.
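As a rough illustration of how per-category scores could roll up into an overall score, here is a minimal sketch using a weighted average. The category weights and the exact category keys are hypothetical, not the actual formula ArticuLens uses:

```python
# Hypothetical sketch: combine per-category scores (each 0-100) into an
# overall score via a weighted average. Weights are illustrative only.
CATEGORY_WEIGHTS = {
    "hand_gestures": 0.2,
    "eye_contact": 0.2,
    "clarity": 0.3,
    "vocabulary": 0.3,
}

def overall_score(scores: dict) -> float:
    """Weighted average of category scores; weights sum to 1.0."""
    return sum(CATEGORY_WEIGHTS[name] * scores[name] for name in CATEGORY_WEIGHTS)

print(overall_score({"hand_gestures": 80, "eye_contact": 70,
                     "clarity": 60, "vocabulary": 90}))  # 75.0
```

Weighting clarity and vocabulary more heavily here is just one possible design choice; the real scoring could weight categories equally or use a model-driven aggregate.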
How we built it
We built ArticuLens using MemryX's hardware & hand gesture tracking models, as well as OpenCV, OpenAI's Whisper model, GPT-4, and many more Python libraries/tools.
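For example, the clarity category could be computed from the Whisper transcript: once the audio is transcribed to text, filler words can be counted with plain Python. This is a sketch under assumptions (the filler list and the scoring formula are illustrative, not the project's actual code):

```python
import re

# Illustrative filler-word list; the real detector may use a longer list
# and also account for stutters.
FILLERS = {"um", "uh", "like", "basically", "literally"}

def clarity_score(transcript: str) -> float:
    """Score 0-100: penalize the fraction of words that are fillers."""
    words = re.findall(r"[a-z']+", transcript.lower())
    if not words:
        return 0.0
    filler_count = sum(1 for w in words if w in FILLERS)
    return round(100 * (1 - filler_count / len(words)), 1)

print(clarity_score("Um, so basically we, uh, built a tool"))  # 62.5
```

A word-set approach like this is simple but naive (e.g. "like" is only sometimes a filler); a production version would likely use context from the transcript or the model's timestamps.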
Challenges we ran into
The biggest challenge was integrating MemryX's hardware and software with the rest of our program. The Linux setup was extremely confusing at first, and it was very difficult to merge their program with our other features. However, through reading documentation and constant testing, we were able to make it work!
Accomplishments that we're proud of
The accomplishment we're proudest of is probably figuring out how to navigate Linux. Neither of us had ever used Linux before, and learning it on the spot while building our program was definitely a challenge, to say the least.
What we learned
How to effectively integrate libraries, merge separate components into one program, connect a backend to a frontend, work in Linux, and more.
What's next for ArticuLens
For ArticuLens, we plan on deploying our project to a cloud server rather than its current local server, as well as developing new features and improving our front-end UI.
