Inspiration

We wanted to build an accessibility-focused app that improves people's daily lives.

What it does

VisAId takes in live camera input and describes the user's surroundings aloud.

How we built it

Built in Python on top of an existing GitHub repo, using the ultralytics, pyttsx3, and cvzone libraries.
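The glue between detection and speech can be sketched roughly like this. The function name and sentence format are our own illustration, not the exact code; the assumed flow (ultralytics detections in, pyttsx3 speech out) is the one described above:

```python
from collections import Counter

def describe_detections(labels):
    """Compose a speakable sentence from detected object labels.

    `labels` is a list of class names as a YOLO detector (e.g. ultralytics)
    might return, such as ["person", "person", "chair"].
    """
    if not labels:
        return "I don't see anything right now."
    counts = Counter(labels)
    parts = []
    for name, n in counts.items():
        noun = name if n == 1 else name + "s"  # naive pluralisation
        parts.append(f"{n} {noun}")
    if len(parts) == 1:
        listing = parts[0]
    else:
        listing = ", ".join(parts[:-1]) + " and " + parts[-1]
    return f"I see {listing}."

# The sentence could then be handed to pyttsx3 for speech:
#   import pyttsx3
#   engine = pyttsx3.init()
#   engine.say(describe_detections(labels))
#   engine.runAndWait()

print(describe_detections(["person", "person", "chair"]))
# → I see 2 persons and 1 chair.
```

Keeping the sentence-building step separate from the detector and the TTS engine makes it easy to test without a camera attached.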

Challenges we ran into

Converting visual input into text that could be spoken aloud, as well as issues with environment setup.

Accomplishments that we're proud of

All of the core functionality, from object detection to spoken descriptions, is there.

What we learned

We learned a lot more about LLMs, full-stack development, and AI libraries.

What's next for VisAId

A full user-facing UI and a deployed mobile app.

Built With

python, ultralytics, pyttsx3, cvzone
