We were inspired by the audio descriptions of on-screen action provided for movies and TV shows.


Our product provides audio descriptions of the people in front of the glasses, allowing individuals with visual impairments to build a mental picture of who and what they are looking at.
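To illustrate the idea, here is a minimal sketch of how detections might be turned into a spoken-style description. The detection format and the `describe_scene()` helper are hypothetical, not the project's actual code.

```python
def describe_scene(detections):
    """Build a short spoken-style sentence from a list of detections.

    Each detection is a hypothetical dict like
    {"label": "person", "position": "left"}.
    """
    if not detections:
        return "Nothing detected in front of you."
    parts = [f"a {d['label']} on your {d['position']}" for d in detections]
    return "I can see " + ", ".join(parts) + "."


# Example: two detections become one descriptive sentence.
print(describe_scene([
    {"label": "person", "position": "left"},
    {"label": "chair", "position": "right"},
]))
```

In a full system, a text-to-speech engine would read the resulting sentence aloud to the wearer.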

Technologies Used

We used Google's Speech-to-Text API and Intel's development platforms.

Challenges we ran into

Getting all of the APIs to work together in a single file and then training the models, all while learning this technology for the first time and setting it up.

Accomplishments that we're proud of

Getting the APIs working.

What we learned

How to get the APIs to work, how to work with Intel's development platforms, and the capabilities of Intel's technologies.

What's next for Skeleton

Making and training an original model. Integrating external models into the application. Integrating the software with the hardware. Integrating object and pose detection models.
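One way to integrate separate object- and pose-detection models is a simple pipeline where each stage adds its results to a shared record for the frame. The sketch below uses stub detectors standing in for real models; the function names and data shapes are our assumptions, not the project's implementation.

```python
def object_detector(frame):
    # Stub: a real model would return bounding boxes and labels for the frame.
    return [{"label": "person", "box": (10, 10, 50, 120)}]


def pose_estimator(frame, objects):
    # Stub: a real model would estimate a pose for each detected person.
    return [{"label": o["label"], "pose": "standing"}
            for o in objects if o["label"] == "person"]


def run_pipeline(frame):
    # Chain the stages: object detection first, pose estimation on its output.
    objects = object_detector(frame)
    poses = pose_estimator(frame, objects)
    return {"objects": objects, "poses": poses}


result = run_pipeline(frame=None)  # a real frame would be a camera image
print(result["poses"])  # → [{'label': 'person', 'pose': 'standing'}]
```

Keeping each model behind its own function like this would let an original trained model replace a stub later without changing the rest of the pipeline.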
