Starting point

We started by developing an API for smart glasses that would let users get a description of their surroundings using IBM's machine learning services. This proved infeasible because we didn't have access to any smart glasses, so we built a pair of our own, as seen in the images below. We then shifted our focus to making the environment descriptions multilingual, so that a user could take a picture of an object and get a description of it in whichever language they wished to learn. Finally, we extended this to an iPhone app.
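The core flow — classify a photo, then translate the resulting label into the learner's chosen language — can be sketched roughly like this. The function names and the tiny translation table below are hypothetical stand-ins for the IBM service calls, which in the real app happen over the network:

```python
def classify_image(image_path: str) -> str:
    """Hypothetical stand-in for an image-classification service call.

    A real implementation would upload the image to the classification
    API and return the top-scoring label.
    """
    return "coffee mug"


# Hypothetical stand-in for a translation service: maps an (English
# label, target language) pair to a translated description.
TRANSLATIONS = {
    ("coffee mug", "es"): "taza de café",
    ("coffee mug", "fr"): "tasse à café",
}


def describe(image_path: str, target_lang: str) -> str:
    """Classify the image, then render the label in target_lang."""
    label = classify_image(image_path)
    if target_lang == "en":
        return label
    # Fall back to the English label if no translation is available.
    return TRANSLATIONS.get((label, target_lang), label)
```

So a Spanish learner photographing their desk would see, for example, `describe("photo.jpg", "es")` come back as "taza de café" instead of "coffee mug".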
