Inspiration

Throughout history, some of the biggest breakthroughs in technology have come from personalizing it to the consumer. AI is the next emerging technology, so a logical progression is to tailor machine-learned services to the individual user. In that spirit, we developed a proof of concept for a tool that maps an object to its owner. Beyond everyday use, we also thought about the impact such a tool could have for visually or physically impaired individuals, who could benefit from an intelligent robot built around it.

What it does

The software analyzes a photo of an object and identifies its owner. For the proof of concept, we used water bottles as sample objects and three team members as test owners. Ideally, users would be able to register ownership of any object.

How we built it

We built it using the Microsoft Azure Custom Vision API: we fed it images tagged with an individual's name so that it learns to associate the object shown with its owner. We photographed the objects under different lighting conditions and in different environments so that the model learns to associate the owner with the object itself rather than its surroundings. We then built the web app with Angular, JavaScript, HTML, and CSS, and used the Azure Blob storage service to pass images between the Custom Vision API and our web app code (see the sketch below).
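To make the flow concrete, here is a minimal, hypothetical sketch (not our exact code) of how a web client can classify an uploaded photo against a published Custom Vision classification iteration. The endpoint, project ID, iteration name, and prediction key are placeholders, not values from our project.

```typescript
// Placeholders, assuming a published Custom Vision classification iteration.
const ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com";
const PROJECT_ID = "<project-guid>";
const ITERATION = "<published-iteration-name>";
const PREDICTION_KEY = "<prediction-key>";

interface Prediction {
  tagName: string;     // the owner's name used as the training tag
  probability: number; // confidence between 0 and 1
}

// Send an image (e.g. from an <input type="file">) to the Custom Vision
// Prediction API and return the most likely owner.
async function identifyOwner(image: Blob): Promise<Prediction> {
  const url =
    `${ENDPOINT}/customvision/v3.0/Prediction/${PROJECT_ID}` +
    `/classify/iterations/${ITERATION}/image`;

  const response = await fetch(url, {
    method: "POST",
    headers: {
      "Prediction-Key": PREDICTION_KEY,
      "Content-Type": "application/octet-stream",
    },
    body: image,
  });
  if (!response.ok) {
    throw new Error(`Prediction request failed: ${response.status}`);
  }

  const result = await response.json();
  // The API returns a prediction for every tag; keep the highest-probability one.
  return result.predictions.reduce(
    (best: Prediction, p: Prediction) =>
      p.probability > best.probability ? p : best
  );
}
```

In the actual project the image also passes through Blob storage; the sketch skips that step and calls the prediction endpoint directly for brevity.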

Challenges we ran into

To optimize the Custom Vision API's precision and accuracy, we needed to train it on a large volume of data, which was not easy under our time constraints. We also had to figure out how to pass data between the API and the web app implementation. In addition, the sparse documentation for the Custom Vision API and the Blob service caused some minor setbacks.

Accomplishments that we're proud of

Figuring out how to use the tools at our disposal in under 24 hours.

What we learned

How to use Microsoft's Azure Custom Vision API, the Angular framework, RESTful services, and the Azure toolkit.

What's next for AIdentify

With the proper tools, we plan to upgrade the software to accept videos as input, whose frames will be extracted and analyzed by the same logic as regular photos (see the sketch below). As mentioned earlier, we look forward to a future where we can build our AIdentify robot. This opportunity gave us the chance to implement our ideas and build a good basis for forthcoming iterations. Thank you :)
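As a rough illustration of the video idea, the hypothetical sketch below samples one frame per second from a video in the browser and feeds each frame to the same classification call used for photos. The `classify` callback stands in for a function like `identifyOwner` above; nothing here is from our current implementation.

```typescript
type OwnerPrediction = { tagName: string; probability: number };

// Sample roughly one frame per second from a video file, draw each frame to a
// canvas, and classify it with the same prediction call used for still photos.
async function classifyVideo(
  file: File,
  classify: (frame: Blob) => Promise<OwnerPrediction>
): Promise<OwnerPrediction[]> {
  const video = document.createElement("video");
  video.src = URL.createObjectURL(file);
  video.muted = true;
  await new Promise((resolve) => (video.onloadedmetadata = resolve));

  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext("2d")!;

  const results: OwnerPrediction[] = [];
  for (let t = 0; t < video.duration; t += 1) {
    // Attach the handler before seeking; the small offset avoids a no-op seek at 0.
    const seeked = new Promise((resolve) => (video.onseeked = resolve));
    video.currentTime = Math.min(t + 0.1, video.duration);
    await seeked;

    ctx.drawImage(video, 0, 0);
    const frame: Blob = await new Promise((resolve) =>
      canvas.toBlob((b) => resolve(b as Blob), "image/jpeg")
    );
    results.push(await classify(frame)); // same logic as regular photos
  }

  URL.revokeObjectURL(video.src);
  return results;
}
```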
