Inspiration

After looking into the Azure Computer Vision API, one thought crossed our minds: who needs object detection more than those who are visually impaired?

What it does

iSEE verbally describes the main objects that compose an image. The web platform is meant only to showcase the technology; it is not intended to be used directly by the target audience.

How we built it

  • Using Flask, a micro web framework, as the server that connects our endpoints
  • Sending the user's locally uploaded file from the web application to the Azure Computer Vision REST API (a minimal sketch of this flow follows the list)
  • Returning the generated image description to the web page
  • Reading the description aloud using a Voice API
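
A minimal sketch of the upload-and-describe flow, assuming the Azure Computer Vision v3.2 "describe" endpoint; the route name, form field name, and environment variable names are placeholders, not the exact code we shipped:

```python
import os

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical configuration; your Azure resource endpoint and key will differ.
AZURE_ENDPOINT = os.environ.get("AZURE_CV_ENDPOINT", "https://<resource>.cognitiveservices.azure.com")
AZURE_KEY = os.environ.get("AZURE_CV_KEY", "<key>")
DESCRIBE_URL = AZURE_ENDPOINT + "/vision/v3.2/describe"


@app.route("/describe", methods=["POST"])
def describe():
    """Receive a locally uploaded image and return Azure's description of it."""
    uploaded = request.files["image"]  # file chosen by the user in the browser
    headers = {
        "Ocp-Apim-Subscription-Key": AZURE_KEY,
        "Content-Type": "application/octet-stream",  # raw bytes, not a remote URL
    }
    resp = requests.post(DESCRIBE_URL, headers=headers, data=uploaded.read())
    resp.raise_for_status()
    captions = resp.json()["description"]["captions"]
    text = captions[0]["text"] if captions else "No description found."
    # The caption text goes back to the page, where a Voice API reads it aloud.
    return jsonify({"description": text})


if __name__ == "__main__":
    app.run(debug=True)
```

Posting the raw bytes with Content-Type: application/octet-stream is what lets the user upload an image from their own computer instead of pointing the API at a remotely hosted URL.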

Challenges we ran into

  • Getting started
  • Switching from a JavaScript API integration to Python: our JavaScript REST API calls only accepted remotely hosted file URLs, whereas we wanted the user to be able to upload images locally from their own computer. Analyzing local images required Python, which in turn required a server to connect all the endpoints.
  • Learning how Python and Flask worked.
  • Grasping how the client and server connect.

Accomplishments that we're proud of

  • Actually finishing the project despite being the only two members left on the team.
  • We learned a great deal about the available resources, technologies, and languages, and improved our coding practices.

What we learned

  • How to connect the front end and back end: we learned about the GET, POST, and PUT HTTP methods, endpoints, and routes
  • How to use Python
  • API integration
  • Testing endpoints with Postman (a quick script-based equivalent is sketched below)
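
As a rough Postman-style sanity check, the same upload route can be exercised from a short script; the route and form field names follow the hypothetical sketch above:

```python
import requests

# Send a local image to the Flask dev server, just as Postman would.
with open("sample.jpg", "rb") as f:
    resp = requests.post(
        "http://127.0.0.1:5000/describe",
        files={"image": ("sample.jpg", f, "image/jpeg")},
    )

print(resp.status_code)  # expect 200 on success
print(resp.json())       # e.g. {"description": "a dog sitting on a couch"}
```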

What's next for iSEE

  • Mobile development for real-time image analysis
  • Integrated voice commands
  • Interaction with the user
  • Distinguishing object orientation