Inspiration

We have always wanted quick information about things we see on advertisement boards, signs, posts, and in books, such as the meaning of an unfamiliar word. Noting it down and then typing it into a search engine takes considerable time. What if we could reduce that human effort using state-of-the-art technologies?

What it does

You can scan a board, post, or advertisement, or simply take a picture of it. Our app converts the image to text using the Microsoft Computer Vision API. You can then tap any word to get information about it: the app passes the tapped word as a query parameter to the Microsoft Bing Web Search API.
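The tap-to-search step can be sketched as building a Bing Web Search request from the tapped word. The endpoint and query parameter below follow the public Bing Web Search API v7 documentation; the class and method names are our own illustrative choices, and a real call would also attach the Ocp-Apim-Subscription-Key header from Azure.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Minimal sketch: turn a tapped word into a Bing Web Search API v7 request URL.
// BingQueryBuilder and buildSearchUrl are hypothetical names for illustration.
public class BingQueryBuilder {

    static final String ENDPOINT = "https://api.bing.microsoft.com/v7.0/search";

    // URL-encode the tapped word and attach it as the q parameter.
    public static String buildSearchUrl(String tappedWord) {
        String q = URLEncoder.encode(tappedWord, StandardCharsets.UTF_8);
        return ENDPOINT + "?q=" + q;
    }

    public static void main(String[] args) {
        System.out.println(buildSearchUrl("quintessential"));
    }
}
```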

How we built it

We started from the skeleton code in the Computer Vision Android SDK (https://github.com/Microsoft/Cognitive-vision-android). We used the "recognize text" feature of that API to extract text from any image. We attach event listeners to the recognized words, and when a word is tapped we send it as a parameter to the Bing Web Search API.
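The wiring above can be illustrated without the Android UI layer. The "recognize text" call returns lines of text; we break each line into individual words so each word can carry its own click listener. This is a simplified, non-Android sketch, and the names are ours rather than the SDK's.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch: split OCR output lines into individual tappable tokens.
// In the app, each token would back a clickable span with its own listener.
public class RecognizedTextModel {

    // Split recognized lines on whitespace and strip punctuation stuck to
    // word boundaries, so "SALE!" becomes the searchable token "SALE".
    public static List<String> tappableWords(List<String> ocrLines) {
        List<String> words = new ArrayList<>();
        for (String line : ocrLines) {
            for (String token : line.split("\\s+")) {
                String cleaned = token.replaceAll("^\\p{Punct}+|\\p{Punct}+$", "");
                if (!cleaned.isEmpty()) {
                    words.add(cleaned);
                }
            }
        }
        return words;
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList("GARAGE SALE!", "Saturday 9am");
        System.out.println(tappableWords(lines));
    }
}
```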

Challenges we ran into

The web search results contain a lot of noise (unwanted information), which we had to parse in order to extract the essential information.
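One way to picture that trimming step: the Bing response JSON carries many fields we never display, so we keep only the result snippets. A real app would use a proper JSON library; the minimal string scan below is purely illustrative and assumes snippets contain no escaped quotes.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: pull just the "snippet" strings out of a Bing Web
// Search response body, discarding every other field as noise.
public class SnippetExtractor {

    public static List<String> extractSnippets(String json) {
        List<String> snippets = new ArrayList<>();
        String key = "\"snippet\":\"";
        int from = 0;
        while (true) {
            int start = json.indexOf(key, from);
            if (start < 0) break;               // no more snippet fields
            start += key.length();
            int end = json.indexOf('"', start); // assumes no escaped quotes
            if (end < 0) break;
            snippets.add(json.substring(start, end));
            from = end + 1;
        }
        return snippets;
    }

    public static void main(String[] args) {
        String sample = "{\"webPages\":{\"value\":[{\"name\":\"Quintessential\","
                + "\"snippet\":\"Representing the most perfect example.\"}]}}";
        System.out.println(extractSnippets(sample));
    }
}
```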

Accomplishments that we're proud of

We built an app that anyone could use in everyday life. A reader could use it to get the meaning of a word instantly, without interrupting the flow of reading to look it up in a dictionary.

What we learned

The power of teamwork and knowledge sharing, along with hands-on experience with the Computer Vision API and the Bing Web Search API.

What's next for Bing Snap N Search

Optimize the retrieved information and improve the user experience. Extend the app so users can report information to the relevant authorities.

Built With

  • android
  • bing-web-search-api
  • java
  • microsoft-computer-vision-api