I wanted to build a tool that lets people learn about the world in more than one way, instead of merely googling the names of the things they see or guessing at what they could be.
What it does
Cloud Cipher Vision builds on one of my previous projects and is a complete revamp of it. It combines math, text, and image recognition with some useful social features. It lets people use their mobile phones to enter all kinds of input (whether handwriting or pictures) and returns credible, curated information using the Wolfram Alpha API. It also forms a community where users can discuss the topics they're curious about.
How I built it
I built the handwriting recognition using the MyScript API (adding new features like undo and redo) and coded the backend with REST API calls to several services, most notably Wolfram Alpha. The app receives the information as raw JSON and converts it into a neat table full of information. For image recognition, the app uses the cloud-based CloudSight API and then performs a Google Search web query, returning everything from shopping results to articles.
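To give a sense of the JSON-to-table step, here is a minimal sketch of flattening a Wolfram Alpha result into rows. The field names (`queryresult`, `pods`, `subpods`, `plaintext`) mirror the Wolfram Alpha v2 `/query` API with `output=json`; the helper name and the sample payload are illustrative, not the app's actual code.

```python
# Flatten a Wolfram Alpha-style JSON result into (pod title, text) rows
# suitable for rendering as a simple two-column table.

def pods_to_rows(payload):
    """Collect (title, plaintext) pairs from every pod's subpods."""
    rows = []
    for pod in payload.get("queryresult", {}).get("pods", []):
        for subpod in pod.get("subpods", []):
            text = subpod.get("plaintext", "")
            if text:  # skip image-only subpods with no plaintext
                rows.append((pod.get("title", ""), text))
    return rows

# Illustrative sample in the shape the API returns.
sample = {
    "queryresult": {
        "success": True,
        "pods": [
            {"title": "Input", "subpods": [{"plaintext": "2+2"}]},
            {"title": "Result", "subpods": [{"plaintext": "4"}]},
        ],
    }
}

print(pods_to_rows(sample))  # [('Input', '2+2'), ('Result', '4')]
```

In the real app the payload would come from an HTTP call to the API with an app ID; the flattening logic is the same either way.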
Challenges I ran into
It was difficult to sort through all the data that Wolfram Alpha and the CloudSight API returned, so I had to design algorithms to pick out the relevant information to display to the user.
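The kind of relevance filtering described above can be sketched as a simple keyword-overlap ranking: score each returned snippet by how many words it shares with the query and keep the top few. This is an illustrative stand-in, not the app's actual algorithm; all names and the scoring rule are assumptions.

```python
# Rank result snippets by word overlap with the query; keep the top k.

def top_relevant(query, snippets, k=2):
    """Return the k snippets sharing the most words with the query."""
    query_words = set(query.lower().split())

    def score(snippet):
        # Count distinct query words that also appear in the snippet.
        return len(query_words & set(snippet.lower().split()))

    return sorted(snippets, key=score, reverse=True)[:k]

results = [
    "History of the Eiffel Tower in Paris",
    "Unrelated shopping result",
    "Eiffel Tower height and construction facts",
]
print(top_relevant("eiffel tower facts", results))
```

A production version would also weight by result source and position, but even this crude overlap score is enough to push off-topic entries to the bottom.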
Accomplishments that I'm proud of
I'm proud that I was able to bring so many APIs together into one game-changing app and, on top of that, come up with my own algorithms to sort through large amounts of data.
What's Next for Cloud Cipher Vision
I hope to extend Cloud Cipher Vision with a better design/UI, more varieties of input, OCR, and richer results from image recognition, making it even easier for users to discover and learn about the world around them.