We realized that a very prominent way clubs advertise on our school campus is through fliers and posters, either posted physically around campus or handed out in person. Simply taking pictures of them is unproductive, since we often forget to add the events to our schedules, and collecting paper flyers is wasteful and unsustainable. Wouldn't it be great if our phones had a feature to instantly extract the information on these flyers and add it to our calendars?
What it does
It's a win-win for productivity! Our Android app lets the user take a photo of a flyer, poster, etc. and puts the information about the event into the phone's calendar, so that the user can organize their life on the fly. The user can easily fill in any missing information or add to the event's notes and description before it is added to the calendar.
How we built it
We used the Amazon Web Services Rekognition API to extract the relevant text about the event from the poster, such as the title, description, date, and time. We used Android Studio to build the Java application, and Python to build the server connections and access the back-end APIs.
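As a rough sketch of this step (our assumed flow, not the exact hackathon code): Rekognition's `detect_text` returns a list of `TextDetections`, each with the detected string, a `LINE`/`WORD` type, and a bounding box. With real AWS credentials you would obtain the response via `boto3.client("rekognition").detect_text(Image={"Bytes": image_bytes})`; here we parse a response of the same shape, using bounding-box height as a proxy for visual prominence.

```python
def extract_lines(response):
    """Return (text, height) pairs for each detected LINE, tallest first.

    Bounding-box height is a rough proxy for visual prominence on the
    flyer, so the tallest line is a reasonable title candidate.
    """
    lines = [
        (d["DetectedText"], d["Geometry"]["BoundingBox"]["Height"])
        for d in response["TextDetections"]
        if d["Type"] == "LINE"
    ]
    return sorted(lines, key=lambda pair: pair[1], reverse=True)

# Sample response in the shape Rekognition's detect_text returns
# (values are illustrative, not from a real API call).
sample_response = {
    "TextDetections": [
        {"DetectedText": "Chess Club Kickoff", "Type": "LINE",
         "Geometry": {"BoundingBox": {"Height": 0.12}}},
        {"DetectedText": "Sept 14, 6:00 PM", "Type": "LINE",
         "Geometry": {"BoundingBox": {"Height": 0.05}}},
        {"DetectedText": "Chess", "Type": "WORD",
         "Geometry": {"BoundingBox": {"Height": 0.12}}},
    ]
}

lines = extract_lines(sample_response)
# The tallest LINE ("Chess Club Kickoff") comes first.
```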
Challenges we ran into
While the Rekognition API returned detected text to our app, our service then had to parse that text into the proper components of a calendar event, using text size and thorough regular-expression checks, which took us time to figure out and complete. Successfully standing up a server to host our back-end service was another challenging hurdle we had to clear to build our project.
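The regular-expression side of that parsing could look something like this minimal sketch (our illustration, with assumed patterns): lines matching a date or time pattern become those event fields, and everything else falls into the description.

```python
import re

# Illustrative patterns; a production version would cover many more forms.
DATE_RE = re.compile(
    r"\b(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*\.?\s+\d{1,2}\b",
    re.IGNORECASE,
)
TIME_RE = re.compile(r"\b\d{1,2}(?::\d{2})?\s*(?:AM|PM)\b", re.IGNORECASE)

def classify(lines):
    """Split detected text lines into date, time, and leftover description."""
    event = {"date": None, "time": None, "description": []}
    for text in lines:
        date_m = DATE_RE.search(text)
        time_m = TIME_RE.search(text)
        if date_m and event["date"] is None:
            event["date"] = date_m.group(0)
        if time_m and event["time"] is None:
            event["time"] = time_m.group(0)
        if not (date_m or time_m):
            event["description"].append(text)
    return event

event = classify(["Chess Club Kickoff", "Sept 14, 6:00 PM", "Free pizza!"])
# → {"date": "Sept 14", "time": "6:00 PM",
#    "description": ["Chess Club Kickoff", "Free pizza!"]}
```

In our real service, text size from the Rekognition bounding boxes would additionally distinguish the title from description lines.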
Accomplishments that we're proud of
We are very proud that we were able to successfully parse the extracted text into different event fields, and we were also glad to figure out how to standardize the many human-styled date and time formats into one the phone can recognize. We were also able to get our server up on EC2 after a few hours, which made our testing and development run a lot more smoothly, so we're very proud of that!
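The date-standardization idea can be sketched like so (an assumption of how such normalization might work, not our exact code; assumes an English locale and that flyers usually omit the year):

```python
import re
from datetime import datetime

def normalize_date(raw, year=2019):
    """Turn a human-styled flyer date like 'Sept. 14' into ISO YYYY-MM-DD.

    Flyers usually omit the year, so we attach an assumed one after parsing.
    Returns None when no known format matches.
    """
    cleaned = raw.replace(",", "").replace(".", "").strip()
    # Python's %b only accepts 'Sep', but flyers often write 'Sept'.
    cleaned = re.sub(r"(?i)\bsept\b", "Sep", cleaned)
    for fmt in ("%B %d", "%b %d", "%m/%d"):  # 'September 14', 'Sep 14', '9/14'
        try:
            parsed = datetime.strptime(cleaned, fmt)
            return parsed.replace(year=year).date().isoformat()
        except ValueError:
            continue
    return None

iso = normalize_date("Sept. 14")
# → "2019-09-14"
```

Each accepted spelling funnels into the same ISO string, which is what a calendar API ultimately needs.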
What we learned
We learned a lot about the full-stack development process that goes into planning an application, and about building our own service on top of the AWS APIs. The most valuable skills we picked up were segmenting the problem into different layers and working with each other to learn Android app development, HTTP requests and APIs, and using pre-built AI services to accomplish simple yet very useful tasks.
What's next for Calyzer
Our next step is to finish standardizing time/date information and to polish our recognition of the title and relevant description using more parameters we could explore. We also plan to implement an option to pull an image from the camera roll instead of taking a picture. It would also be valuable to train our model on how humans assign importance to design and different text elements when creating useful, eye-catching infographics, so that we can segment our descriptions into more fields, such as location, RSVP, and other elements.
Our hope is that in the future, rather than being a standalone app, our service is built into every phone, so that any photo taken of a flyer or poster would pull up our form and add the event to the calendar intelligently, making people's lives faster, easier, and more organized.