Inspiration

Late nights on problem sets, combined with frantic riffling through pages of text, spawned the thought: "What if you could get the most relevant information from your textbook at the click of a button?" Sumrize was born at Hacktech 2017.

What it does

Sumrize uses computer vision to extract the text from a user-supplied image. It then applies text analytics to condense that text into a crisp, easy-to-digest summary.

How we built it

Sumrize is written in Swift and uses Microsoft Azure's Optical Character Recognition API to convert the input image into JSON data. The JSON is parsed into plain text, which is then passed to the Linguakit API to generate the summary.
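
To make that pipeline concrete, here is a minimal sketch of the OCR step in Swift. The endpoint URL, API version, subscription key, and the regions/lines/words response fields are illustrative assumptions about Azure's OCR API rather than Sumrize's actual code; the resulting string is what would then be handed to the Linguakit summarizer.

```swift
import Foundation

struct OCRClient {
    // Placeholder endpoint and key; the real region, API version, and key differ.
    let endpoint = URL(string: "https://westus.api.cognitive.microsoft.com/vision/v2.0/ocr")!
    let subscriptionKey = "YOUR_AZURE_KEY"

    /// Sends raw image bytes to the OCR endpoint and returns the extracted text.
    func extractText(from imageData: Data, completion: @escaping (String?) -> Void) {
        var request = URLRequest(url: endpoint)
        request.httpMethod = "POST"
        request.setValue("application/octet-stream", forHTTPHeaderField: "Content-Type")
        request.setValue(subscriptionKey, forHTTPHeaderField: "Ocp-Apim-Subscription-Key")
        request.httpBody = imageData

        URLSession.shared.dataTask(with: request) { data, _, _ in
            guard let data = data,
                  let object = try? JSONSerialization.jsonObject(with: data),
                  let json = object as? [String: Any],
                  let regions = json["regions"] as? [[String: Any]] else {
                completion(nil)
                return
            }
            // Flatten the OCR response: regions -> lines -> words -> "text" values.
            let lines = regions.compactMap { $0["lines"] as? [[String: Any]] }.flatMap { $0 }
            let text = lines.map { line -> String in
                let words = line["words"] as? [[String: Any]] ?? []
                return words.compactMap { $0["text"] as? String }.joined(separator: " ")
            }.joined(separator: "\n")
            completion(text) // this string is what gets sent on for summarization
        }.resume()
    }
}
```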

Challenges we ran into

Everyone on our team is a beginner; none of us knew much about Swift or iOS app development. Our unfamiliarity with Swift led to plenty of compiler errors, and identifying helpful Swift libraries and functions for API integration was also a challenge.

Accomplishments that we're proud of

We built familiarity with Swift, integrating APIs in code, and designing a user interface.

What's next for Sumrize

We plan to add functionality that lets users choose how in-depth they would like their summary to be, expressed as a percentage of the original text's length (see the sketch below). We also plan to add an option to upload images from a URL.
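
A rough sketch of how the percentage option could translate into a summary size. The function name and the sentence-splitting heuristic are hypothetical, not existing Sumrize code.

```swift
import Foundation

/// Given the original text and a user-chosen percentage, estimate how many
/// sentences the summary should keep.
func targetSentenceCount(for text: String, percentage: Double) -> Int {
    let sentences = text
        .components(separatedBy: CharacterSet(charactersIn: ".!?"))
        .map { $0.trimmingCharacters(in: .whitespacesAndNewlines) }
        .filter { !$0.isEmpty }
    // Keep at least one sentence, scaled by the requested percentage.
    return max(1, Int((Double(sentences.count) * percentage / 100.0).rounded()))
}

// Example: a 40-sentence excerpt at 25% would yield a 10-sentence summary.
```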

Built With

Swift, Microsoft Azure, Linguakit