Inspiration

There's a lot of dense text on the internet, which makes some websites tough to read — especially for users struggling with a new language or with learning differences.

With the goal of making the internet more comprehensible for everyone, we built an extension that applies peer-reviewed, widely used strategies for improving reading comprehension.

We colour different words based on the part of speech, describe images using captions and tags, summarise the page and give users the option of having the page read to them.

We hope that our application will make the browsing experience better for everyone.

What it does

cmpr is a browser extension that makes any webpage more accessible to those who may have difficulty reading standard text.

cmpr colors the text on webpages based on the part of speech of each word. Nouns, verbs, and adjectives are all colored differently, making it easier to comprehend a sentence as a user reads it.

Then cmpr captions all of the images on a page with tags and a description to make pictures and drawings accessible.

The extension also summarizes text on a webpage into a few easy-to-read sentences. We believe that by having a user read a summary prior to reading a webpage, they're more likely to comprehend it.

Finally, all the text on the page can be read aloud to the user at the click of a button.

We believe that this extension will make browsing the web more comprehensible and enjoyable to a wider array of people. We're excited to see the impact our product makes!

How we built it

The browser extension primarily communicates with our Python backend hosted on Amazon's Elastic Compute Cloud. When the user enables tagging, the server parses the text and tags each word with its part of speech using Microsoft Cognitive Services' Linguistics Analysis API. The text is then coloured based on what type of word it is.
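The colouring step can be sketched roughly as follows. This is a minimal illustration, not our production code: it assumes Penn Treebank-style tags and a hypothetical colour palette, and takes pre-tagged (word, tag) pairs as input rather than calling the Linguistics Analysis API.

```python
import html

# Hypothetical palette: one colour per coarse part-of-speech class.
# (Tag names follow the Penn Treebank convention; the actual API
# response format differs.)
POS_COLOURS = {
    "NN": "#1f77b4",   # nouns  -> blue
    "VB": "#d62728",   # verbs  -> red
    "JJ": "#2ca02c",   # adjectives -> green
}

def colour_span(word, tag):
    """Wrap a word in a <span> styled by its part of speech."""
    # Coarse prefix groups fine-grained tags: NNS -> NN, VBD -> VB, ...
    colour = POS_COLOURS.get(tag[:2])
    if colour is None:
        return html.escape(word)  # leave other word classes unstyled
    return f'<span style="color:{colour}">{html.escape(word)}</span>'

def colour_sentence(tagged):
    """tagged: list of (word, tag) pairs from any POS tagger."""
    return " ".join(colour_span(w, t) for w, t in tagged)
```

In the real extension the styled spans are injected back into the page's DOM rather than returned as a string.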

For images in the article, we generate a caption for each image on the page by merging results from the Microsoft Cognitive Services Computer Vision API and Clarifai.

A summary is generated for each page with our own graph-based, extractive summarization algorithm. We apply PageRank to a complete graph over the page's sentences, with edge weights given by the cosine similarity of the tf-idf vectorizations of each pair of sentences. The result is the three sentences most central to the entire document.
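The pipeline above can be sketched in pure Python. This is a simplified version under stated assumptions: the tokenizer, idf smoothing, and sentence splitter here are stand-ins, and our production implementation differs in detail.

```python
import math
import re
from collections import Counter

def tfidf_vectors(sentences):
    """Sparse tf-idf vector (dict) per sentence, with smoothed idf."""
    docs = [re.findall(r"[a-z']+", s.lower()) for s in sentences]
    n = len(docs)
    df = Counter(t for toks in docs for t in set(toks))
    vecs = []
    for toks in docs:
        tf = Counter(toks)
        vecs.append({
            t: (tf[t] / len(toks)) * (math.log((1 + n) / (1 + df[t])) + 1)
            for t in tf
        } if toks else {})
    return vecs

def cosine(a, b):
    dot = sum(v * b.get(t, 0.0) for t, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def pagerank(sim, d=0.85, iters=50):
    """Power iteration over a weighted similarity matrix."""
    n = len(sim)
    scores = [1.0 / n] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            rank = (1 - d) / n
            for j in range(n):
                if j == i:
                    continue
                out = sum(sim[j][k] for k in range(n) if k != j)
                if out:
                    rank += d * scores[j] * sim[j][i] / out
            new.append(rank)
        scores = new
    return scores

def summarize(text, k=3):
    """Return the k most central sentences, in document order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    if len(sentences) <= k:
        return sentences
    vecs = tfidf_vectors(sentences)
    sim = [[cosine(vecs[i], vecs[j]) for j in range(len(vecs))]
           for i in range(len(vecs))]
    scores = pagerank(sim)
    top = sorted(sorted(range(len(sentences)),
                        key=lambda i: scores[i], reverse=True)[:k])
    return [sentences[i] for i in top]
```

Returning the selected sentences in document order (the final `sorted`) keeps the summary readable even though they were ranked by centrality.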

The Chrome Text-To-Speech engine is used to enable users to listen to the article, instead of having to actually read the page themselves.

The reading speed is calculated for each page where tagging is enabled, and this data is saved locally. This allows detailed reports of reading speeds to be generated, including how a learner may have improved over time.
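As a sketch, the per-page speed is just words per minute, and the improvement report compares WPM across stored sessions. The session format below is a hypothetical stand-in for what the extension actually keeps in local storage.

```python
def words_per_minute(word_count, seconds_spent):
    """Reading speed for a single page visit."""
    return word_count / (seconds_spent / 60.0)

def speed_trend(sessions):
    """sessions: list of (date, word_count, seconds_spent) tuples,
    oldest first. Returns per-session WPM and net improvement."""
    wpms = [words_per_minute(w, s) for _, w, s in sessions]
    improvement = wpms[-1] - wpms[0] if len(wpms) > 1 else 0.0
    return wpms, improvement
```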

Challenges we ran into

Making all of these parts work together was quite a challenge. The project originally started as just a text coloring service, but we thought of more ideas that would make the web more accessible as we hacked. Because some features could only be implemented on the backend and others only on the frontend, the hack became quite complex, but we are very proud of the result.

Accomplishments that we're proud of

We think the coolest feature we implemented was the text coloring. To achieve this, we had to parse the HTML DOM and apply styles to all the visible text. Along the same lines, we also had to find all images and edit the DOM to add captions and tags.

What we learned

We learned a lot about the various resources we used; in fact, it was our first time using most of these APIs and frameworks! Each time, we were pleasantly surprised by how easy and hassle-free they were to use. Being exposed to so many different services made this hackathon a great learning experience for us.

What's next for cmpr

  • Colourblind modes for part-of-speech tagging
  • More advanced data visualization