Inspiration

1 in 5 children in the United States has a learning disability or attention disorder such as dyslexia, ADHD, dyscalculia, dysgraphia, Auditory Processing Disorder (APD), or Language Processing Disorder (LPD). Individuals with dyslexia face significant challenges in educational settings, team presentations, and meetings where material is presented solely as text, numbers, and/or graphs. (https://ldaamerica.org/types-of-learning-disabilities/)

Some of the specific challenges faced by individuals on the spectrum of learning disabilities are:

  • Difficulty reading standard text (dyslexia) and context blindness, i.e. working out which sense of a word applies in a given scenario
  • Trouble focusing on dense, purely textual material without visualizations.

This is where readAR comes in. Through an array of services built into AR, our project aims to enhance the learning and perceptual experience for individuals with these conditions.

What it does

readAR's main purpose is to make it easier for these individuals to learn, whether that means understanding what "work" actually refers to in a sentence or making sense of dense physics slides. We use AR to re-render the world in a more dyslexia-friendly font (like Dyslexie) and give users the option to add machine-learning-inferred context to key phrases in a sentence. This lets us pull in visualizations and images that engage users and help them better understand the concept. We also run speech-to-text to transcribe the entire lecture or meeting (especially helpful for individuals with auditory processing disabilities) for future reference. This in turn enables intelligent quiz generation to assess the user's weak points in their understanding of a specific topic, followed by customized resource recommendations (pulled using Bing's Custom Search API); a sketch of that lookup appears below.
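As a rough illustration of the resource-recommendation step, here is a minimal Swift sketch that queries Bing Custom Search for links on a topic the quiz flagged as weak. The subscription key, custom configuration ID, and exact endpoint are placeholders and depend on how the search instance is provisioned; this is not the app's actual code.

```swift
import Foundation

// Minimal sketch: fetch study-resource links for a weak topic via Bing Custom Search.
// YOUR_SUBSCRIPTION_KEY and YOUR_CUSTOM_CONFIG_ID are placeholders; the endpoint may
// differ depending on the Bing Custom Search tier/version provisioned.
func fetchResources(for topic: String, completion: @escaping ([String]) -> Void) {
    var components = URLComponents(string: "https://api.bing.microsoft.com/v7.0/custom/search")!
    components.queryItems = [
        URLQueryItem(name: "q", value: topic),
        URLQueryItem(name: "customconfig", value: "YOUR_CUSTOM_CONFIG_ID"),
        URLQueryItem(name: "count", value: "5")
    ]
    var request = URLRequest(url: components.url!)
    request.setValue("YOUR_SUBSCRIPTION_KEY", forHTTPHeaderField: "Ocp-Apim-Subscription-Key")

    URLSession.shared.dataTask(with: request) { data, _, _ in
        // Pull the result URLs out of the "webPages" section of the response.
        guard
            let data = data,
            let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
            let webPages = json["webPages"] as? [String: Any],
            let results = webPages["value"] as? [[String: Any]]
        else { return completion([]) }
        completion(results.compactMap { $0["url"] as? String })
    }.resume()
}

// Example: recommend resources after the quiz flags the work-energy theorem as a weak point.
fetchResources(for: "work-energy theorem explained") { urls in
    print(urls)
}
```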

How we built it

We built the AR components and mobile app using Swift and ARKit. We also managed to re-implement a pretty neat paper on word-sense disambiguation (Wiedemann, G., Remus, S., Chawla, A., Biemann, C. 2019) and serve it on the cloud; this is what allows us to provide context-specific definitions! We used Azure's APIs for near-realtime OCR, intelligent search for resources, and image suggestions. To top it all off, (almost) everything was Dockerized and deployed through Oracle Cloud Container Service to allow for some pretty nice scalability.
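To give a flavor of the AR re-rendering, here is a minimal SceneKit/ARKit sketch that places a recognized string back into the scene in a dyslexia-friendly font. It assumes the font (here "OpenDyslexic" as a stand-in for Dyslexie) is bundled with the app and that the OCR step has already produced the text plus a world position (e.g. from a hit test); it is a simplification of the actual app.

```swift
import ARKit
import SceneKit
import UIKit

// Minimal sketch: re-render a recognized string as a floating label anchored in the
// AR scene, using a dyslexia-friendly font if it is bundled with the app.
func addReadableLabel(_ text: String, at position: SCNVector3, in sceneView: ARSCNView) {
    let geometry = SCNText(string: text, extrusionDepth: 0.1)
    geometry.font = UIFont(name: "OpenDyslexic", size: 10) ?? UIFont.systemFont(ofSize: 10)
    geometry.firstMaterial?.diffuse.contents = UIColor.black
    geometry.flatness = 0.2  // coarser tessellation keeps the text geometry cheap to render

    let node = SCNNode(geometry: geometry)
    node.scale = SCNVector3(0.002, 0.002, 0.002)   // SCNText units are large; shrink to ~cm scale
    node.position = position
    node.constraints = [SCNBillboardConstraint()]  // keep the label facing the camera
    sceneView.scene.rootNode.addChildNode(node)
}
```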

Challenges we ran into

We all went into the hackathon knowing that this idea would be crazy -- a moonshot even. And that's what made it so challenging and fun. Some of the most significant challenges we faced were:

  • Finding a good way to render all the additional information without overwhelming the user
  • Training and serving our own word-sense disambiguation model at this level of accuracy (there were almost zero resources on this)
  • Figuring out how to use AR for the first time

Accomplishments that we're proud of

One problem we noticed in existing systems was how poorly they handled word-sense disambiguation (WSD). We recognized this as an important issue for words that take on domain-specific meanings, e.g. "work" in the formal physics sense versus its everyday meaning. This is evident in the examples below:

(Example screenshots: context-specific definitions for "work" in a physics sentence vs. an everyday sentence.)

We tackled this by fine-tuning our own BERT model to accurately disambiguate word senses, reaching 77% accuracy (about 5% off the state of the art) after roughly 1.5 hours of training on our laptops.
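Once the model is served on the cloud, the app only needs a thin client to query it. The sketch below assumes a hypothetical /disambiguate endpoint, host, and JSON shape; the real service interface may differ.

```swift
import Foundation

// Hypothetical request/response shapes for the cloud-hosted WSD service;
// the endpoint, host, and field names here are placeholders.
struct WSDRequest: Codable { let sentence: String; let targetWord: String }
struct WSDResponse: Codable { let sense: String; let definition: String }

func disambiguate(_ word: String, in sentence: String,
                  completion: @escaping (WSDResponse?) -> Void) {
    var request = URLRequest(url: URL(string: "https://example.com/disambiguate")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONEncoder().encode(WSDRequest(sentence: sentence, targetWord: word))

    URLSession.shared.dataTask(with: request) { data, _, _ in
        // Decode the predicted sense and definition, or hand back nil on failure.
        let response = data.flatMap { try? JSONDecoder().decode(WSDResponse.self, from: $0) }
        completion(response)
    }.resume()
}

// Example: "work" in a physics sentence should come back with the physics sense.
disambiguate("work", in: "The work done by the force equals the change in kinetic energy.") {
    print($0?.definition ?? "no definition found")
}
```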

What we learned

We learned to use and integrate a pretty large array of different services, ranging from our own ML models for WSD and question generation to Azure Cognitive Services, Oracle Cloud for deployment, and Swift for our AR app. It was quite the journey putting all these jigsaw pieces together.

What's next for readAR

We believe we've stumbled across an idea with significant potential to be taken forward and make a tangible impact. We hope to continue refining the project and to have it beta-tested by actual users with these disabilities.
