Inspiration

For most of us, seeing comes as easily as breathing. You simply open your eyes, and the world is there, awash in millions of colors and replete with objects that you can not only look at, but understand. But for millions of people around the world, things are not so easy. pARtial is designed to help us remember that, by giving users some sense of what it is like to live with partial visual deficits, as tens of millions do each day. While there are visual deficits we are all familiar with (e.g., short-sightedness), pARtial showcases widespread deficits most people may not even realize exist. One such issue is prosopagnosia -- aka "face blindness" -- which renders people unable to recognize even family members' faces, and affects up to one in 50 people worldwide. As you can imagine, this can have socially devastating consequences: failing to recognize a friend or work colleague passing by can be interpreted as an act of unprovoked rudeness. But if everyone knew the sheer prevalence of face blindness, for example, then perhaps people with the condition would not have to suffer socially or professionally. With pARtial, we hope to raise awareness of these issues and help create a more visually empathetic world.

What it does

We have built a series of AR filters meant to give the user some sense of what it may be like to live with a number of different visual problems. Our filters include depictions of akinetopsia (motion blindness), prosopagnosia (face blindness), dyslexia, and deuteranopia (the most common form of color blindness). These filters let you use your phone as a window onto the world, exploring the environment through (somewhat artistic) renditions of these visual disorders. Research has shown that using VR/AR to help caregivers understand the world of patients with dementia and Alzheimer's disease greatly improves quality of care and the patient experience.
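Under the hood, color-blindness simulation is commonly implemented as a 3x3 matrix applied to each pixel's RGB value. As a rough illustration of the idea behind our deuteranopia filter (our actual version is wired up in the Patch Editor, not written as code), here is a minimal Python sketch using the published Machado et al. (2009) full-severity deuteranopia matrix; for brevity it skips the sRGB-to-linear conversion a faithful simulation would include:

```python
# Minimal sketch: simulate deuteranopia for a single RGB pixel.
# Matrix: Machado et al. (2009), full-severity deuteranopia.
# NOTE: a faithful version would convert sRGB -> linear RGB first;
# gamma handling is omitted here for brevity.
DEUTERANOPIA = [
    [ 0.367322, 0.860646, -0.227968],
    [ 0.280085, 0.672501,  0.047413],
    [-0.011820, 0.042940,  0.968881],
]

def simulate_deuteranopia(rgb):
    """Map an (r, g, b) triple in [0, 1] to its deuteranope rendition."""
    return tuple(
        min(1.0, max(0.0, sum(m * c for m, c in zip(row, rgb))))
        for row in DEUTERANOPIA
    )

print(simulate_deuteranopia((1.0, 0.0, 0.0)))  # pure red -> dark olive
```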

We hope our service is used by educators, caregivers, and colleagues of individuals facing these challenges, so that they can develop a more comprehensive understanding and empathy.

How we built it

We used Facebook's Spark AR platform to create these filters and load them onto our phones.

Challenges we ran into

We started out aiming to pipe our phones' video feeds through cloud machine learning APIs (or even to use the JavaScript Textract library locally), but the current implementation of Spark AR does not allow for this because of privacy concerns. This would have been most useful for our dyslexia filter, with which we had hoped to extract text via frame-based OCR. That would have allowed us to transform the original text into dyslexia-friendly fonts (e.g., Dyslexie) as well as provide real-time audio feedback to enhance a dyslexic user's educational experience. After extensive discussions with Facebook engineers and mentors, and hours of attempted clever work-arounds (including ways to run OCR models locally), we realized that while real-time OCR was out of reach, a service that spreads awareness of these challenges through simulations was still feasible.
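For the curious, had external calls been permitted, the dyslexia pipeline we had in mind would have looked roughly like the sketch below. pytesseract is used purely as an illustrative stand-in for whichever OCR backend we would have chosen; Spark AR exposes no frame-export hook, so the surrounding capture and overlay steps remain hypothetical:

```python
# Hypothetical sketch of the OCR pipeline we originally planned.
# pytesseract stands in for a cloud OCR API; Spark AR offers no way
# to export camera frames, so this never ran inside a filter.
from PIL import Image
import pytesseract

def extract_words(frame: Image.Image) -> list[dict]:
    """Extract words and their bounding boxes from one camera frame."""
    data = pytesseract.image_to_data(frame, output_type=pytesseract.Output.DICT)
    words = []
    for text, left, top, w, h in zip(
        data["text"], data["left"], data["top"], data["width"], data["height"]
    ):
        if text.strip():
            # Each word would then be re-rendered in a dyslexia-friendly
            # font (e.g., Dyslexie) at its original position, and could
            # also be passed to a text-to-speech engine for audio feedback.
            words.append({"text": text, "box": (left, top, w, h)})
    return words
```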

Accomplishments that we're proud of

Target tracking in the dyslexia filter was a particular source of pride. Because we were not allowed to leverage external ML models for the text recognition and manipulation, we instead used Spark AR's target tracking to randomly permute, distort, and rotate characters and digits inside a given target (e.g., a whiteboard, projector slide, or blackboard).
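For intuition, the effect is in the same spirit as the well-known dyslexia-simulation trick of shuffling a word's interior letters. Here is a minimal sketch of that idea in Python (our filter does this visually with Patch Editor logic, not with code like this):

```python
import random

def scramble_word(word: str) -> str:
    """Shuffle a word's interior letters, keeping the first and last
    in place, to mimic the jumbled-text feel of reading with dyslexia."""
    if len(word) <= 3:
        return word
    interior = list(word[1:-1])
    random.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]

def scramble_text(text: str) -> str:
    return " ".join(scramble_word(w) for w in text.split())

print(scramble_text("Reading with dyslexia can feel something like this"))
```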

What we learned

We learned how to create novel filters in Spark AR, and how to integrate custom logic into them using the Patch Editor.

What's next for pARtial

We have a large list of other visual and neurological problems we aim to build filter simulations for, including common eye-related issues (e.g., other forms of color blindness) and rarer neurological conditions (e.g., metamorphopsia). We aim to eventually transform pARtial into an accessibility service that helps alleviate such issues in educational and professional settings, just as much as it is one that raises awareness of them. This goal will become much more viable with later iterations of Spark AR that allow external API calls, letting us leverage the full range of cloud machine learning capabilities available to developers (including object and face recognition, which may be of great utility in the case of visual agnosia).
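As one concrete example of what external API access would unlock, a future prosopagnosia-assist mode could send frames to a cloud vision service and label the faces it finds. The sketch below uses Google Cloud Vision purely as a stand-in -- it was not part of our build, and true face recognition (knowing who each face belongs to) would need an additional matching step:

```python
# Hypothetical sketch: cloud-backed face detection, as a building
# block for a prosopagnosia-assist mode. Google Cloud Vision is one
# candidate backend; identifying *who* each face is would require a
# separate face-matching step not shown here.
from google.cloud import vision

def detect_faces(frame_bytes: bytes):
    client = vision.ImageAnnotatorClient()
    response = client.face_detection(image=vision.Image(content=frame_bytes))
    # One bounding polygon per detected face, ready to be turned into
    # an on-screen label by the filter.
    return [
        [(v.x, v.y) for v in face.bounding_poly.vertices]
        for face in response.face_annotations
    ]
```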

Built With

Spark AR