Inspiration

According to the National Center for Health Statistics, 26 million American adults experience significant vision loss, creating challenges with depth perception, low-light scenarios, and proprioception. CleARsight for Magic Leap is a spatial computing accessibility application designed to improve the daily lives of individuals with low vision.

What it does

Using Magic Leap’s world mapping, haptic feedback, spatial audio, and holographic visual overlays, CleARsight illuminates your environment, outlining object edges in high-contrast colours and highlighting horizontal planes with a vivid pattern.

Pulling the trigger of the 6DoF controller activates the ‘virtual cane’ environmental awareness system. The system measures the distance between the controller and the spatial mesh generated by Magic Leap’s world reconstruction system, and haptic feedback increases in intensity as obstacles get closer. Like echolocation in nature, this allows the CleARsight wearer to gain a spatial understanding of their immediate environment without relying exclusively on vision.
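The distance-to-haptics mapping described above could be sketched as follows. This is a minimal illustration, not CleARsight’s actual implementation: the function name, the 2 m maximum range, and the linear falloff are all assumptions.

```python
def haptic_intensity(distance_m: float, max_range_m: float = 2.0) -> float:
    """Map controller-to-mesh distance to a haptic intensity in [0, 1].

    Intensity rises linearly as the obstacle gets closer; beyond
    max_range_m (an assumed cutoff) no feedback is produced.
    """
    if distance_m >= max_range_m:
        return 0.0
    return 1.0 - (distance_m / max_range_m)
```

In practice the returned value would drive the controller’s vibration amplitude each frame, using the distance reported by a raycast against the world mesh.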

Additionally, a spatial audio file is played from the pointer’s position on the environment mesh. Like an audible pedestrian crossing, this sound allows the CleARsight wearer to hear, in real space, their distance from an obstacle. Operating concurrently, this synchronized combination of haptics, spatial audio, and high-visibility holographic outlines supplements the wearer’s reliance on visual navigation.
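One common way to convey distance audibly, in the spirit of the pedestrian-crossing comparison above, is to vary the repetition rate of a beep. The sketch below assumes this approach; the interval bounds and range are illustrative values, not taken from the project.

```python
def beep_interval(
    distance_m: float,
    min_interval_s: float = 0.1,
    max_interval_s: float = 1.0,
    max_range_m: float = 2.0,
) -> float:
    """Return the pause between beeps: closer obstacles beep faster,
    like an audible pedestrian-crossing signal."""
    # Clamp the distance into [0, max_range_m] before interpolating.
    d = min(max(distance_m, 0.0), max_range_m)
    return min_interval_s + (d / max_range_m) * (max_interval_s - min_interval_s)
```

The audio source itself would be positioned at the raycast hit point on the mesh, so the spatialized sound also conveys direction, not just distance.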

In the home and other familiar locations, CleARsight allows the recording of contextual ‘placed audio memos.’ Spatially affixed to their recording location, these memos let CleARsight wearers, or their caregivers, drop a message of their choosing, which plays back automatically when approached. Today, CleARsight introduces a novel use of spatial computing technology to amplify the wearer’s senses. Tomorrow, this technology can be extended with other digital integrations, such as object recognition, voice recognition, and IoT controls, to build a robust sightless user interface.
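The proximity-triggered playback of a placed audio memo could be sketched like this. The class and function names, the 1.5 m trigger radius, and the re-arm-on-exit behaviour are assumptions for illustration; the source does not describe the internal logic.

```python
import math


class AudioMemo:
    """A recorded message pinned to a fixed position in the mapped space."""

    def __init__(self, position, clip_id, trigger_radius_m=1.5):
        self.position = position          # (x, y, z) in world coordinates
        self.clip_id = clip_id            # identifier of the recorded clip
        self.trigger_radius_m = trigger_radius_m
        self.played = False


def update_memos(wearer_pos, memos, play):
    """Play each memo once when the wearer first enters its radius.

    Leaving the radius re-arms the memo so it plays again on the
    next approach (an assumed behaviour).
    """
    for memo in memos:
        dist = math.dist(wearer_pos, memo.position)
        if dist <= memo.trigger_radius_m:
            if not memo.played:
                play(memo.clip_id)
                memo.played = True
        else:
            memo.played = False
```

Called once per frame with the headset’s tracked position, this keeps each memo from re-triggering while the wearer stands inside its radius.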

What's Next for the Project

We'd like to continue extending this application's feature set and bolster its usability through additional user testing and integrations, further improving its offering to low-vision individuals.

Some examples are:

Dynamic Object Recognition

  • Spatial mapping data persistently analyzed, matching against and expanding a 3D object database for on-demand contextual information and passive machine-learning training.

Voice Recognition

  • Custom voice-triggered commands, Action events (confirm, cancel, etc.), Speech-To-Text parsing and Personal Assistant integration.

IoT, Home & Ecosystem Integration

  • Open-Source API & SDK for developers to implement in products and services.

Geospatial Positional Synchronization

  • Landmarks, known paths, obstacles and other context-sensitive annotations.

High-Speed & Hazard Warnings

  • Alarms for high-speed moving vehicles, cliff overhangs, and other environmental obstructions.

Slide Deck: link
