Navigation inside a building is often a challenge even for normally sighted people who can clearly read signs and maps. It becomes a real obstacle for visually impaired people, who cannot benefit from visual cues. NAVIO is an indoor navigation system that helps the user determine his/her position in a building and navigate to a chosen room. The App presents a very user-friendly interface with just a few buttons, all properly highlighted with high contrast. NAVIO communicates with the user through both large high-contrast text and voice.
Because the Global Positioning System is useless inside buildings, we chose a content-based image-retrieval method. Our algorithm recognizes the scene from the live camera view by matching it against features extracted from panoramic pictures previously taken at known locations. The maintenance burden for building management drops dramatically compared to approaches that rely on beacons or WiFi signals. At the same time, we avoid the drift problems that affect motion-sensor-based solutions.
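The matching step above can be sketched as follows. This is a minimal illustration, not NAVIO's actual implementation: it assumes binary feature descriptors (ORB-like, stored here as small integers) precomputed for each panoramic picture, and matches the live-view descriptors by Hamming distance; the panorama with the most close matches identifies the location.

```python
# Hypothetical sketch of content-based localization via descriptor matching.
# Descriptors are toy 8-bit patterns, not real image features.

def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

def count_good_matches(query, panorama, max_dist=2):
    """Count query descriptors whose nearest panorama descriptor is close.
    max_dist=2 is tuned for these toy 8-bit descriptors; real 256-bit
    descriptors would use a much larger threshold."""
    good = 0
    for q in query:
        if min(hamming(q, p) for p in panorama) <= max_dist:
            good += 1
    return good

def locate(query, database):
    """Return the location whose panorama best matches the live view."""
    return max(database, key=lambda loc: count_good_matches(query, database[loc]))

# Toy database of panoramic-picture descriptors, keyed by location.
database = {
    "lobby":    [0b10101010, 0b11110000, 0b00001111],
    "room_214": [0b11111111, 0b10000001, 0b01111110],
}
query = [0b10101011, 0b11110001]  # live-view descriptors, close to "lobby"
print(locate(query, database))    # -> lobby
```

A production system would of course use a real feature detector and an approximate nearest-neighbor index, but the retrieval principle is the same.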
The user chooses a database (provided by the building management) and finds his/her position by simply tapping the screen of the live camera view; it is easier than taking a picture. Once the system has calculated the position, it automatically tells the user and shows a map view with a flashing blue circle highlighting the user's position. A high-contrast green arrow shows the user's orientation, based on the compass, and helps guide navigation to the destination. The optimal calculated path is drawn on the map for better understanding.
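The path drawn on the map can be computed with a standard shortest-path search. The sketch below is illustrative only (the graph, node names, and distances are invented for this example): it models a floor plan as a weighted graph whose edge weights are walking distances in meters, and finds the optimal route with Dijkstra's algorithm.

```python
import heapq

# Hypothetical floor plan as an adjacency list: node -> [(neighbor, meters)].
floor_plan = {
    "entrance": [("hallway", 5.0)],
    "hallway":  [("entrance", 5.0), ("room_101", 3.0), ("stairs", 8.0)],
    "stairs":   [("hallway", 8.0), ("room_101", 12.0)],
    "room_101": [("hallway", 3.0), ("stairs", 12.0)],
}

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: return (distance, path) from start to goal,
    or None if the goal is unreachable."""
    queue = [(0.0, start, [start])]   # (distance so far, node, path taken)
    visited = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (dist + weight, neighbor, path + [neighbor]))
    return None

print(shortest_path(floor_plan, "entrance", "room_101"))
# -> (8.0, ['entrance', 'hallway', 'room_101'])
```

The resulting node sequence is what the map view would render as the highlighted route, with the compass arrow keeping the user oriented along it.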
As part of an organization devoted to the mobility and rehabilitation of low-vision patients, we constantly benefited from the feedback of our co-workers at Schepens Eye Research Institute, affiliated with Harvard Medical School. We improved our user interface based on their comments and let them try our solution in our office space.
Based on our experience in the field, we believe there is a strong need for indoor navigation for visually impaired people. We also think that our solution, created for the ChallengePost Hackathon, is a starting point for solving the problem, and that it can be further improved with more testing and feedback. The Connectability Challenge will be a very fruitful opportunity to do so.
Our solution could potentially be used in museums, public buildings, hospitals, and shopping malls. For a better user experience in these environments, the App could include additional tags with information relevant to the user: doctors, artworks, brands, and employee names.