1. Inspiration
According to recent statistics, one billion people, or 15% of the world's population, experience some form of disability, whether visual, motor, auditory or neurological. But what does it actually mean to live with a disability? Wikipedia offers a broad definition: "A disability is any condition that makes it more difficult for a person to do certain activities or effectively interact with the world around them (socially or materially). These conditions, or impairments, may be cognitive, developmental, intellectual, mental, physical, sensory, or a combination of multiple factors. Impairments causing disability may be present from birth or can be acquired during a person's lifetime." On paper the concept seems easy to define, but in real life people with disabilities face countless obstacles to living a normal life, in a world that rarely accommodates special needs or adapts itself to include everyone. Can you imagine losing your voice or your hearing for a few days and then getting it back? Starting from this simple thought experiment, let's put ourselves in a less fortunate person's shoes: the struggle is real, and society needs to develop more inclusive solutions so that everyone can live a healthy, creative and normal life.
2. What it does
Now, let's imagine a more complex situation: you are a person with a disability who needs to read a book you just received, but cannot. Or worse, you need to read the information about a product you want to buy, but have no way to do so. Everyday life is hard for a disabled person in many situations, and we can work to improve that in the near future. Picture an app that could be your best friend on a bad day, assisting you in various situations, helping you stay on top of your game, making real-life interaction easy and solving big issues in seconds.
3. How we built it
The aim was to close the gap between blind users and the world changing around them. Relying on AI was the obvious first choice, and to minimise hand use, users control the system through a voice user interface (VUI). The system contains three major components: 1. the Observers; 2. the Actuator; 3. the Sender. The role of the Observers is to capture the user's command via the VUI; for this, the Google Assistant API was selected. The app integrates with the API, which recognises and analyses the speech and converts it into a command. We also need to describe the world around the user, so the mobile camera is paired with an object-detection AI module (YOLO, You Only Look Once) and the detected objects are passed on to the speaker. The Actuator converts the user's command received via the VUI into an action (for example, when the user says "zoom in" or "next page"). The Sender delivers information back to the user; there are two kinds of sender: the speaker and the Braille device.
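The Actuator's job, converting a recognized phrase into an action, can be sketched as a small command table. This is a minimal illustration, not the project's actual code; the command names come from the examples above, and the action shapes are assumptions.

```javascript
// Minimal sketch of the Actuator: maps a phrase recognized by the VUI
// to an action object. Command names ("zoom in", "next page") follow the
// examples in the text; the action fields are illustrative.
const actions = {
  "zoom in":   () => ({ type: "zoom", delta: +1 }),
  "zoom out":  () => ({ type: "zoom", delta: -1 }),
  "next page": () => ({ type: "page", delta: +1 }),
  "prev page": () => ({ type: "page", delta: -1 }),
};

function actuate(command) {
  // Normalize the transcript before lookup, since speech recognizers
  // vary in casing and surrounding whitespace.
  const handler = actions[command.trim().toLowerCase()];
  return handler ? handler() : { type: "unknown", command };
}
```

A table like this keeps the Observer (speech-to-text) decoupled from the Actuator: adding a new voice command is one new entry, with no change to the recognition layer.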
The project contains two main parts: software and hardware. How do these components work together? Let's take reading as a case study, something blind people face on a daily basis. Through the smart mobile app the user uploads a document, the app scans it using AI, and the user then selects the sender that suits them best.
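The reading pipeline above (upload, scan, deliver via the chosen sender) can be sketched as three stages. The function names are hypothetical, the OCR step is stubbed where the real app would call Tesseract, and the Braille mapping covers only a few letters to show the shape of the output.

```javascript
// Sketch of the reading case study: document -> OCR -> chosen Sender.
// The `ocr` argument stands in for the Tesseract stage.
function readDocument(doc, sender, ocr) {
  const text = ocr(doc);   // scan the uploaded document
  return sender(text);     // deliver via the sender the user selected
}

const senders = {
  speaker: text => ({ channel: "speaker", payload: text }),
  braille: text => ({ channel: "braille", payload: toBrailleCells(text) }),
};

// Toy Braille mapping for three letters, just to illustrate the data shape;
// unknown characters fall back to the "full cell" symbol.
const BRAILLE = { a: "⠁", b: "⠃", c: "⠉" };
function toBrailleCells(text) {
  return [...text.toLowerCase()].map(ch => BRAILLE[ch] ?? "⠿");
}
```

Keeping the sender behind a common interface is what lets the same scanned text reach either the speaker or the Braille device without changing the pipeline.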
In another case study, the user wants to know about nearby events they can attend. The user requests them from the app by saying "nearby events". The app fetches them from Google, and the response is delivered via the speaker or the Braille device.
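This request/response flow can be sketched as a small intent handler. Everything here is an assumption for illustration: the events source is stubbed where the real app would query Google, and the sender is any delivery function (speaker or Braille device).

```javascript
// Hypothetical handler for the "nearby events" case study.
// `fetchEvents` stands in for the Google query; `sender` is the
// delivery channel chosen by the user (speaker or Braille device).
function handleRequest(utterance, fetchEvents, sender) {
  if (utterance.trim().toLowerCase() !== "nearby events") {
    return sender("Sorry, I did not understand.");
  }
  const events = fetchEvents();
  // Number the results so a screenless user can refer back to them.
  const summary = events.length
    ? events.map((e, i) => `${i + 1}. ${e.name} at ${e.place}`).join(". ")
    : "No events found nearby.";
  return sender(summary);
}
```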
More insights will follow in the video, for a better understanding of the concept.
4. Challenges we ran into
Connecting at the last minute made it a bit more difficult to work as a team; it might have seemed we had lost the starting point, but we worked hard to get back on track and divided the tasks according to skills and availability. Producing a great design while taking both user needs and technical requirements into consideration was another challenge, which we tackled with patience and debate during the calls we managed to set up. Anything can be overcome with creativity, discussion and a willingness to see things from a new perspective.
5. Accomplishments that we're proud of
There are many aspects we could mention: coding, inclusive design, brainstorming and more. Above all, dividing the workload like pros and the teamwork we achieved in a short time stand as proof that great things can happen when you have inspiration and the will to help a big cause. Choosing to enhance Braille devices also shows there is always room for improvement, regardless of the domain or industry. On a more personal note, working in a virtual team from the start and still producing notable results is another point worth mentioning.
6. What we learned
Working in a virtual team can be challenging, especially asynchronously, across different time zones and towards a common deadline. A global team requires, above all, great communication skills and the ability to understand the cultural background of your peers. Although working remotely gave us greater flexibility, we realised that in a virtual setting you need to be three times more organised than in real life: it takes attention to detail from the whole team, and sharing your ideas at the right time, to push the tasks forward.
7. What's next for The world in your hand - enhanced Braille devices
After completing the coding phase and launching the app, we plan to promote VR-Braille to device manufacturers as widely as possible, expanding progressively to medical facilities and medical personnel who interact with these patients. In a test phase, around 100 people with this disability will use the app, decide whether it helps them in the short and long term, and provide feedback on its features. Taking that feedback into consideration, we would also approach specific NGOs to spread the word on a global scale. After the alpha phase, we need to evaluate whether the app is socially accepted, and to what degree, and whether it has a positive impact on this user group. If so, expanding the app to other disability segments and improving its features will be the natural next steps for this project.
Built With
- c++
- cloud
- javascript
- ocr
- tesseract


