Inspiration
Assistive technology keeps getting more expensive, not only for the visually impaired but for other differently abled individuals. Read the paragraphs below if you would like a more in-depth description.
What it does
Using a camera attached to glasses, a co-processor, and a vision API, the system can detect and audibly identify objects when the user issues a voice command or taps a button. Parents can also use the child-monitoring feature.
How we built it
We used a Raspberry Pi and a USB webcam for our hardware. On the Raspberry Pi, we used the pre-installed Node-RED. We linked Node-RED to Google Cloud using JSON credentials, and we used the Google Vision API for object detection. For Google Assistant integration, we used IFTTT to send a webhook to our Node-RED server and trigger our code.
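As a sketch of the Node-RED side: a function node along these lines (hypothetical, not our exact flow, and the thresholds are illustrative) could take the Vision API's label annotations and build the sentence that gets spoken aloud.

```javascript
// Sketch of a Node-RED function-node helper: turn a Google Vision API
// label-detection response (labelAnnotations with description/score)
// into a sentence for text-to-speech. Thresholds are illustrative.
function labelsToSpeech(visionResponse, minScore = 0.7, maxLabels = 3) {
  const labels = (visionResponse.labelAnnotations || [])
    .filter(l => l.score >= minScore)   // drop low-confidence guesses
    .slice(0, maxLabels)                // keep only the top few labels
    .map(l => l.description.toLowerCase());

  if (labels.length === 0) return "I'm not sure what that is.";
  if (labels.length === 1) return `I see a ${labels[0]}.`;
  return `I see a ${labels.slice(0, -1).join(', a ')}, and a ${labels[labels.length - 1]}.`;
}

// In a Node-RED function node this would look roughly like:
// msg.payload = labelsToSpeech(msg.payload); return msg;
```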
Challenges we ran into
The biggest challenge was Node-RED: none of us had ever really used it before. We were also new to the Google Cloud APIs.
Accomplishments that we're proud of
- CG concept art
- Use of Google Cloud to offload processing
- Using Node-RED successfully for the first time
What we learned
We learned not only about integrating Node-RED, the Raspberry Pi, and Google Cloud, but also about what dedicated people can do in a surprisingly short period of time.
What's next for Simple Smart Glass
On the road ahead, Simple Smart Glass will expand in a number of ways. First, we want to improve the quality of the hardware and move out of the prototyping phase into final development. Second, we would like to streamline the UI and UX, making our hardware and software simpler to set up and use. We are also considering expanding to a larger market by offering broader functionality, similar to Google Glass.
For someone with impaired vision, life can be an uphill battle. Navigating the ever-changing landscape of our modern world is a challenge even for someone with perfect vision. Modern technology offers a number of ways to meet this challenge, but unfortunately these solutions are incredibly expensive. The most common object detection systems on the market cost anywhere between $2,500 USD and $5,000 USD, putting them out of reach for most individuals. The goal and inspiration behind Simple Smart Glass was to bring this price down to under $100 USD, making the glasses not only affordable but also fully customizable for the end user.
Simple Smart Glass is a sophisticated object detection system with full voice control. The system is designed to be mounted by the user onto a pair of sunglasses or reading glasses, and uses a co-processor and cloud vision processing to tell the user what they are seeing once activated by a voice command or the touch of a button. The near-instant response gives the user an idea of what they're "looking at". Not only this, but the glasses allow remote access through a web browser, enabling parents, guardians, or caretakers to see what the wearer sees in near real-time.
To accomplish this sophisticated design on a budget, we relied heavily on cloud vision processing. The data from the camera is sent to the cloud by a Raspberry Pi, a small computer that fits in a pocket or bag. We used IFTTT (If This Then That) to create a trigger, which is then processed by Node-RED and sent to Google Cloud, where an object detection algorithm determines what the wearer is "looking" at. We lean on cloud processing because of what it means for the hardware: if the majority of the heavy lifting doesn't have to happen locally, the device can be made not only smaller, but less expensive.
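The flow above can be sketched as a small pipeline. The function names below are hypothetical stand-ins for the real Node-RED nodes; each stage is passed in so that the Pi only captures an image and plays audio, while detection happens in the cloud.

```javascript
// Sketch of the end-to-end flow (names are illustrative, not our exact code):
// IFTTT webhook -> capture frame -> cloud object detection -> speak result.
// Each stage is injected as a function, so the heavy lifting (detect) can
// run in the cloud while the Raspberry Pi only captures and speaks.
async function handleTrigger({ capture, detect, speak }) {
  const image = await capture();        // frame from the USB webcam on the Pi
  const labels = await detect(image);   // object detection in the cloud
  const sentence = labels.length
    ? `I see ${labels.join(' and ')}.`
    : 'I could not detect anything.';
  await speak(sentence);                // local text-to-speech output
  return sentence;
}
```

Because the stages are injected, the same handler works with a stub camera and detector on a laptop during development and the real hardware on the Pi.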
We created something that we genuinely believe can better the lives of others and that, if taken to market, would improve the lives of people across the world. Along the way, we not only learned about the technologies involved and expanded our skillsets, but also learned about ourselves and how to work as a team.