Inspiration

As a group of five diverse individuals, all super excited to be here in the United States of America on exchange, gratitude was high on our minds. It was Anushka's birthday in the first week of February, and as newly made friends, we all celebrated with lots of food and fun party games. What was most memorable about this celebration, however, was when she video-called her family back home in India, where it is an annual tradition to visit Arushi, an organization for differently abled children, and celebrate with them. Seeing all these young children so full of life and ecstatic over the arrival of a chocolate cake, we realized how grateful we should be for the opportunities we have: the ease with which we can buy ourselves chocolate cakes, the ease with which we can navigate our surroundings, and our lack of dependence on others. It was at this moment that we all knew there was a need with no solution, and we wanted to involve ourselves in making that change in their lives, because this is a comfort every single individual deserves. This is how Blend came about: we wanted to change the letter 'i' in blind into the 'e' in blend, and make their integration into society seamless.

What it does

Blend is a software and hardware solution that brings numerous people together on a single platform and guides the visually impaired in real time. Blend uses the TensorFlow Lite library together with Raspberry Pi components to detect objects and collect data about the surroundings simultaneously. Our aim is to detect obstacles around a visually impaired person in real time and direct them safely to their destination. This is done by identifying each object as well as the distance between the user and the object, and relaying, through audio, real-time navigation guidance along with awareness of the surroundings.
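To make the pipeline concrete, below is a minimal sketch of what such a detection loop can look like with the TensorFlow Lite interpreter on a Raspberry Pi. The model file, label map, camera index, and confidence threshold are illustrative assumptions, not our exact configuration.

```python
# Minimal sketch of the obstacle-detection loop (illustrative only).
# "detect.tflite" and "labelmap.txt" are assumed stand-ins for a quantized
# SSD MobileNet model and its labels; the output ordering below matches the
# classic SSD TFLite detection models (boxes, classes, scores, count).
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="detect.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
_, height, width, _ = input_details[0]["shape"]

with open("labelmap.txt") as f:
    labels = [line.strip() for line in f]

camera = cv2.VideoCapture(0)  # assumes the Pi camera is exposed as /dev/video0

while True:
    ok, frame = camera.read()
    if not ok:
        break
    rgb = cv2.cvtColor(cv2.resize(frame, (width, height)), cv2.COLOR_BGR2RGB)
    interpreter.set_tensor(input_details[0]["index"], np.expand_dims(rgb, 0))
    interpreter.invoke()

    classes = interpreter.get_tensor(output_details[1]["index"])[0]
    scores = interpreter.get_tensor(output_details[2]["index"])[0]
    for cls, score in zip(classes, scores):
        if score > 0.5:
            print("Obstacle ahead:", labels[int(cls)])  # spoken aloud in the real device
```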

How we built it

Our creation process was split into two segments: software and hardware. Since all of us wanted to learn from this experience, we mutually decided that each team member would get a chance to work on both. We started off by ideating for the first few hours and finalizing the logistics of our idea. After this, we scattered around the campus looking for components we could use in our prototype. We wanted our solution to be as realistic as possible, and therefore wanted a working demonstration for both the software and hardware sections.

The software section was smooth sailing: we got started on it in almost no time, and before we knew it, we had a professional-looking digital prototype designed by the artistic students on our team. The first struggle came when we wanted to implement Microsoft Azure's chatbot feature. Unfortunately, creating a chatbot that can communicate via audio speech meant working in the console window of a Windows computer, and since everyone working on the software had Apple MacBooks, we were faced with a problem. Instead of wasting precious time waiting for a switch to happen, we transitioned to Amazon Web Services' chatbot, and after watching tutorial videos and reading guides, we were able to build Blendie, our very own Blend chatbot.
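As a rough illustration of how an application can talk to Blendie: assuming the bot was built with Amazon Lex (AWS's chatbot service) and published under a placeholder name and alias, a query from our app would look something like the sketch below.

```python
# Minimal sketch of sending a user's utterance to an Amazon Lex (V1) bot with boto3.
# The bot name, alias, and region are placeholder assumptions, not production values.
import boto3

lex = boto3.client("lex-runtime", region_name="us-west-2")

def ask_blendie(user_id: str, text: str) -> str:
    """Send one utterance to the bot and return its text reply."""
    response = lex.post_text(
        botName="Blendie",   # assumed bot name
        botAlias="prod",     # assumed alias
        userId=user_id,
        inputText=text,
    )
    return response.get("message", "")

print(ask_blendie("demo-user", "What obstacles are around me?"))
```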

Challenges we ran into

As for the hardware side, today was a roller coaster. We started off by realizing that the Raspberry Pi board given to us did not have a slot for a secure digital (SD) card, which was how we had planned to transfer our code. Our first task, therefore, was to find a more recent Raspberry Pi board. After a first visit to the nearest Safeway and eventually Fry's, two of our team members came back only to realize that we also needed a micro HDMI cable, turning what could have been one trip into two. After this, Amazon cancelled our same-day delivery of an ultrasonic sensor, and our perfectly functioning camera strip suddenly stopped working at 9:45 PM, leaving us no time to go purchase another one. At this point, we were all honestly dejected and so close to giving up, until a professor from Santa Clara told us he had an ultrasonic sensor we could use. We saw this as a ray of hope, and from that moment onwards, our team built up momentum once again. The most important part of building the product today was our team dynamic: whenever anyone was low, we would all come together to motivate one another, and we believe this was our greatest strength.
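For context on what that sensor does for us: a typical ultrasonic module reports distance by timing an echo pulse. The sketch below shows roughly how it can be read on the Raspberry Pi; the HC-SR04-style wiring and GPIO pin numbers are assumptions for illustration, not our exact setup.

```python
# Rough sketch of reading an HC-SR04-style ultrasonic sensor with RPi.GPIO.
# TRIG/ECHO pin numbers are illustrative, not our actual wiring.
import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24
GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def read_distance_cm() -> float:
    """Fire a 10-microsecond trigger pulse and time the echo (sound ~ 34300 cm/s)."""
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)

    pulse_start = pulse_end = time.time()
    while GPIO.input(ECHO) == 0:
        pulse_start = time.time()
    while GPIO.input(ECHO) == 1:
        pulse_end = time.time()
    return (pulse_end - pulse_start) * 34300 / 2

try:
    while True:
        print(f"{read_distance_cm():.1f} cm")
        time.sleep(0.5)
finally:
    GPIO.cleanup()
```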

Accomplishments that we're proud of

We successfully designed a device that recognizes obstacles in the user's surroundings and converts this information into audio for the user to hear. We were also able to create a chatbot from scratch to make our application more user-friendly, learning the process from both Amazon Web Services and Microsoft Azure while making use of Google APIs. Especially given the setbacks we kept facing as a team, we are incredibly proud of how far we came in just a day, and we look forward to hearing the judges' feedback!
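The text-to-audio step can be as small as the sketch below; gTTS and the mpg123 player are used here purely for illustration (the specific speech library is an assumption), but any text-to-speech engine on the Pi fits the same pattern.

```python
# Minimal sketch of speaking a detection result aloud.
# gTTS and mpg123 are illustrative choices, not necessarily what ships in Blend.
import subprocess
from gtts import gTTS

def announce(message: str) -> None:
    """Convert a text warning into an MP3 and play it through the speaker."""
    gTTS(text=message, lang="en").save("alert.mp3")
    subprocess.run(["mpg123", "-q", "alert.mp3"], check=True)

announce("Chair detected two meters ahead, please veer slightly left.")
```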

What we learned

This project was a learning experience for us all, because at numerous points we were stepping outside our comfort zones. On the technical side, we learnt about deep learning, text-to-speech and speech recognition, the use of different electronic components for the hardware, chatbot building, and an efficient way to integrate these aspects together. Holistically, we learnt the importance of working as a team and giving each other honest, constructive criticism, because in the end, this is what helps us grow.

What's next for Blend

Blend currently has the technology base in place for both the hardware and software solutions. The hardware has been fabricated and simply needs to be redesigned to be ready for commercial use. The software side, on the other hand, has a working digital prototype with the chatbot's logic ready, which means it can soon be launched as an official application. Blend aims for the application to eventually pair the user and their close ones on the same platform, giving all users information about each other, as well as giving each user personalized recommendations about products they should purchase based on their previous experiences.

Built With

  • api-google
  • hardware-raspberrypi
  • languages-python
  • microsoft
  • tensorflow