Another angle of the GuideBot. The GuideBot serves as a visually impaired person's eyes into their environment. It can recognize obstacles.
Our prototype consists of an iPhone camera running OpenCV software with internet access, mounted on an NXT, which provides basic motor capabilities.
A visually impaired person can control the robot through speech as well as this gyroscopic controller (pictured here).
There is a person at our school who is unfortunately visually impaired. Usually, a person with a visual impairment has to rely on their own senses with a walking stick, the guidance of another individual, or the senses and guidance of a guide dog. This person in particular is allergic to dogs, so he typically navigates his day with the aid of a friend or on his own with his walking stick. While brainstorming, we were immediately drawn to this individual's situation, and we decided to create something that would not only help in that particular case, but could also grow into a small business able to reproduce the product on a global scale and revolutionize a currently cumbersome field.
What it does
On the surface, our solution is simple: a robot accompanies the individual, responds to environmental stimuli, and communicates the presence of obstacles to the individual, who then reacts. It can also recognize objects around the user and tell the user what they are. The robot can be controlled with voice commands or gyroscopic control. Unlike walking sticks or guide dogs, which only tell the user that an obstacle exists, our project actually recognizes obstacles in the user's path and alerts the user through audio, telling him or her what stands in the way. The user, now knowing more about what is in their way, can then give the robot a voice command telling it which direction to proceed in to continue the journey.
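The alert-then-command loop above can be sketched in a few lines. Everything here is illustrative, not our actual API: `build_alert` stands in for the text-to-speech sentence we generate from a recognized obstacle, and `parse_direction` for the way a voice transcript is reduced to a drive direction.

```python
# Hypothetical sketch of the alert/response loop; names and the
# alert wording are invented for illustration.
ALERT_TEMPLATE = "There is a {label} about {meters:.0f} meters ahead."

def build_alert(label, meters):
    """Turn a recognized obstacle into the sentence read aloud to the user."""
    return ALERT_TEMPLATE.format(label=label, meters=meters)

DIRECTIONS = {"left", "right", "forward", "back", "stop"}

def parse_direction(utterance):
    """Pick the first known direction word out of a voice transcript."""
    for word in utterance.lower().split():
        if word in DIRECTIONS:
            return word
    return None  # no recognizable direction in the utterance
```

In practice the obstacle label and distance come from the OpenCV pipeline, and the utterance from the speech recognizer.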
How we built it
We built a simple robot in Mindstorms NXT as a test platform, but our service will work with any robot that can hold a camera, such as a drone. We built a Bluetooth serial library from the ground up that runs precompiled programs on the robot, moving it through serial commands. We then ran this library off of a Node.js server, which coordinated data gathered from the iPhone camera and allowed the user to control the robot through either voice or gesture commands.
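To give a flavor of what the Bluetooth serial layer does, here is a minimal sketch of building one NXT motor telegram in Python. The byte layout follows the published LEGO NXT Bluetooth direct-command protocol as we understand it; treat the specific opcodes and mode bytes as assumptions to verify against the official developer kit, not as our exact library code.

```python
import struct

# Assumed NXT mode bits: MOTORON (0x01) | REGULATED (0x04)
MOTOR_ON = 0x01 | 0x04

def set_output_state(port, power):
    """Build a SETOUTPUTSTATE telegram for motor `port` at `power` (-100..100)."""
    telegram = struct.pack(
        "<BBBbBBbBI",
        0x80,      # direct command, no reply requested (assumed opcode)
        0x04,      # SETOUTPUTSTATE (assumed opcode)
        port,      # output port 0-2
        power,     # power set point, signed byte
        MOTOR_ON,  # mode byte
        0x01,      # regulation mode: motor speed
        0,         # turn ratio
        0x20,      # run state: RUNNING
        0,         # tacho limit: 0 = run forever
    )
    # Every Bluetooth packet is prefixed with its length, little-endian.
    return struct.pack("<H", len(telegram)) + telegram
```

The returned bytes would be written straight to the paired Bluetooth serial socket; the Node.js server sits above this layer and decides which telegrams to send.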
Challenges I ran into
Possibly the largest challenge we ran into was optimizing the code. We had a very complex stack with more than five different languages, and we wanted the user to be able to control the robot responsively through a variety of inputs, including gesture, gyroscopic control, and speech. In the end, by using lightweight libraries, eliminating as much networking as possible, and writing clean code in languages close to the machine, we were able to make a responsive robot despite the large tech stack.
Originally, our team was split in two. One group focused on the visual recognition capabilities of the robot, while the other focused on using the Samsung Gear S2's gyroscopic capabilities to create a smooth control system that could direct the robot along the user's walking path. The Gear S2, however, does not use Android Studio or Java for app development, but rather Tizen's industrial-grade IDE, which uses C. I had never worked with C before, and I had to pore over the APIs to understand how to structure my code to access the gyroscope values. We also had to figure out how to get information from the robot's camera to Firebase, and finally to the smartwatch, which would read out the information via audio. While we were able to link information from the camera to Firebase, the smartwatch's development environment proved too difficult to work with, and unfortunately we had to look to other options.
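Although the watch app itself had to be written in C, the control idea is language-independent: read the gyroscope's tilt angles and map them to drive commands. The sketch below shows that mapping in Python; the threshold value and command names are invented for illustration, not taken from our Tizen code.

```python
# Illustrative tilt-to-command mapping for a wrist-worn controller.
TILT_THRESHOLD = 15.0  # degrees of tilt before we treat it as intentional

def tilt_to_command(pitch, roll):
    """Map pitch/roll (degrees) to a drive command; the larger tilt wins."""
    if abs(pitch) < TILT_THRESHOLD and abs(roll) < TILT_THRESHOLD:
        return "stop"  # wrist roughly level: don't move
    if abs(pitch) >= abs(roll):
        return "forward" if pitch > 0 else "back"
    return "right" if roll > 0 else "left"
```

A dead zone like `TILT_THRESHOLD` is the usual way to keep ordinary arm movement from jittering the robot around.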
Another issue we ran into was controlling the NXT through Python. The NXT robot was originally used just for concept design and prototyping, because we had originally hoped to use a drone to complete our hack. When that was no longer an option, we decided to go ahead with the NXT. Since we were not using a normal controller, and since we wanted to transfer NXT values to Firebase, we had to create our own control system over Bluetooth from the ground up using Python. We also built a custom voice-control system in which the user could give spoken commands to steer the robot, which gave us an alternate control scheme once we realized the smartwatch system might not be viable.
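The voice-control layer ultimately reduces to a dispatch table from spoken directions to motor actions. This is a hypothetical sketch in the spirit of our Python-over-Bluetooth layer: the motor labels, power values, and the injected `send` callback are all assumptions standing in for writing real telegrams to the NXT.

```python
# Hypothetical spoken-command dispatcher; (motor, power) pairs are invented.
COMMAND_TABLE = {
    "forward": [("B", 75), ("C", 75)],    # both drive motors ahead
    "back":    [("B", -75), ("C", -75)],
    "left":    [("B", -50), ("C", 50)],   # spin in place
    "right":   [("B", 50), ("C", -50)],
    "stop":    [("B", 0), ("C", 0)],
}

def dispatch(command, send):
    """Look up `command` and hand each (motor, power) pair to `send`.

    Returns False for unrecognized commands so the caller can re-prompt.
    """
    actions = COMMAND_TABLE.get(command)
    if actions is None:
        return False
    for motor, power in actions:
        send(motor, power)
    return True
```

Passing `send` in as a callback keeps the table testable without a paired robot.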
Accomplishments that I'm proud of
First and foremost, as a seasoned team that has won and mentored at many hackathons, including LA Hacks, HS Hacks I and II, Los Altos Hacks, and more, we welcomed other teams to ask us any questions they had, and we were more than happy to set aside our own project when necessary and help out younger hackers. We love the hacker community, and we want to see it grow :).
Our team is extremely happy with the results of our hack. We were working with systems that were extremely high level, such as OpenCV for visual recognition, gyroscopic sensors, and audio recognition for commands. Although there were aspects of this hack that we unfortunately could not get to function, we realize that this is just an **MVP (Minimum Viable Product)**, which is an essential first step in creating a small business around a revolutionary product. Since all members of our hacking team are extremely proud of and passionate about this product, we are sure to take the lessons and accomplishments from this hackathon and apply them to further refinement of our product.
What we learned
We were not familiar with C, which we had to use for the Photon and a smartwatch app we tried to implement. We plan to finish this app in the future, since we were not able to fully connect the gyroscopic control. In addition, we learned a lot about code optimization, and how languages like C run much faster and much closer to the machine than languages like Python.
We not only expanded our teamwork abilities, but also had the opportunity to learn a variety of coding technologies. For example, we learned how to work with Azure and interpret Bluetooth information among devices in order to capture the data that was essential to our hack. We believe that the combination of hardware and software we had to create, as well as the extensive debugging process we ventured upon, taught us how to polish our projects.
What's next for RoboVision
In the future we plan to expand our company and product and make RoboVision a global brand. We envision expanding our product beyond the robot we have created here to additional products such as drones that accompany individuals, released in an affordable, easily manufactured format to revolutionize the assistance industry for the visually impaired. People who are unable to pay the high cost of a guide dog, but do not want to sacrifice their safety or mobility, will now have a viable alternative.