Inspiration

Last year, our uncle suddenly developed cataracts and our grandma developed a retina problem. Both of them were losing the ability to see print. We thought about how our grandma doesn't have a smartphone and can't see the text on her computer, and how our uncle can barely see the words on his phone. That made us think about people without devices, or without access to the internet 24 hours a day, who can't search for information. We also thought about how our mom used to work with people who had low literacy skills. It can be embarrassing to search for information when you can't read as an adult. All of this convinced us that a voice-activated search would be helpful.

What it does

The purpose of this project is to give everyone access to digital information. The device is a large structure that resembles a phone booth. When users enter the booth, they can ask a question aloud, and a device inside the booth answers with a voice. The device has a screen so the user can also read the answer as subtitles. A ramp between the door and the ground provides easy access into the booth. A speaker with a hemispheric (sound dome) design sits above the user and projects sound directly toward them rather than to the outside world. The device's screen is polarized so that people outside the booth cannot see what the user is asking.

How we built it

We built an example application using Swift and Xcode. We also used Procreate to design the features of the booth.

Challenges we ran into

While programming the map portion of the example app, we kept crashing it. Whenever we tried to open the page with the map, the app would crash, and there was no obvious problem in the code. Another challenge was the voice interaction between the user and the device. Right now, voice control is beyond what we know how to program, so we can plan what we want the device to do, but we cannot build it just yet.
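For context, a stripped-down map page looks something like the sketch below (the view name and coordinates are illustrative, not taken from our app). Isolating the map into a minimal, self-contained view like this is one way to test individual pieces and narrow down a crash:

```swift
import SwiftUI
import MapKit

// Hypothetical minimal map page, isolated so it can run on its own.
struct BoothMapView: View {
    // Using a fully initialized region avoids force-unwrapping
    // optionals, a common source of hard-to-trace map crashes.
    // The coordinates here are arbitrary placeholders.
    @State private var region = MKCoordinateRegion(
        center: CLLocationCoordinate2D(latitude: 43.16, longitude: -77.61),
        span: MKCoordinateSpan(latitudeDelta: 0.05, longitudeDelta: 0.05)
    )

    var body: some View {
        Map(coordinateRegion: $region)
    }
}
```

If a view this small still crashes when it loads, the problem is in the map setup itself; if it works, the bug lives in whatever the full page adds on top.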

Accomplishments that we're proud of

We solved the first challenge by testing the individual pieces, finding the cause of the map crash, and fixing the problem. We are also proud of all the designing we completed. We may not be able to build the device we want at this moment, but we have our design and know what we want to accomplish beyond PygHack. We are proud of our teamwork: we could each work on our own objectives and then collaborate to help each other with any issues. The hackathon did not feel like work; it was awesome.

What we learned

We learned how to connect Google to our Xcode app. We learned what it takes to design a structure that works for everyone. We also researched existing inventions we could use in our booth, like the sound dome and polarized screens.

What's next for Helping Out Superman

The next goal is to learn how to make voice interaction between the user and the device possible. Once we figure this out, we can refine our design and hopefully make it a reality.

Built With

procreate, swift, xcode