Inspiration

Technology is everywhere in the modern day. Parents equip their children with devices at a young age, so why not take advantage of this to improve their learning? We want to help children recognize everyday objects and learn to pronounce them (via text-to-speech).

What it does

NVision teaches children the names of everyday objects they encounter: the child takes a picture of an object, and the app identifies it and says its name out loud.

How we built it

We developed NVision in Android Studio. Java and XML were used to program the back-end, along with several JSON libraries. For image recognition, we used the Google Cloud Vision API, which our app communicates with via POST requests. We also used domain.com to create a website advertising our application on the web, which can be viewed here: NVision
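
As a rough illustration, here is a minimal sketch (not our exact code) of the kind of POST request the app sends. It uses the documented Cloud Vision REST endpoint with LABEL_DETECTION; API_KEY is a placeholder for a real key, and the call has to run off the Android main thread:

```java
import android.graphics.Bitmap;
import android.util.Base64;
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

public class VisionRequest {
    // Standard Cloud Vision REST endpoint; API_KEY is a placeholder.
    private static final String ENDPOINT =
            "https://vision.googleapis.com/v1/images:annotate?key=API_KEY";

    // Returns the raw JSON response with labels for the given photo.
    // Must be called from a background thread, never the UI thread.
    public static String labelImage(Bitmap photo) throws Exception {
        // Compress the camera bitmap to JPEG and base64-encode it.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        photo.compress(Bitmap.CompressFormat.JPEG, 90, buffer);
        String encoded = Base64.encodeToString(buffer.toByteArray(), Base64.NO_WRAP);

        // Ask for LABEL_DETECTION, which is what names the object.
        String body = "{\"requests\":[{"
                + "\"image\":{\"content\":\"" + encoded + "\"},"
                + "\"features\":[{\"type\":\"LABEL_DETECTION\",\"maxResults\":5}]}]}";

        HttpURLConnection conn = (HttpURLConnection) new URL(ENDPOINT).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes("UTF-8"));
        }
        try (Scanner in = new Scanner(conn.getInputStream(), "UTF-8")) {
            return in.useDelimiter("\\A").next();
        }
    }
}
```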

Challenges we ran into

At first, we encountered issues implementing the camera feature in our app. Android Studio is not exactly the friendliest programming environment! After a lot of debugging and some mentorship, we were able to get it working. Another challenge we faced was using the APIs within our app. The Google Vision image recognition API returns a JSON response describing the image, which required our app to communicate with the Google server, and none of us had experience adding network capabilities to our software.
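
For context, pulling the label names out of that JSON looks roughly like this sketch, assuming the documented Cloud Vision response shape (a "responses" array containing "labelAnnotations"):

```java
import org.json.JSONArray;
import org.json.JSONObject;

// Sketch of extracting label names from a Cloud Vision response, assuming
// the shape {"responses":[{"labelAnnotations":[{"description":...},...]}]}.
public static String[] parseLabels(String json) throws Exception {
    JSONArray labels = new JSONObject(json)
            .getJSONArray("responses")
            .getJSONObject(0)
            .getJSONArray("labelAnnotations");
    String[] names = new String[labels.length()];
    for (int i = 0; i < labels.length(); i++) {
        names[i] = labels.getJSONObject(i).getString("description");
    }
    return names; // e.g. ["dog", "mammal", "golden retriever"]
}
```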

Lastly, integrating our code was a challenge, because we each worked separately, using different libraries, code, and software. The app was relatively complex, so each of our parts was vastly different. We needed to first communicate with the camera, send the image to the server, retrieve the JSON response, and parse it into a string array to be read out loud via text-to-speech. Near the end of the 36 hours, we spent a lot of time simply putting the pieces together and making sure the app would run properly.
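
The last step of that pipeline, speaking the top label, comes down to Android's built-in TextToSpeech engine. A minimal sketch, assuming an Activity context and the label array from the parsing step (names are illustrative, not our exact classes):

```java
import android.content.Context;
import android.speech.tts.TextToSpeech;
import java.util.Locale;

// Illustrative wrapper: initializes the engine, then speaks the first label.
public class Speaker {
    private TextToSpeech tts;

    public Speaker(Context context) {
        tts = new TextToSpeech(context, status -> {
            if (status == TextToSpeech.SUCCESS) {
                tts.setLanguage(Locale.US);
            }
        });
    }

    public void sayLabel(String[] labels) {
        if (labels.length > 0) {
            // QUEUE_FLUSH interrupts anything still being spoken.
            tts.speak(labels[0], TextToSpeech.QUEUE_FLUSH, null, "nvision-label");
        }
    }
}
```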

Accomplishments that we are proud of

With most of the team new to Android development, this was a difficult and daunting task at first. We are proud to have finished with a functional app that has most of the features we set out to include. In addition, no one on our team had prior experience with APIs, so we are happy with what we created together.

What we learned

Apart from the technical skills we gained, we learned that communication and teamwork are crucial to success when working on projects. Dividing the workload and having teammates to rely on really helped us be more efficient overall. The level of programming required was also far beyond anything we had done in school, so we had to brush up on both our comprehension and our coding skills. Finally, we realized that our own knowledge could be severely lacking at times, and that we should ask for help when needed.

What's next for NVision

In the future, we would like to expand NVision to other cultures. Many children around the world do not speak English as their first language, so we'd like our application to announce the detected object in other languages. This would help not only kids who do not speak English, but also those trying to learn a new language. We already have a way to change the language of the text-to-speech, available in German as well as English, but we would have to use the Google Translate API to fully realize its potential.
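
Switching the spoken language is already a small change in the text-to-speech setup; translating the label text itself is the missing piece. A minimal sketch of the locale toggle, assuming a TextToSpeech instance like the one above:

```java
import android.speech.tts.TextToSpeech;
import java.util.Locale;

// Sketch of the language toggle: the same label is spoken in German simply
// by switching the TextToSpeech locale. A full multilingual version would
// first translate the label text (e.g. via the Google Translate API).
public static void setSpeechLanguage(TextToSpeech tts, boolean german) {
    int result = tts.setLanguage(german ? Locale.GERMAN : Locale.US);
    if (result == TextToSpeech.LANG_MISSING_DATA
            || result == TextToSpeech.LANG_NOT_SUPPORTED) {
        tts.setLanguage(Locale.US); // fall back to English
    }
}
```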

Furthermore, we recognize that young children spending too much time in front of screens can be detrimental to their health. We wish to implement parental controls as well as a timer to limit use of our app. Parents will also be able to track their children's learning progress, perhaps through a feedback system, such as a microphone, that tells whether their child's vocabulary is advancing.
