Challenges we ran into

The programming language we used to work with the Google APIs was Python. A common way to build a website around Python code is to combine it with Django, or to use a micro-framework like Flask. But because we wanted to focus on the concepts behind the website, and because of our lack of experience building websites with Python, we used only HTML and PHP to put together a simple prototype that presents our ideas. Another challenge was integrating the Python code for the Google Translation API with the image-analysis code (which uses the Google Vision API). In the end, we made the code more efficient and the Python program easier to use.
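The integration could be sketched roughly as follows. The function names and the flashcard-style pairing of labels with translations are illustrative assumptions, not our exact code; the Google client calls assume the google-cloud-vision and google-cloud-translate packages are installed and GOOGLE_APPLICATION_CREDENTIALS is configured.

```python
def detect_labels(image_path):
    """Label an image with the Google Vision API.

    Requires the google-cloud-vision package and valid credentials.
    """
    from google.cloud import vision  # deferred so the module loads without the package
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:  # open the image inside the function
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    return [label.description for label in response.label_annotations]


def google_translate(text, target):
    """Translate one string with the Google Translation API (v2 client)."""
    from google.cloud import translate_v2 as translate
    client = translate.Client()
    return client.translate(text, target_language=target)["translatedText"]


def translate_labels(labels, translate_fn=google_translate, target="es"):
    """Map each Vision label to its translation.

    `translate_fn(text, target)` is injected so the real Translation API
    client can be swapped out, e.g. for offline testing.
    """
    return {label: translate_fn(label, target) for label in labels}
```

Passing the translation step in as a function keeps the two APIs loosely coupled, which is what made the integration easier once we got it working.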

Accomplishments that we’re proud of

We believe that the combination of the Google Vision API and the Google Translation API in our project will be really helpful for people who want to learn foreign languages, and we’re proud of the fact that we built something we would have loved to have during high school language courses. This was also our first time using the Google Translation API, so we’re proud to have figured out how to use it and integrate it with our labels! It also took a while to learn how to open an image inside of a function, so we were excited to see that work at last. Last but not least, we’re so proud of ourselves for taking the time to do this during a busy grad school semester!

What we learned

We didn’t know there were so many Google APIs that are beneficial to developers. We can employ these APIs in mobile applications, websites, and even analytics platforms. As business analytics majors, we had an introduction to the Google Vision API in class, and were able to really get a feel for it during the hackathon. We learned that the Google Vision and Translation APIs can be used with Python code to create a program that is practical and beneficial. We were surprised at the advanced level of machine learning that both APIs employ. For example, the Google Vision API is able to recognize and analyze images via numerous machine learning algorithms and neural networks, whereas the Google Translation API has advanced features such as language detection. We also found that, with some practice, integrating both APIs into our code became easy, and we are now curious to experiment with other Google APIs in future projects.
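The language-detection feature could be used along the following lines: ask the Translation API what language a string is in, and only translate when it differs from the target. This is an illustrative sketch, not our submitted code; `detect_then_translate` is a hypothetical helper, and the real client requires the google-cloud-translate package plus credentials.

```python
def detect_then_translate(text, target="en", client=None):
    """Detect the language of `text`, then translate it into `target`.

    `client` defaults to a real Translation API client; any object with
    `detect_language` and `translate` methods (e.g. a test stub) also works.
    """
    if client is None:
        from google.cloud import translate_v2 as translate  # needs credentials
        client = translate.Client()
    detected = client.detect_language(text)["language"]
    if detected == target:
        return text, detected  # already in the target language
    translated = client.translate(text, target_language=target)["translatedText"]
    return translated, detected
```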

What's next for Languages Learning

We really wish to continue our learning journey after this hackathon. In the future, we can learn how to build a website using Python and connect the back end with the Google APIs we have already coded. The same concept can apply to ‘smart’ devices, where learners can download an application or an add-on and use their camera-enabled devices to take and upload photos for analysis and translation. In addition, we can use Google Firebase, which offers a real-time database, to store learners' photos and information, so learners can create their own accounts and review what they have learned through our website!
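One possible shape for the Firebase piece of that roadmap: a small record per uploaded photo, pushed to the Realtime Database with the firebase-admin SDK. The record schema, path names, and helper functions here are assumptions for illustration, not a settled design.

```python
import time


def make_photo_record(user_id, labels, translations, ts=None):
    """Build the JSON-serializable record we might store per uploaded photo."""
    return {
        "user": user_id,
        "labels": labels,              # English labels from the Vision API
        "translations": translations,  # label -> translated text
        "uploaded_at": ts if ts is not None else int(time.time()),
    }


def save_photo_record(record, cred_path, db_url):
    """Push a record to the Realtime Database (needs the firebase-admin package)."""
    import firebase_admin
    from firebase_admin import credentials, db
    if not firebase_admin._apps:  # initialize the app only once per process
        cred = credentials.Certificate(cred_path)
        firebase_admin.initialize_app(cred, {"databaseURL": db_url})
    return db.reference("photos").push(record)
```

Keeping the record-building step separate from the database call would let the review feature query a learner's history by user ID without touching the upload code.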
