Inspiration
We are passionate about learning different languages and fascinated by how the brain forms neural pathways during language acquisition, so we decided to create a program that builds direct links between words in a target language and the real-world objects they name.
What it does
The program first asks the user to select a setting (e.g. classroom). Based on that setting, it displays the name of an object in the foreign language the user is trying to learn, and plays a sound file with the word's pronunciation. The user is then asked to actively seek out the object and take a photo of it. If the photo matches the word, the program proceeds to another word.
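The photo check at the heart of that loop can be sketched as a simple label-matching helper (a hypothetical function of our own naming; the real behavior depends on the labels the vision service returns for the photo):

```python
def labels_match_target(target_object: str, detected_labels: list[str]) -> bool:
    """Return True if any label detected in the photo matches the target word.

    Hypothetical helper: comparison is case-insensitive and accepts partial
    matches, so a "Desk chair" label satisfies the target "chair".
    """
    target = target_object.strip().lower()
    return any(target in label.lower() for label in detected_labels)
```

If the function returns True, the program can move on to the next word; otherwise it asks the user to try another photo.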
How we built it
We built it using two Google Cloud APIs. The first, the Cloud Vision API, labels the objects in a picture so we can check whether they correspond to the word (object) displayed earlier. The second, the Cloud Text-to-Speech API, takes the word in the foreign language and outputs a sound file with its pronunciation.
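The two calls can be sketched roughly as follows (a minimal sketch, assuming the standard `google-cloud-vision` and `google-cloud-texttospeech` Python client libraries and that credentials are configured via `GOOGLE_APPLICATION_CREDENTIALS`; the function names and file paths are our own):

```python
def detect_labels(photo_path):
    """Label the objects in a photo with the Cloud Vision API."""
    from google.cloud import vision  # deferred import: optional dependency

    client = vision.ImageAnnotatorClient()
    with open(photo_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    # Return just the label strings, e.g. ["Chair", "Furniture", "Wood"]
    return [label.description for label in response.label_annotations]


def synthesize_pronunciation(word, language_code, out_path="word.mp3"):
    """Write an MP3 pronunciation of `word` via the Cloud Text-to-Speech API."""
    from google.cloud import texttospeech  # deferred import: optional dependency

    client = texttospeech.TextToSpeechClient()
    response = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text=word),
        voice=texttospeech.VoiceSelectionParams(language_code=language_code),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3
        ),
    )
    with open(out_path, "wb") as out:
        out.write(response.audio_content)
    return out_path
```

For example, `synthesize_pronunciation("silla", "es-ES")` would produce an MP3 of the Spanish word, and `detect_labels` on the user's photo gives the labels to compare against the displayed word.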
Challenges we ran into
- Learning and integrating both APIs.
- Using the camera in conjunction with the APIs and our programming environment.
- Implementing the GUI for the program.
Accomplishments that we're proud of
- Implementation of both APIs.
- Implementing the camera function that takes a photo and sends it through the Google Cloud service to identify and label the image.
What we learned
- How to use Google's APIs for its cloud services.
- GUIs are confusing to work with.
What's next for LLPro
- Version 2.0 coming soon.