Inspiration

We were inspired by a problem we noticed while volunteering at our local nursing home: some residents had trouble hearing or understanding what we were saying to them.

What it does

The app records what the user says and converts it to text. It then runs sentiment analysis to identify the tone of the speaker, shown through emojis. Entity analysis with the Google Natural Language API picks out the key terms in the sentence, and those terms are passed to the Giphy API to find relevant GIFs that portray what the speaker is saying to the listener.
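As a rough sketch of the analysis step, the snippet below uses the Google Cloud Natural Language client library to get a sentiment score and the most salient entities for one transcribed sentence. The `emojiFor` helper and its thresholds are our own illustration, not part of the API, and the client setup is simplified rather than Android-specific.

```kotlin
import com.google.cloud.language.v1.Document
import com.google.cloud.language.v1.EncodingType
import com.google.cloud.language.v1.LanguageServiceClient

// Map a sentiment score in [-1.0, 1.0] to a simple emoji (thresholds are our own choice).
fun emojiFor(score: Float): String = when {
    score > 0.25f -> "😊"
    score < -0.25f -> "😞"
    else -> "😐"
}

// Analyze one transcribed sentence: overall tone plus the most salient key terms.
fun analyzeTranscript(text: String): Pair<String, List<String>> =
    LanguageServiceClient.create().use { language ->
        val doc = Document.newBuilder()
            .setContent(text)
            .setType(Document.Type.PLAIN_TEXT)
            .build()

        // The sentiment score drives the emoji shown next to the transcript.
        val sentiment = language.analyzeSentiment(doc).documentSentiment
        val emoji = emojiFor(sentiment.score)

        // Entity analysis extracts key terms; keep the most salient ones for the GIF search.
        val keywords = language.analyzeEntities(doc, EncodingType.UTF8)
            .entitiesList
            .sortedByDescending { it.salience }
            .take(3)
            .map { it.name }

        emoji to keywords
    }
```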

How we built it

We first built a basic Android app and used the platform's speech recognizer to transcribe what the user says. The transcript was then analyzed on Google Cloud Platform: entity analysis picked out keywords, which were searched on Giphy, and the sentiment analysis API was used to calculate the overall mood of the speaker.
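The sketch below shows the two ends of that pipeline on Android, under some assumptions: launching the built-in speech recognizer and building a Giphy search request for a keyword. `SPEECH_REQUEST_CODE` is an arbitrary request code and `GIPHY_API_KEY` is a placeholder for a real key.

```kotlin
import android.app.Activity
import android.content.Intent
import android.net.Uri
import android.speech.RecognizerIntent

const val SPEECH_REQUEST_CODE = 100              // arbitrary request code
const val GIPHY_API_KEY = "YOUR_GIPHY_API_KEY"   // placeholder

// Launch the built-in speech recognizer; the transcript comes back in onActivityResult.
fun Activity.startListening() {
    val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
        putExtra(
            RecognizerIntent.EXTRA_LANGUAGE_MODEL,
            RecognizerIntent.LANGUAGE_MODEL_FREE_FORM
        )
        putExtra(RecognizerIntent.EXTRA_PROMPT, "Speak now")
    }
    startActivityForResult(intent, SPEECH_REQUEST_CODE)
}

// In onActivityResult: the first result is usually the best transcript.
fun extractTranscript(data: Intent?): String? =
    data?.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS)?.firstOrNull()

// Build a Giphy search URL for a keyword returned by entity analysis.
fun giphySearchUrl(keyword: String): String =
    "https://api.giphy.com/v1/gifs/search" +
        "?api_key=$GIPHY_API_KEY" +
        "&q=${Uri.encode(keyword)}" +
        "&limit=1&rating=g"
```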

Challenges we ran into

- learning Google Cloud Platform
- we had to abandon our first idea, leaving us with less time than we would have liked
- food (we're picky eaters)

Accomplishments that we're proud of

- using APIs for the first time
- using Google Cloud Platform for the first time
- using machine learning for the first time
- our first Android app

What we learned

- Google Cloud Platform and its APIs
- machine learning

What's next for GifLang

Making the project more precise by using Google's syntax analysis instead of entity analysis; syntax analysis would let us search for specific phrases rather than single words (see the sketch below). We could also add more languages using the Google Translation API, and add more emojis with more specific conditions so the detected emotions are more accurate.
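A minimal sketch of that idea, assuming the same Natural Language client library: run syntax analysis and group adjacent adjective/noun tokens into candidate phrases (e.g. "red car") to send to Giphy instead of single entity names. The grouping heuristic here is our own assumption, not something the API provides.

```kotlin
import com.google.cloud.language.v1.Document
import com.google.cloud.language.v1.EncodingType
import com.google.cloud.language.v1.LanguageServiceClient
import com.google.cloud.language.v1.PartOfSpeech.Tag

// Group adjacent adjective/noun tokens into candidate search phrases.
fun extractPhrases(text: String): List<String> =
    LanguageServiceClient.create().use { language ->
        val doc = Document.newBuilder()
            .setContent(text)
            .setType(Document.Type.PLAIN_TEXT)
            .build()

        val tokens = language.analyzeSyntax(doc, EncodingType.UTF8).tokensList
        val phrases = mutableListOf<String>()
        val current = mutableListOf<String>()
        for (token in tokens) {
            when (token.partOfSpeech.tag) {
                // Keep building the current phrase while we see adjectives or nouns.
                Tag.ADJ, Tag.NOUN -> current += token.text.content
                else -> {
                    if (current.isNotEmpty()) phrases += current.joinToString(" ")
                    current.clear()
                }
            }
        }
        if (current.isNotEmpty()) phrases += current.joinToString(" ")
        phrases
    }
```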

Built With

- Android
- Google Cloud Natural Language API
- Giphy API
