Inspiration

Our project fits both the Health and Education themes of HackDavis: the compliment generator reduces stress and improves the user's mental health with positive comments, while the fact generator mines Reddit for the most surprising and interesting trending facts.

We were inspired to get the training data for our compliment generator from Reddit and Twitter, because those two platforms are normally viewed as the most toxic social media platforms. We thought that if we could use their content and turn it into something positive, it might inspire users to spread positivity and combat harassment on those sites.

What it does

Our project is an Alexa-integrated web app that has two major functionalities:

When the user expresses that they are unhappy, asks about their appearance, or asks for a compliment, Alexa responds with several unique compliments generated with a Markov chain.

When the user asks to learn something new, Alexa responds with an interesting fact from /r/todayilearned.

A major highlight of our project is that our user interface is not an Echo device; instead, we implemented our own interface, created a mascot, and designed a smooth, immersive user experience around it.

How we built it

We worked on the project in four phases: acquiring training data, implementing a Markov chain for compliment generation, integrating with the Alexa Voice Service (AVS) and Amazon Alexa, and implementing a front-end interface.

Training Data: We collected the training data from two sources using Python. Compliments came from the Twitter account @TheNiceBot (via the Twitter API and Twython) and from the subreddits /r/toastme and /r/freecompliments (via the Reddit API and PRAW); facts came from /r/todayilearned. We wrote scripts to filter out inappropriate or excessively specific content, leaving a set of 938 compliments and 762 interesting facts.
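The filtering scripts themselves aren't reproduced here; a minimal Python sketch of the approach, with an illustrative banned-word list and length cutoff (the real lists and thresholds differed), might look like:

```python
# Sketch of the training-data filter. The banned-word list and length
# threshold below are placeholders, not the actual values our scripts used.
BANNED_WORDS = {"ugly", "stupid"}   # illustrative negativity/profanity list
MAX_WORDS = 30                      # drop overly long, overly specific posts

def keep(text: str) -> bool:
    """Return True if a scraped post is suitable as training data."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words or len(words) > MAX_WORDS:
        return False
    return not any(w in BANNED_WORDS for w in words)

raw_posts = [
    "You have a wonderful smile!",
    "You are stupid.",   # rejected: banned word
    "word " * 40,        # rejected: too long / too specific
]
compliments = [p for p in raw_posts if keep(p)]
```

The same filter shape works for both the compliment and fact data sets; only the word lists change.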

Markov Chain: We first tested the feasibility of various key lengths using a Markov chain implementation we wrote in Python. The Markov chain uses the training data to build a probability table, then generates new text (in our case, compliments) from it. We found that a key length of three words produced an acceptably low rate of error/incoherence in the generated compliments (approx. 15-20%). We then re-implemented the chain in JavaScript for use in our AWS Lambda function.
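The idea can be sketched in a few lines of Python. This is an illustrative re-creation of the technique, not our exact implementation; it maps each three-word window in the training data to the words that follow it, then walks that table to produce new text:

```python
import random
from collections import defaultdict

KEY_LENGTH = 3  # the key length we settled on

def build_chain(corpus):
    """Map each 3-word window in the corpus to the words that follow it."""
    chain = defaultdict(list)
    starts = []
    for line in corpus:
        words = line.split()
        if len(words) <= KEY_LENGTH:
            continue
        starts.append(tuple(words[:KEY_LENGTH]))
        for i in range(len(words) - KEY_LENGTH):
            chain[tuple(words[i:i + KEY_LENGTH])].append(words[i + KEY_LENGTH])
    return chain, starts

def generate(chain, starts, max_words=20):
    """Walk the chain from a random starting key until it dead-ends."""
    key = random.choice(starts)
    out = list(key)
    while len(out) < max_words and key in chain:
        out.append(random.choice(chain[key]))
        key = tuple(out[-KEY_LENGTH:])
    return " ".join(out)

# Tiny stand-in corpus; the real one held 938 filtered compliments.
corpus = [
    "you are a wonderful and kind person",
    "you are a bright shining star today",
]
chain, starts = build_chain(corpus)
```

Longer keys copy the training data more faithfully (fewer incoherent outputs but less novelty); shorter keys invert that trade-off, which is why we landed on three.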

Alexa: We created a new Alexa skill named “Compbot” that understands when the user wants a compliment or a fact. Example utterances include “I want to learn something new,” “I’m feeling down today,” and “Give me a compliment.” The back-end, written in NodeJS, responds with either five compliments or a random fact, depending on the intent.
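The real back-end is a NodeJS Lambda function; the following Python sketch only illustrates the dispatch logic, and the intent names and helpers are hypothetical, not the skill's actual schema:

```python
import random

FACTS = ["TIL honey never spoils."]   # stand-in for the scraped fact set

def generate_compliment():
    # Stand-in for the Markov-chain generator described above.
    return "You brighten every room you walk into."

def handle_request(intent_name):
    """Route an intent to five compliments or one random fact."""
    if intent_name in ("ComplimentIntent", "FeelingDownIntent"):
        return [generate_compliment() for _ in range(5)]
    if intent_name == "LearnSomethingIntent":
        return random.choice(FACTS)
    return "Sorry, I didn't catch that."
```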

Front-End Interface: We used a NodeJS library called alexa-voice-service to help integrate our interface with the Alexa Voice Service. We created a cute mascot based on one of the logos associated with Amazon Alexa and animated it on a canvas to respond to user input.

Challenges we ran into

Since we weren’t able to acquire an Echo from the hardware lab, one challenge was creating an attractive interface that worked with Alexa. We solved this with a front-end website featuring a cute, interactive mascot.

We also wanted to avoid saying “Alexa, tell Compbot” in every command, as that broke immersion. We solved this with client-side JavaScript: after researching audio encoding and processing techniques, we appended the user’s verbal command to a pre-recorded “Alexa, tell Compbot” clip before sending each request to AVS.
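The splicing step amounts to concatenating two audio clips that share a format. Our client-side version was JavaScript; this Python sketch, using the standard-library wave module and stand-in silent clips, only illustrates the idea:

```python
import io
import wave

def prepend_wake_phrase(wake_bytes: bytes, command_bytes: bytes) -> bytes:
    """Concatenate two WAV clips with identical formats into one clip."""
    with wave.open(io.BytesIO(wake_bytes), "rb") as w1, \
         wave.open(io.BytesIO(command_bytes), "rb") as w2:
        params = w1.getparams()
        frames = w1.readframes(w1.getnframes()) + w2.readframes(w2.getnframes())
    buf = io.BytesIO()
    with wave.open(buf, "wb") as out:
        out.setparams(params)  # frame count is corrected when the writer closes
        out.writeframes(frames)
    return buf.getvalue()

def make_silence_wav(nframes):
    """Create a mono 16-bit 16 kHz WAV of silence, as stand-in audio."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(16000)
        w.writeframes(b"\x00\x00" * nframes)
    return buf.getvalue()

wake = make_silence_wav(1600)    # stand-in for the recorded "Alexa, tell Compbot"
command = make_silence_wav(800)  # stand-in for the user's spoken command
combined = prepend_wake_phrase(wake, command)
```

In practice the recorded wake phrase and the microphone capture must agree on sample rate, bit depth, and channel count before they can be joined this way, which is where most of our debugging time went.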

The biggest challenge we encountered while implementing the aforementioned solution was our unfamiliarity with audio encoding formats and how requests to AVS worked in the context of the NodeJS library that we were using. This required hours of research and several implementations before we were able to process the audio successfully without causing a noticeable delay for the user.

Accomplishments that we're proud of

We’re proud of implementing our own voice control interface for Alexa, and for improving the user experience of using Alexa by eliminating the wake command and adding a mascot.

What we learned

This was our first time working with many of these technologies. We learned a lot about Alexa, the Alexa Voice Service, and audio processing and encoding. We were also surprised by how effective Markov text generation was, and we came to appreciate the importance of user experience design.

What's next for Proton Positivity Generator

Short-term goals: Pull live posts from /r/todayilearned, and build a data set for generating original, coherent jokes.

Long-term goals: Apply recurrent neural networks to improve the coherence of generated responses, and add more features and commands.
