Inspiration
We were at the Hack Harassment talk, and we realized how much stress is placed on students who are bullied online. We wanted to address this by building a haven where stressed-out students can come and be understood without being judged. We wanted our program to understand the person on the other side of the computer, which gave us the idea to include emotional analysis in our program, along with images that react to the user.
What it does
The first thing our program does is call the Cisco Spark API to set up a chat room containing a bot named Raine. It prompts the user to type in an email address so the user can be added to the room and talk to the bot; the user and the bot are then both in the chat room. It uses the Python Requests module and OAuth to authenticate both the bot and the user into the chatroom. This was exceptionally difficult to pull off: we had a hard time figuring out how to separate our authentication keys from the rest of the request and how to structure the calls in Python so that Cisco would acknowledge our users. Every message the user sends to the bot is forwarded to a script that calls the IBM Watson Tone Analysis API to analyze the message for emotional content. Watson decides which emotion is most strongly expressed in the user's messages, and based on that emotion it produces a response intended to alleviate negative emotions or reinforce positive ones. The system can serve as many clients as we want at the same time, and it only takes one computer.
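The Spark setup step can be sketched roughly as follows. This is a minimal sketch, not our exact code: the token and email are placeholders, the endpoints follow the Cisco Spark REST API (`/rooms` and `/memberships`), and we use the standard library here to keep it self-contained (the project itself used the Requests module).

```python
# Sketch: build authenticated Cisco Spark API requests to create a chat
# room and add the user by email. Token and email are placeholders.
import json
import urllib.request

SPARK_API = "https://api.ciscospark.com/v1"
BOT_TOKEN = "YOUR_BOT_ACCESS_TOKEN"  # hypothetical -- issued via Spark OAuth


def spark_request(method, path, payload):
    """Build an authenticated Spark API request (constructed, not sent here)."""
    return urllib.request.Request(
        SPARK_API + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": "Bearer " + BOT_TOKEN,
            "Content-Type": "application/json",
        },
        method=method,
    )


# Create the chat room, then add the user by the email they typed in.
room_req = spark_request("POST", "/rooms", {"title": "Hey Raine"})
member_req = spark_request(
    "POST", "/memberships",
    {"roomId": "<room id from the /rooms response>",
     "personEmail": "student@example.com"},  # placeholder email
)
```

Sending `room_req` with `urllib.request.urlopen` would return a JSON body containing the new room's `id`, which is then used in the membership call.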
How I built it
The IBM Watson Tone Analysis script was written in Python and uses the service credentials generated by Bluemix to authenticate the API calls. The script sends the user's text to the Tone Analyzer, which returns its analysis as JSON output. The script then parses that JSON, comparing the emotion scores to find the maximum.
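The parsing step looks roughly like this. The sample payload mirrors the shape of a Tone Analyzer v3 response (an `emotion_tone` category with per-emotion scores), but the scores themselves are made up for illustration:

```python
# Sketch: pick the most strongly expressed emotion out of a Tone Analyzer
# JSON response. The response below is a hand-written sample, not real output.

sample_response = {
    "document_tone": {
        "tone_categories": [
            {
                "category_id": "emotion_tone",
                "tones": [
                    {"tone_id": "anger", "tone_name": "Anger", "score": 0.12},
                    {"tone_id": "joy", "tone_name": "Joy", "score": 0.08},
                    {"tone_id": "sadness", "tone_name": "Sadness", "score": 0.67},
                ],
            }
        ]
    }
}


def strongest_emotion(response):
    """Return (tone_id, score) for the highest-scoring emotion tone."""
    for category in response["document_tone"]["tone_categories"]:
        if category["category_id"] == "emotion_tone":
            top = max(category["tones"], key=lambda tone: tone["score"])
            return top["tone_id"], top["score"]
    return None


print(strongest_emotion(sample_response))  # -> ('sadness', 0.67)
```

The bot's reply is then chosen based on whichever emotion wins this comparison.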
Challenges I ran into
We wanted a wider variety of responses using the IBM Watson Dialog API, but there wasn't enough time. We also wanted audio and video, but again ran out of time.
Accomplishments that I'm proud of
We managed to make API calls to IBM Watson's Tone Analysis tool and parse the JSON output for the most strongly expressed emotion. We also learned how to use IBM Watson's Dialog tool, though we didn't have time to implement it.
What I learned
What's next for Hey Raine
We hope that in the future the user's messages will be accompanied by images corresponding to the detected emotions, changing as the user's emotions shift throughout the chat session; we did not have enough time to implement this. We also want to add text-to-speech and speech recognition of the user's voice, as well as multilingual capabilities, including training cognitive APIs to detect sign language. As the program grows, we can train it with data and make it more humanlike.