What it does
A user tells Alexa a food they are eating, and Alexa first identifies the potential ingredients in that food. For example, hamburger ingredients could include cheese, ketchup, beef, onion, and pickles. Alexa lists the ingredients and asks the user to choose one whose taste they would like to enhance. The chosen ingredient is classified as predominantly bitter or predominantly sweet, and a song is played to complement that taste. When the user eats the food while listening to the music, the taste of the chosen ingredient is highlighted in the flavor profile of the food as a whole.
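The ingredient-to-taste-to-song step above can be sketched as a simple lookup. This is only an illustration of the flow; the food, taste labels, and song filenames below are hypothetical, not the skill's actual data:

```python
# Hypothetical sketch of the ingredient -> taste -> song flow.
# All data here is illustrative placeholder content.

INGREDIENTS = {
    "hamburger": ["cheese", "ketchup", "beef", "onion", "pickles"],
}

TASTE = {
    "cheese": "bitter",
    "ketchup": "sweet",
    "beef": "bitter",
    "onion": "bitter",
    "pickles": "bitter",
}

SONGS = {
    "sweet": "sweet_song.mp3",
    "bitter": "bitter_song.mp3",
}

def song_for(food: str, ingredient: str) -> str:
    """Return the song that complements the chosen ingredient's taste."""
    if ingredient not in INGREDIENTS.get(food, []):
        raise ValueError(f"{ingredient!r} is not a known ingredient of {food!r}")
    return SONGS[TASTE[ingredient]]
```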
How we built it
We created a custom Amazon Alexa skill using the Alexa skill development kit and wrote a serverless AWS lambda function in Python to handle the backend.
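An Alexa custom skill's Lambda backend receives a JSON request and returns a JSON response. A minimal sketch of what such a handler can look like is below; the intent and slot names (`FoodIntent`, `Food`) are illustrative, since the real interaction model defines its own:

```python
def lambda_handler(event, context):
    """Minimal Alexa skill handler sketch: route by request type and intent.

    Intent and slot names here are hypothetical; the skill's interaction
    model defines the actual ones.
    """
    request = event["request"]
    if request["type"] == "LaunchRequest":
        text = "Welcome to BitterSweet Sounds. What are you eating?"
    elif request["type"] == "IntentRequest" and request["intent"]["name"] == "FoodIntent":
        food = request["intent"]["slots"]["Food"]["value"]
        text = f"Here are some ingredients in {food}. Which one should I enhance?"
    else:
        text = "Goodbye."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": False,
        },
    }
```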
Challenges we ran into
We couldn't figure out how to view the logs when testing the Lambda function and the skill, which made debugging difficult. We also had trouble getting Alexa to play music, since Amazon places a lot of restrictions on playing audio files; we worked around this by uploading the audio files to Amazon S3 and having the skill access them from there. Because we limited ourselves to two taste categories, sweet and bitter (rather than all five basic tastes), ingredients that didn't fit either category well were sometimes hard to classify. There also was no clear, complete mapping from taste to song for us to model our skill on, so we had to reconcile the relationships defined by several different sources, and we could not find a simple way (i.e., a formula) to automate matching a song to a taste.
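The S3 workaround relies on Alexa's SSML `<audio>` tag, which can play a short clip from an HTTPS URL (Alexa imposes format limits on the MP3). A sketch of building such a response is below; the URL used in the usage example is a placeholder, not our actual bucket:

```python
def ssml_audio_response(s3_url: str, text: str) -> dict:
    """Build an Alexa response whose SSML plays an audio clip hosted on S3.

    Alexa's <audio> tag requires a publicly reachable HTTPS URL to an MP3
    that meets its format restrictions.
    """
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {
                "type": "SSML",
                "ssml": f"<speak>{text} <audio src='{s3_url}'/></speak>",
            },
            "shouldEndSession": True,
        },
    }
```

For example, `ssml_audio_response("https://example-bucket.s3.amazonaws.com/sweet.mp3", "Enjoy your meal.")` produces an SSML response that speaks the text and then plays the clip.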
Accomplishments that we're proud of
Successfully creating our first Alexa skill.
What we learned
This project helped us better understand how to use AWS products; we also built our first Amazon Alexa skill and improved our Python skills along the way.
What's next for BitterSweet Sounds
BitterSweet Sounds aims to become artificially intelligent, so that any food and its corresponding ingredients can be recognized without being hard-coded into the skill, and so that classifying ingredients and songs as bitter or sweet can also be automated. We also hope to further explore sound's influence on taste, for example by matching sounds to food texture or by incorporating the other basic tastes (sour, salty, umami) to better enhance flavor. We would also like to enhance tastes through lighting and color, either by changing the color of the light ring on the Amazon Echo or by having Alexa control external lighting, for example in a house.
There is potential to apply BitterSweet Sounds for health purposes, such as increasing the perceived sweetness of a food to satisfy a sugar craving without the person needing to eat much sugar. It could also alter the taste of a food a person dislikes to make it more palatable, which is useful when someone needs to eat something they don't enjoy the taste of. Stress distorts taste, which in most cases means cravings for unhealthy food high in sugar and fat; BitterSweet Sounds could provide a way to distort the taste of foods to match the distortion that stress causes.