For many young children and children with disabilities, expressing their emotions can be difficult. By the age of two, most children are able to recognize emotions but aren’t able to convey them to others. This can cause distress for both the child and the parent. BayHax aims to fix this problem.

BayHax's name was inspired by Disney's 2014 animated feature film "Big Hero 6," in which Baymax, an inflatable, marshmallow-like robot, serves as a personal healthcare companion.

What it does

BayHax is a handy tool for parents to look after their child's emotional health. Inside the BayHax teddy bear is a Raspberry Pi, a Pi Cam, a speaker, and three emoticon buttons that represent happiness, sadness, and anger. Every hour, the teddy bear will ask “How are you feeling?” as a pre-recorded message through the speaker and will prompt the child to press one of the three buttons. The child may also press the buttons at any time, unprompted.

The data from the teddy bear's Raspberry Pi is sent to the BayHax website for the parent to monitor. The website's dashboard displays the child's most recent mood, while the calendar tracks the child's mood changes over a day, week, or month. The analytics page offers additional information on the psychology behind a child's mood, as well as predictions about what time of day the child is most likely to experience a certain emotion. The monitor page gives parents the option to snap a picture to check in on their child. There is also a settings page where the parent can adjust the time interval of the teddy's prompts, the volume of the teddy's speaker, and whether or not to receive data.

By using BayHax, parents can get a clear picture of their child’s overall emotional health and become better equipped to handle their child’s mood swings. BayHax can also serve as a therapeutic friend for children to confide in.

How we built it

The first step in building BayHax was to find a teddy bear that we could use to house the BayHax system. We then opened the back of the teddy bear and replaced one eye with a camera. To detect when a child presses the bear, we soldered limit switches to wires connected to the General Purpose Input/Output (GPIO) pins of a Raspberry Pi 4. To monitor the child, we installed a Raspberry Pi Camera rev1.3, as well as a standard speaker connected to the RasPi's 3.5mm jack to produce audio.
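The switch-to-mood mapping can be sketched as follows. This is a minimal illustration, not the project's actual code: the BCM pin numbers (17, 27, 22) are assumptions, since the write-up doesn't record the real wiring, and the gpiozero usage shown in the comment is one common way to register GPIO callbacks.

```python
# Sketch of the limit-switch handling. Pin numbers are illustrative
# assumptions, not the project's documented wiring.
MOOD_PINS = {17: "happy", 27: "sad", 22: "angry"}  # BCM pin -> mood

def mood_for_pin(pin):
    """Translate a GPIO pin number into the mood its switch represents."""
    try:
        return MOOD_PINS[pin]
    except KeyError:
        raise ValueError(f"no mood switch wired to pin {pin}")

# On the Pi itself, a library such as gpiozero can call back into this
# mapping when a switch closes, e.g.:
#   from gpiozero import Button
#   for pin in MOOD_PINS:
#       Button(pin, pull_up=True).when_pressed = (
#           lambda p=pin: handle_press(mood_for_pin(p)))
```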

For the hardware development, there were four major components: the camera, the speaker, the buttons, and communication with the webapp. The camera proved to be the most difficult, requiring us to hunt through many forums and lots of documentation to find the specific drivers to install and libraries to use. The speaker implementation was relatively straightforward, using PyGame audio to play pre-recorded voiceover files. Limit switches were each assigned to a distinct mood, allowing the button presses to represent the mood of the child. Each of these buttons was marked with an emoji sticker, showing the child which button to press when they feel a given emotion. Putting all of these together, upon pressing a button, the camera would take a picture and the speaker would play a comforting voiceover line, such as "Oh no. Please don't feel that way. You have me!" when the child is sad, or "Yay! I'm happy for you!" when the child is happy. Finally, the ID of the bear, the mood, the picture, the date, and the time were combined into a fetch request which was sent to the webapp, allowing it to save the data to the MongoDB database and display data directly from the bear.
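The request assembly described above can be sketched like this. The field names and the endpoint URL are assumptions for illustration; the write-up only says the payload contains the bear's ID, mood, picture, date, and time.

```python
from datetime import datetime

API_URL = "https://bayhax.example/api/moods"  # hypothetical endpoint

def build_payload(bear_id, mood, picture_b64, when=None):
    """Assemble the record the bear sends after a button press.
    Field names are assumed for illustration."""
    when = when or datetime.now()
    return {
        "id": bear_id,
        "mood": mood,
        "picture": picture_b64,           # camera snapshot, base64-encoded
        "date": when.strftime("%Y-%m-%d"),
        "time": when.strftime("%H:%M:%S"),
    }

def send_payload(payload):
    """POST the record to the webapp (requires the `requests` package)."""
    import requests
    requests.post(API_URL, json=payload, timeout=5)
```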

We developed the website with HTML, CSS, and JavaScript, using Chart.js to visualize our data and Replit for simultaneous online collaboration. We used MongoDB to store the button responses indicating mood, as well as the images taken as the child indicated their mood. To make the colorful backgrounds on the website, we used Canva.

Challenges we ran into

The Raspberry Pi Camera was hands down the largest struggle we faced in completing the project. Even after following the instructions from Raspberry Pi directly, the camera still refused to initialize. After scouring forums, we found that the camera connects to the RasPi using I2C, directly contradicting the official Raspberry Pi tutorial. Furthermore, we found that the driver used to run the camera had not been set up properly in the software, and we had to install and activate the driver manually from the terminal. After all this, we were able to connect the camera to our Python code and use it to take pictures.

After we registered a new domain, its DNS records could not update quickly enough, so our repl could not be added as a CNAME on the site. Luckily, our team had previously registered another domain name, and we were able to use that instead.

When we played recordings, some would play at different speeds, even though they were all being played through the exact same function. We soon realized this was due to differences in sample rates between the recordings and added code to reinitialize the mixer with the sample rate of the audio file before the audio was played.
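The fix described above can be sketched as follows. Reading the sample rate from the WAV header uses Python's standard `wave` module; the playback half assumes PyGame's mixer, where `pygame.mixer.init(frequency=...)` sets the playback rate. The function names are our own for illustration.

```python
import wave

def get_sample_rate(path):
    """Read the sample rate (frames per second) from a WAV file's header."""
    with wave.open(path, "rb") as wav:
        return wav.getframerate()

def play(path):
    """Reinitialize the mixer at the file's own sample rate, then play it.
    Requires pygame; without the reinit, files recorded at a different
    rate than the mixer play too fast or too slow."""
    import pygame
    pygame.mixer.quit()
    pygame.mixer.init(frequency=get_sample_rate(path))
    pygame.mixer.music.load(path)
    pygame.mixer.music.play()
```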

Accomplishments that we're proud of

We’re especially proud of the soothing and friendly aesthetic of our website. We wanted to create a calming user experience that is both informative and therapeutic. Our design attempts to reflect the special, sweet bond between a parent and child and act as a safe place for one to investigate emotional health.

We’re also proud of the communication between the Raspberry Pi and our website server to visualize the child’s moods over time. The Raspberry Pi embedded in our stuffed animal sends updates to the website just one second after the child pushes a button.

What we learned

On the software side, we learned that grid containers are much easier than floats for laying out graphs and team photos. We also learned how to efficiently use and structure a database, and how to dynamically create graphs and add photos to our monitor page with updates from our database.

On the hardware side, we learned how the Raspberry Pi interfaces with cameras and speakers through Python scripts. We also learned how sample rates work, and how drastically they affect the speed at which audio plays. Connecting the two fronts, we learned how to establish a connection between hardware and software, sending fetch requests from a Raspberry Pi to interface with a NodeJS server and a MongoDB database.

What's next for BayHax

Going forward, we hope to develop the option of adding multiple children to one profile for parents with multiple kids or potentially for pre-schools. We’re excited to add more voice responses and prompts for a greater variety of interactions. We also hope to have children and parents test BayHax to gain insight into how we can improve the user experience.
