We originally set out to hack a bread maker and give it a mind and a voice, pseudo-consciousness if you will. Unfortunately, we fried the poor thing midday on Day 2 of the hackathon and had to scramble for a new appliance. We ran to Walmart and found our holy grail: a blender. We can now make piña coladas and have a quick conversation with our cool new appliance.

What it does

The web app takes in various blender voice commands. These commands are executed through the Raspberry Pi, which runs the blender for variable amounts of time. The blender is also semi-conversational, supporting questions, jokes, etc.

How we built it

The front end uses the p5.js speech recognition library to take user input via speech. The transcribed string is then pushed to a Firebase database for the backend to process. When a text response is written back to the database, the front end detects the change in value and speaks the result aloud. We used HTML/CSS for the styling, but ideally we'd like to move it over to React after this.

The backend consists of a Raspberry Pi running a continuous Python script that listens for changes in the Firebase database and processes user input commands. We use a word bank of keywords relevant to blender commands to trigger both a text response and a blender action.
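The word-bank matching can be sketched roughly like the snippet below. The keyword table, response strings, and run times here are our own illustration rather than the exact hackathon code, and in the real script this function would sit inside the Firebase change listener, with the returned run time driving the relay that powers the blender motor.

```python
# Illustrative word bank: keyword -> (text response, blender run time in seconds).
# These entries are made up for the example; the real bank had more commands.
WORD_BANK = {
    "blend": ("Blending away!", 10),
    "smoothie": ("One smoothie coming up.", 20),
    "stop": ("Stopping the blender.", 0),
}

def process_command(transcript):
    """Scan the transcribed speech for known keywords and return
    (text_response, run_seconds); fall back to a chat-style reply
    (with no motor action) when nothing matches."""
    for word in transcript.lower().split():
        if word in WORD_BANK:
            return WORD_BANK[word]
    return ("Sorry, I didn't catch a blender command.", None)
```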

Challenges we ran into

Well, as stated earlier, we were originally hacking a bread maker (RIP, my loafy friend :c) until we shorted its motherboard while trying to complete the buttons' circuits.

Accomplishments that we're proud of

Getting this whole thing to work.

What we learned

Voice recognition and language processing, using relays to control hardware, and how to use a breadboard.

What's next for Smart Blender

More intelligent voice commands and a pulse option for the blender.
