When cooking, we usually have the recipe open on our smartphone or tablet, because who still owns a traditional cookbook when millions of recipes can be found online? But as great as this variety is, we always struggle when we are in the middle of a recipe and the screen goes black. You either have your hands full of dough, need to stir while adding ingredients, or are mixing beverages. Now you have to decide whether to smudge your clean screen or interrupt the process and wash your hands again. So we decided to tackle this problem and developed kooci, your cooking companion!

What it does

Kooci is a voice assistant: after starting the app, you interact with the system solely through voice commands and gestures. Once you have decided what you would like to cook, for example by saying "Kooci, I want to make a Mojito please!", Kooci will choose a suitable recipe and start guiding you through it. From then on you do not have to say anything to get to the next step. Kooci automatically recognises your gestures, determines when you have finished the current step, and carries on with the next one.

How we built it

The system is controlled by the iOS app, which is connected to a Pebble Time smartwatch and to IBM Watson. The Pebble Time records the accelerations generated by moving your arm and wrist and sends them to the iOS app. There, our algorithm determines the performed gesture and triggers the appropriate voice prompts, which are generated with IBM Watson's Text to Speech service. The voice recognition that enables communication between the user and kooci is implemented with IBM Watson's Speech to Text service.
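The control flow above can be sketched roughly as follows. This is a simplified illustration, not our actual Swift code: Watson Text to Speech is replaced by collecting the spoken lines, and the recipe steps and gesture names are hypothetical examples.

```python
# Hypothetical recipe: each step pairs a spoken instruction with the
# gesture that signals the step is finished.
RECIPE = [
    ("Muddle the mint with sugar and lime juice.", "stir"),
    ("Add rum and shake well.", "shake"),
    ("Pour over crushed ice and top up with soda.", "pour"),
]

def run_recipe(gesture_stream):
    """gesture_stream yields gestures classified from the Pebble's
    accelerometer data; we advance when the expected gesture appears.
    Returns the list of lines that would be spoken (in the real app
    each line is sent to Watson Text to Speech instead)."""
    spoken = []
    for instruction, expected in RECIPE:
        spoken.append(instruction)
        for gesture in gesture_stream:
            if gesture == expected:
                break  # current step finished, move on to the next one
    spoken.append("Enjoy your Mojito!")
    return spoken
```

For example, `run_recipe(iter(["stir", "idle", "shake", "pour"]))` walks through all three steps, ignoring the unrelated "idle" gesture in between.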

Check out our app demo here:

Challenges we ran into

The most challenging problem was recognising the gestures, as we only get accelerometer data from the Pebble watch. We first tried to implement a machine learning algorithm for classifying gestures, but unfortunately we did not have enough suitable data to train a working model. So we settled on the classical approach: looking directly at the values we receive from the watch and building our own handcrafted model. In the end we had a model able to distinguish three different gestures!
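A handcrafted model of this kind can be sketched as a few thresholds over simple statistics of the raw samples. The gesture names, thresholds, and units below are hypothetical illustrations of the approach, not our actual model:

```python
from statistics import mean, pstdev

def classify(samples):
    """samples: list of (x, y, z) accelerometer readings in milli-g,
    e.g. about one second of data from the watch."""
    mags = [(x * x + y * y + z * z) ** 0.5 for x, y, z in samples]
    spread = pstdev(mags)  # how much the overall force fluctuates

    # Vigorous, high-variance motion, e.g. working a cocktail shaker.
    if spread > 400:
        return "shake"
    # Moderate periodic motion, e.g. stirring.
    if spread > 100:
        return "stir"
    # Nearly constant force, but the wrist is tilted so gravity
    # no longer acts mostly along the z axis: pouring.
    avg_z = mean(z for _, _, z in samples)
    if abs(avg_z) < 500:
        return "pour"
    return "idle"
```

The appeal of this approach is that every threshold can be tuned by hand while watching the live sensor values, which is exactly what we could not do with the opaque machine-learning model.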

Accomplishments that we're proud of

We're really proud to have a working system that solves one of our real-life problems! In our opinion, this kind of system can be applied to many other areas based on step-by-step instructions where gestures need to be performed, e.g. educational tutorials.

What we learned

We learned how to use IBM Bluemix and how to access the IBM Watson services through their iOS SDK. Additionally, we worked with the Pebble Time and gained insight into gesture recognition.

What's next for kooci

The first step will be to connect kooci to a remote database, so that we have access to a wider variety of recipes, and to implement more gestures. We might also adapt it to other use cases, for example a tutorial on how to change a tire. We are excited about this future!
