Inspiration

Back when the lockdown first started, my father was stuck at home instead of going to work, and to pass the extra time he tried to learn how to cook, since it had always been one of his weaker points. He struggled with some of the basic skills involved in cooking, such as telling whether the onions he had fried were 'brown' enough, whether the food needed to boil a little longer, and whether the pasta was properly 'al dente'.

At the same time, several of my friends were stuck in their student apartments during the first few months of the lockdown. They had always relied on eating out, but the pandemic (and their parents' added concern) led them to learn to cook as well. Yet again, they had a hard time cooking things the right way, and the often subpar resolution over video calls meant that by the time their mothers could tell them the burner needed to be turned off, the food had already been overcooked.

This struck me as a problem that could be solved with AI and computer vision: just take a picture of the food, and the computer should be able to give a fair estimate of whether it is cooked well enough. Hence the idea, which I have implemented in a bare-bones manner for the simpler problem of frying onions.

What it does

Given a picture of onions being fried, it determines whether or not the onions have been properly fried, and reports a confidence value for its classification as a percentage.

How we built it

I used Keras, along with some utility functions from the PyImageSearch website, to train a convolutional neural network on a tiny dataset of just 286 images of onions (what can I say, it's hard to find images of fried onions on Google Images!).
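For the curious, a minimal sketch of the kind of small binary-classifier CNN that fits this setup is shown below. The exact architecture, input size, and layer choices here are my assumptions for illustration, not the project's actual network:

```python
# Sketch of a small Keras CNN for a fried/not-fried binary classifier,
# assuming 64x64 RGB inputs. Architecture details are illustrative only.
from tensorflow.keras import layers, models

def build_model(input_shape=(64, 64, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),  # dropout helps a little against overfitting on ~286 images
        layers.Dense(1, activation="sigmoid"),  # probability that the onions are done
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

With a dataset this small, heavy regularization (dropout, data augmentation) matters far more than network depth.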

Finally, upon providing a picture of onions in a frying pan, the network predicts whether or not the onions in the image are properly fried.
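Turning the network's sigmoid output into the reported label and confidence percentage can be done with a small helper along these lines (the function name and 0.5 threshold are my own illustration):

```python
def describe(prob_done: float) -> str:
    """Turn a sigmoid output in [0, 1] into a label plus confidence."""
    label = "properly fried" if prob_done >= 0.5 else "not done yet"
    # Confidence is the distance from the 50% decision boundary, as a percentage.
    confidence = max(prob_done, 1.0 - prob_done) * 100
    return f"{label} ({confidence:.0f}% confident)"
```

For example, a sigmoid output of 0.9 would be reported as "properly fried (90% confident)".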

Challenges we ran into

As mentioned above, it was hard to find a suitable dataset of fried/raw onions in a frying pan, and many of the images were not particularly clean or clear. As a result, the network's performance is certainly subpar, and it may well have overfit to the dataset.

Accomplishments that we're proud of

The product I was able to make seems like a step in the right direction, since it shows that such an application can be fleshed out and developed further. More kinds of evaluation could be built on top of the existing neural net. For example, if you put some cookies in the oven to bake, you could just snap a picture to find out whether they need to bake longer, instead of trying to reach someone who knows about these things.

I am also proud of the surprisingly good performance the network managed on the tiny dataset we procured. This adds substance to the claim that such an application has real potential to be fleshed out into a 'Cooking Assistant' of sorts!

What we learned

I learned new things about computer vision, image scraping, dataset preparation and CNN architecture through this project.

What's next for Is It Done Yet?

Next up, I would like to create a mobile application that uses this neural network. I would also like to extend it beyond fried onions to other cooking-related activities, such as those mentioned previously (baking, boiling pasta). Eventually, the app could include step-by-step tutorials and serve as a 'Duolingo for cooking', helping newcomers pick up cooking more easily, even without help from other people!

Built With

keras, python
