Inspiration

We’re living in an unprecedented time. With more and more research showing that “food is medicine” and that eating better has real health benefits, there has been a huge push lately to be more conscious about what we eat. With disruptions in the food supply chain due to COVID-19, however, it’s sometimes hard to find fresh (or even minimally processed) food in grocery stores. Many people also don’t want to go shopping every few days for fresh ingredients if they can afford to stay home and self-isolate. At the same time, with schools and businesses closed or at reduced capacity for the foreseeable future, a lot more people are exploring parks and connecting with nature.

We wanted to combine these two ideas to let people forage for food like our ancestors did, safely and efficiently. The catch is that most people don’t know which plants out in nature are healthy, or what purposes they can be used for. While it is pretty straightforward to pick a plant you know (say, a willow tree) and look up its health and medicinal properties (the bark can be brewed into a tea with properties similar to aspirin), what do you do if you don’t know the name of the plant you’re looking at? That’s why we created Backyard Buffet, a one-stop app not only for plant identification, but also for a plant’s health and medicinal benefits, along with recipes to incorporate more everyday plants into your diet.

What it does

Backyard Buffet lets you know whether or not plants (or parts of plants) are edible. It gives information on how to prepare the edible ones for eating, as well as their possible health and nutritional benefits. For inedible plants, Backyard Buffet lets you know whether they’re toxic or simply indigestible.

While we’ve built a wide variety of options to accomplish that goal, they all follow the same basic formula:

  1. You take or upload a picture of a plant you want to analyze.
  2. A TensorFlow machine learning model (trained in Google AutoML Vision) identifies the type of plant in the picture (see the sketch just after this list).
  3. All of the results are displayed in a standardized, easy-to-understand dashboard. These results include basic information, like the name of the plant and whether it is safe to eat, as well as detailed information about nutrition, health benefits, side effects, and recipes.
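
For the web app, the identification step boils down to just a few lines once the model has been exported for TensorFlow.js. Here’s a minimal sketch, assuming the @tensorflow/tfjs-automl loader; the model path and label names are illustrative, not our exact ones:

```typescript
// Minimal sketch of the identification step in the browser, assuming the
// model was exported from AutoML Vision for TensorFlow.js. The model path
// and label names are illustrative.
import * as automl from '@tensorflow/tfjs-automl';

async function identifyPlant(img: HTMLImageElement) {
  // model.json (plus its weight shards) comes from the AutoML Vision export.
  const model = await automl.loadImageClassification('/model/model.json');

  // classify() returns one {label, prob} entry per trained class,
  // e.g. "dandelion" or "none_of_the_above".
  const predictions = await model.classify(img);

  // Surface the highest-probability label to the dashboard.
  predictions.sort((a, b) => b.prob - a.prob);
  return predictions[0];
}
```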

When it comes to using this app, we know that different users have different needs. Therefore, we built a wide range of ways to get plant photos into TensorFlow, to hopefully fit every use case. These include:

  1. From a computer or 2-in-1 web browser, you can hold a plant up to the webcam and take a picture. This is perfect for pulling up recipes on a larger screen for plants you have already picked while doing meal prep.
  2. From a computer or 2-in-1 web browser, you can upload a picture from your hard drive. This is perfect for looking up information on plants you’ve found before heading back out to harvest them.
  3. From a mobile device (iOS or Android) web browser, you can take a picture of a plant directly from the app. This is perfect for exploring around your house or campsite.
  4. From a mobile device (iOS or Android) web browser, you can take a picture from the native camera and send it directly to Backyard Buffet. This is perfect for taking a quick picture when you want to use some of the advanced camera features, like the flash or zoom.
  5. From a mobile device (iOS or Android) web browser, you can upload pictures from your camera roll. This is perfect for learning about plants you saw on a nature walk when you were not connected to the internet.
  6. From a mobile device (iOS or Android), you can download our Flutter app and take pictures and get plant information without ever needing to connect to the internet. This is perfect for long hikes or backpacking trips with sparse to no internet.

Overall, Backyard Buffet takes the idea of simply identifying a plant from a picture to the next level by providing input methods for any use case, along with more detailed information about each plant than a typical identification app.

How we built it

Underlying the entire system is a TensorFlow machine learning model that we trained in Google AutoML Vision. To do this, we followed these steps:

  1. We went exploring in our backyard and local parks, and identified as many different plants as we could.
  2. We thoroughly researched each plant and identified those that we thought would be good candidates to train the model on. We tried to pick a mix of edible and toxic plants, as well as common and rare plants, to show the scope of what Backyard Buffet could do.
  3. We went back out and collected more than 100 photos of each of our 4 candidate plants (hostas, dandelions, English ivy, and bloodroot). We also collected photos of a mix of other plants from the internet for a “None of the above” category.
  4. We wrote a number of custom Python functions to handle data processing and preparation before uploading all of the photos (now standardly named and labeled) to Google Cloud Storage (see the sketch after this list).
  5. We imported our Google Cloud Storage bucket into Google AutoML Vision and trained our TensorFlow model.
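
Our real data-prep scripts were written in Python, but the core idea fits in a few lines. Here’s an equivalent sketch (in TypeScript, to match the other snippets in this writeup) of the renaming step and the CSV index that AutoML Vision imports; the bucket name and folder layout are made up:

```typescript
// Sketch of the data-prep step: standardize photo names per label and emit
// the CSV index that AutoML Vision imports. The bucket name and folder
// layout here are hypothetical.
import * as fs from 'fs';
import * as path from 'path';

const BUCKET = 'gs://backyard-buffet-training'; // hypothetical bucket
const LABELS = ['hosta', 'dandelion', 'english_ivy', 'bloodroot', 'none_of_the_above'];

const rows: string[] = [];
for (const label of LABELS) {
  const dir = path.join('photos', label);
  fs.readdirSync(dir).forEach((file, i) => {
    // Rename each photo to a standard <label>_<index> scheme.
    const standardName = `${label}_${String(i).padStart(4, '0')}${path.extname(file)}`;
    fs.renameSync(path.join(dir, file), path.join(dir, standardName));
    // AutoML Vision accepts import rows of the form: gs://path/to/image.jpg,label
    rows.push(`${BUCKET}/${label}/${standardName},${label}`);
  });
}
fs.writeFileSync('labels.csv', rows.join('\n'));
```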

Next, we jumped into building a React app styled with Material UI to serve as our initial portal. We selected this stack for the following reasons:

  • It runs as a single-page app, so once the app is loaded, it no longer requires an internet connection to run.
  • We can quickly and easily support almost every internet-connected device, letting us reach as many users as possible, as fast as we can. Material UI was also designed with mobile responsiveness in mind, so we know that it will work on any size of device a user has.
  • We can distribute a demo with no software downloads required. In particular we chose to use Firebase Hosting to ensure availability and scalability of our initial demo, which is available for you to try at https://backyardhacks2020-gcp.firebaseapp.com/.

Finally, we created a Flutter app to entirely remove the need for an internet connection to access Backyard Buffet. This means it can be used in the middle of nowhere, where Wi-Fi or cell service may be sparse or non-existent.

Tech stack diagram

Challenges we ran into

Interestingly enough, this ended up being one of our most straightforward hackathon projects. This was our first time training our own machine learning model, so it certainly took some time to work through the Google AutoML documentation. However, once we understood how everything worked, it wasn’t really all that challenging.

The same was true of the Flutter app. This was only the second time we’d ever built a mobile app, so once again there was a learning curve, but it was mostly a matter of going through the documentation.

Surprisingly, what ended up being the hardest part was connecting to the webcam for the web app. While we’ve built video systems from scratch before, this time we were able to start with an npm module to jumpstart the project, so we expected the video implementation to be the simplest aspect. However, this was our first time supporting more than just desktop browsers, and it was an… “interesting” experience. Rendering the video was trivial, but turning on the right camera with the right-sized video was really difficult. While getUserMedia is the standard way to connect to A/V input devices, the constraints it accepts vary greatly from browser to browser and version to version. Ultimately, we ended up writing a complex constraint tree that adjusts the parameters based on the error from the last failed attempt, until the camera finally turns on.
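
In simplified form, the idea looks something like the sketch below; the exact constraint sets here are illustrative, not our full production tree:

```typescript
// Try the most specific constraints first, and on each failure fall back to
// a looser set until the camera turns on. These sets are illustrative.
const constraintTree: MediaStreamConstraints[] = [
  { video: { facingMode: { exact: 'environment' }, width: { ideal: 1280 }, height: { ideal: 720 } } },
  { video: { facingMode: 'environment' } }, // a hint rather than a hard requirement
  { video: true },                          // last resort: any available camera
];

async function openCamera(): Promise<MediaStream> {
  let lastError: unknown;
  for (const constraints of constraintTree) {
    try {
      return await navigator.mediaDevices.getUserMedia(constraints);
    } catch (err) {
      // e.g. an OverconstrainedError from a browser that rejects this set;
      // note the failure and try the next, looser set.
      lastError = err;
    }
  }
  throw lastError;
}
```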

Accomplishments that we're proud of

We accomplished every objective we set out to, with (almost) enough time to film our demo video before the sun went down and the rain started. We were also able to solve most of our cross-platform support issues, so we’re relatively confident that our app will work in pretty much any environment a user has.

What we learned

As mentioned earlier, this was our first time creating our own machine learning model, so we learned a lot more about how machine learning works behind the scenes (instead of just calling someone else’s model or API).

We also learned a lot more about app development (using Flutter in particular), because that is not something we have had much experience with before.

Finally, we learned a ton about how to build cross-platform support into an application. In particular, we learned how different browsers handle images and video, and how to build debugging into the app itself so that it can adapt to its environment.

What's next for Backyard Buffet

Because we didn’t know how long it would take to train our image-identification model in Google AutoML Vision, we started small so that we wouldn’t bottleneck the rest of development. Now that we have the basics in place, given more time (and a bigger budget), we’d like to expand our model to identify more types of plants.

We’d also like to work on expanding cross-platform support. While we think the camera video stream works pretty much everywhere, our primary development environment was a computer. Consequently, it wasn’t until some of the final stages of development that we discovered that phones with ultra-high-resolution cameras can sometimes overwhelm TensorFlow. In the future, we’d therefore like to add smarter image resizing and sampling, so that the high-resolution photos that tend to be uploaded directly from a phone’s native camera are shrunk before they reach the model, improving performance and reliability.
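
One straightforward approach we’re considering is downscaling oversized photos on an off-screen canvas in the browser before inference; the 1024-pixel cap below is just an example:

```typescript
// Downscale an oversized photo on an off-screen canvas before it reaches the
// model. The 1024-pixel cap is an arbitrary example value.
function downscale(img: HTMLImageElement, maxDim = 1024): HTMLCanvasElement {
  const scale = Math.min(1, maxDim / Math.max(img.width, img.height));
  const canvas = document.createElement('canvas');
  canvas.width = Math.round(img.width * scale);
  canvas.height = Math.round(img.height * scale);
  // drawImage resamples the photo down to the target dimensions.
  canvas.getContext('2d')!.drawImage(img, 0, 0, canvas.width, canvas.height);
  return canvas; // TensorFlow.js models accept canvases as well as <img> elements
}
```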

Try it Yourself

Want to try out Backyard Buffet for yourself? Check out our demo at https://backyardhacks2020-gcp.firebaseapp.com/. Note that for best results, we recommend Google Chrome on either a computer or an Android device. iOS and other browsers are supported, but we’ve found that video compression and permissions can sometimes vary between browsers and impact performance once the image goes into TensorFlow.
