Inspiration
We were inspired to create this project by our desire to increase plant growth efficiency around the world. This helps not only gardening enthusiasts but also the agricultural industry, by increasing crop output and reducing the environmental harm of farming. To achieve this, we felt we needed to improve plant maintenance: using too much fertilizer can harm the environment, and not knowing the symptoms of plant diseases and deficiencies stunts plant growth. Maintaining good plant health is the most crucial part of the growing process, but also the most tedious, so we set out to simplify it.
What it does
With our app, we make plant maintenance easier, reduce the amount of fertilizer used by providing reliable maintenance information, and encourage plant growth by making growth information easy to access. The app takes an image of a plant leaf and uses computer vision to recognize potential diseases and deficiencies. From this, we determine potential remedies for the specific disease or deficiency, and display them so the user can apply them to aid their plant. As of now, our computer vision model covers only corn leaf diseases and nutrient deficiencies, but we plan to eventually build models for more plants.
How we built it
The app itself is built in Swift, using the Alamofire library to upload images from the app. Leaf images, either taken with the camera or imported from one's photo library, are sent to our trained Azure Computer Vision model with a POST call, which returns a JSON response. We parse the JSON and display the probability of disease or deficiency for the leaf image. The initial models were trained on image datasets from Kaggle. Our subsequent ML model was improved with web-scraped images gathered through Azure's Bing Search API. These scraped datasets were then made suitable for ML training through augmentation (expanding small datasets with transformed copies of existing images).
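The parsing step can be sketched with Swift's Codable. The JSON below is a minimal, illustrative sample in the tagName/probability shape that Azure's prediction endpoint returns; the tag names and probabilities here are made up for the example, not our actual model's labels, and the real app would decode the `Data` received from the Alamofire response rather than a hard-coded string.

```swift
import Foundation

// Illustrative sample of the prediction JSON (assumed shape:
// a "predictions" array of tagName/probability pairs).
let sampleResponse = """
{
  "predictions": [
    { "tagName": "Northern Leaf Blight", "probability": 0.87 },
    { "tagName": "Nitrogen Deficiency", "probability": 0.09 },
    { "tagName": "Healthy", "probability": 0.04 }
  ]
}
"""

// Codable types mirroring the fields we care about.
struct Prediction: Codable {
    let tagName: String
    let probability: Double
}

struct PredictionResponse: Codable {
    let predictions: [Prediction]
}

// Decode the response. `try!` is acceptable here because the
// sample string is known to be valid; the app itself would
// handle decoding errors gracefully.
let data = Data(sampleResponse.utf8)
let response = try! JSONDecoder().decode(PredictionResponse.self, from: data)

// Pick the highest-probability tag to show to the user.
if let top = response.predictions.max(by: { $0.probability < $1.probability }) {
    print("\(top.tagName): \(Int(top.probability * 100))%")
}
```

Decoding into typed structs keeps the display logic simple: the UI only ever sees a sorted list of labels and probabilities.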
Challenges we ran into
A big challenge we faced was figuring out how to send POST calls to the Azure Computer Vision model and how to web-scrape images of deficient corn/maize leaves. Another challenge was learning Swift in 24 hours, as none of us had used Swift before the H4H Hackathon.
Accomplishments that we're proud of
We are proud of the forward progress we made during this hackathon. We started knowing nothing about iOS development and ended the hackathon with a functioning iOS app that uses a computer vision API and web scraping. All of us also gained experience working with real-world datasets: collecting them, manipulating them, and extracting data from them. Ultimately, the fact that we built something cool and useful is what we are most proud of.
What's next for Better Plant
We eventually plan to scale our app by building new models for other plant leaves, diseases, and nutrient deficiencies. Building on that, we would also like to use aerial drone imaging of farm crops to aid farmers in the plant maintenance process. Lastly, we are looking to use the data the app collects to recognize soil conditions and monitor plant growth, as well as to improve our user interface.
Our project blended concepts from the life sciences and computer science, bringing together students from both CS and biochemistry majors. The initial question stemmed from our shared concern for the environment. We intertwined expertise in computer science, environmental science, and biology to incubate a final approach to simplifying plant maintenance.