We were thinking about issues that concern us, but where individuals can work to make a change. With 53 percent of pets in the United States being overweight and far more being malnourished worldwide, we decided to help pet owners keep their dear companions healthy. However, most of us are not trained veterinarians, so assessing a pet's health from its weight and appearance is not an easy task. Our hack aims to streamline this process, combining expert advice from veterinarians, machine learning, and a Raspberry Pi setup to help pet owners maintain their pets' health.

What it does

Our application is designed to make monitoring and feeding your pets much easier. First, the application can classify a pet's level of health from a picture: the trained model assigns a dog a body condition score, which ranges from 1 (severely malnourished) to 9 (severely obese). The application also connects over Wi-Fi to a Raspberry Pi, which takes photos with its camera to assess the pet's condition and then dispenses however much food is needed to maintain a category 5 (normal) physique. Finally, the application includes a forum where veterinarians can post advice on treating other illnesses pets may have. Overall, our app aims to prevent illness among pets and help them maintain a healthy body condition score.
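As a rough illustration of the dispensing step, the portion size can be scaled by how far the predicted body condition score is from the ideal score of 5. The function name, baseline portion, and 10-percent-per-point adjustment below are illustrative assumptions, not our actual calibration:

```python
# Hypothetical sketch of the dispenser logic: scale a baseline portion
# by how far the predicted body condition score (BCS) is from the ideal
# score of 5. All names and factors here are illustrative.

def portion_grams(bcs: int, baseline_grams: float = 100.0) -> float:
    """Return a food portion adjusted toward a target BCS of 5.

    BCS ranges from 1 (severely malnourished) to 9 (severely obese).
    Each point below 5 adds 10% to the baseline portion; each point
    above 5 subtracts 10%.
    """
    if not 1 <= bcs <= 9:
        raise ValueError("BCS must be between 1 and 9")
    adjustment = (5 - bcs) * 0.10  # +40% at BCS 1, -40% at BCS 9
    return baseline_grams * (1 + adjustment)
```

An underweight dog (low BCS) thus receives a larger portion and an overweight dog a smaller one, nudging the animal back toward a normal physique over repeated feedings.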

How we built it

We gathered images of malnourished, healthy, and obese dogs from online sources. We used ML Kit on Firebase to train a model that classifies dogs by body condition score. We then brought this model into Android Studio and built a simple interface to demonstrate its effectiveness as a minimum viable product.
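The dataset step can be sketched as follows. Firebase's AutoML Vision Edge dataset import accepts a CSV of Cloud Storage image paths and labels, so a folder-per-label image collection can be turned into that manifest with a short script. The bucket name and folder layout here are assumptions for illustration, not our actual project layout:

```python
# Sketch: build a "gs://bucket/path,label" CSV manifest from a local
# directory that has one subfolder per label (e.g. root/obese/dog1.jpg).
# The bucket name and layout are hypothetical.

import csv
import os

def build_manifest(image_root: str, bucket: str, out_csv: str) -> int:
    """Write one (gs:// path, label) row per image; return the row count."""
    rows = 0
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        for label in sorted(os.listdir(image_root)):
            label_dir = os.path.join(image_root, label)
            if not os.path.isdir(label_dir):
                continue  # skip stray files at the top level
            for name in sorted(os.listdir(label_dir)):
                writer.writerow([f"gs://{bucket}/{label}/{name}", label])
                rows += 1
    return rows
```

After uploading the images and this CSV to Cloud Storage, the manifest can be imported into a Firebase AutoML dataset for training.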

Challenges we ran into

Integrating the front end with our ML model was a challenge; to that end, we had to learn how to use Android Studio.

We also had trouble obtaining enough data to train the model, so we had to assemble our own dataset.

Since we were new to Android Studio, building a decent-looking front end was a challenge.

Accomplishments that we're proud of

We are proud of our ability to pivot quickly and adapt to changes in development: issues in the later stages required us to diverge on ideas, converge on a feasible product, and pick up new development skills and software on the fly.

We are also proud of our flexibility, being able to work on multiple aspects of the design simultaneously and cohesively while staying available to help one another when issues arose.

What we learned

From a non-technical standpoint, we learned how to divide a project into smaller chunks and work on them productively as a team.

From a technical standpoint, we learned how to use Firebase, Android Studio, ML Kit, Kotlin, and TensorFlow to create a working machine learning model and Android application that identifies levels of animal physical health. We also learned how to pick up and develop machine learning models quickly, adapting to ML Kit and Firebase over the course of the hackathon. Finally, we became more familiar with backend development using Google APIs and Firebase.

What's next for GreenThumbs

More functionality! We want to expand the mobile app to include the components discussed in our mockups, introducing expert advice and deeper hardware integration. We would also add detection of more nuanced illnesses, which the compartmental nature of our model makes straightforward.

A stronger ML model! The current model was trained only on dogs. Training on additional animals and various illnesses would let us classify such issues with greater specificity and breadth.

A nicer front end! Since we were new to Android Studio, we weren't always sure how to achieve the look we wanted. With more practice and patience, the front end could look as polished as our mockups, with all the functionality we envisioned.

What else we can do

Our model maps images to discrete categories, so the underlying algorithm can be used to classify any set of objects; it is not limited to dogs.

Built With

Firebase, Android Studio, ML Kit, Kotlin, TensorFlow, Raspberry Pi