Inspiration
The idea behind Leaf Labs began with a simple but powerful question: What if diagnosing a crop disease was as easy as taking a photo? Across Africa and many developing regions, smallholder farmers lose up to 40% of their harvests each year to preventable plant infections. Those losses affect not just food supply, but also livelihoods. We wanted to empower farmers through technology, transforming their smartphones into intelligent crop health assistants powered by AI.
What it does
Leaf Labs is an AI-driven web application that detects plant diseases instantly from a single photo. Farmers simply upload or capture an image of a leaf, and the system analyzes it on the spot using a fine-tuned MobileNet model deployed with ONNX Runtime Web for in-browser inference. If the model's top confidence score falls below 0.75 ( \max_c P_{\text{ONNX}}(c|x) < 0.75 ), a Gemini Vision API fallback automatically engages for cross-verification, following a hybrid prediction formula:
$$ \text{Final Prediction} = \arg\max_c \left( \alpha \cdot P_{\text{ONNX}}(c|x) + (1-\alpha) \cdot P_{\text{Gemini}}(c|x) \right) $$
Each diagnosis includes confidence levels, treatment suggestions, and verified agricultural resources, helping farmers act immediately with reliable insights instead of uncertainty.
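The confidence-gated blending above can be sketched as a small TypeScript function. The 0.75 threshold comes from the write-up; the blending weight `ALPHA` and the `fetchGeminiProbs` helper are assumptions for illustration, not confirmed Leaf Labs internals.

```typescript
// Sketch of the confidence-gated hybrid prediction described above.
// CONFIDENCE_THRESHOLD (0.75) is from the write-up; ALPHA is an assumed
// blending weight -- the value Leaf Labs actually uses is not stated.
const CONFIDENCE_THRESHOLD = 0.75;
const ALPHA = 0.6; // assumed weight favoring the on-device ONNX model

interface Prediction {
  classIndex: number;
  confidence: number;
  usedFallback: boolean;
}

function argmax(probs: number[]): number {
  return probs.reduce((best, p, i) => (p > probs[best] ? i : best), 0);
}

// onnxProbs: per-class probabilities from the in-browser MobileNet model.
// fetchGeminiProbs: hypothetical helper that queries the Gemini Vision
// fallback and returns probabilities over the same label set.
async function hybridPredict(
  onnxProbs: number[],
  fetchGeminiProbs: () => Promise<number[]>
): Promise<Prediction> {
  const top = argmax(onnxProbs);
  if (onnxProbs[top] >= CONFIDENCE_THRESHOLD) {
    // The on-device model is confident enough; skip the cloud round-trip.
    return { classIndex: top, confidence: onnxProbs[top], usedFallback: false };
  }
  // Low confidence: blend ONNX and Gemini scores per the formula above,
  // then take the argmax of the combined distribution.
  const geminiProbs = await fetchGeminiProbs();
  const blended = onnxProbs.map((p, i) => ALPHA * p + (1 - ALPHA) * geminiProbs[i]);
  const finalTop = argmax(blended);
  return { classIndex: finalTop, confidence: blended[finalTop], usedFallback: true };
}
```

Gating on the local confidence keeps the common case fully on-device, so the Gemini call only costs latency (and bandwidth) when the browser model is genuinely unsure.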
How we built it
We built Leaf Labs using Next.js 14, Tailwind CSS, and shadcn/ui for a smooth, responsive interface. The AI inference runs in-browser through ONNX Runtime Web (WASM), while fallback vision analysis leverages Gemini Vision API for cloud validation. We used Supabase for authentication, storage, and PostgreSQL data handling, Zustand for state management, and deployed via Vercel and Deno Deploy for scalability. The model was trained using TensorFlow on an optimized subset of the PlantVillage dataset, then exported to ONNX for fast client-side inference.
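Before the in-browser model can run, the captured leaf photo has to be turned into a tensor. A minimal sketch of that client-side preprocessing step, assuming standard MobileNet conventions (224×224 input, ImageNet mean/std normalization, NCHW layout), which are typical defaults and not confirmed details of the Leaf Labs model:

```typescript
// Convert RGBA pixel data (e.g. from a canvas getImageData call) into a
// normalized Float32Array in NCHW layout for ONNX Runtime Web.
// The ImageNet mean/std values below are common MobileNet defaults and
// are assumptions here, not confirmed Leaf Labs settings.
const MEAN = [0.485, 0.456, 0.406];
const STD = [0.229, 0.224, 0.225];

function rgbaToTensor(
  rgba: Uint8ClampedArray,
  width: number,
  height: number
): Float32Array {
  const plane = width * height;
  const out = new Float32Array(3 * plane);
  for (let i = 0; i < plane; i++) {
    for (let c = 0; c < 3; c++) {
      // Scale each channel to [0, 1], then normalize; alpha is dropped.
      out[c * plane + i] = (rgba[i * 4 + c] / 255 - MEAN[c]) / STD[c];
    }
  }
  return out;
}
```

The resulting array can then be wrapped for inference with `new ort.Tensor('float32', data, [1, 3, height, width])` and passed to an ONNX Runtime Web session.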
Challenges we ran into
- Reducing model size for WASM compatibility without sacrificing accuracy
- Dealing with real-world image noise, like poor lighting and low-quality cameras
- Maintaining a balance between speed and prediction confidence across devices
- Implementing a seamless switch between ONNX and Gemini models in real time
- Building a multilingual and offline-first experience for rural areas with limited connectivity
Accomplishments that we're proud of
- Delivering AI disease detection with over 80% accuracy directly in the browser
- Enabling farmers to diagnose and treat infections quickly, improving yields and income
- Creating a system that aligns innovation with sustainability goals
- Proving that AI at the edge can transform agriculture and strengthen food security
What we learned
We learned that AI doesn’t need to be heavy to make an impact. Through model optimization, edge computing, and WebAssembly inference, we proved that advanced AI can run on low-resource devices. We also learned the importance of user empathy — simplifying technical insights into actionable, human-centered guidance for farmers.
What's next for Leaf Labs
We plan to expand Leaf Labs with multilingual voice interfaces, offline diagnosis caching, and large-scale deployment across rural networks. Future updates will also include disease trend mapping, community data sharing, and deeper integration with agricultural advisory systems.
Leaf Labs — Engineering the Future of Food Security. Today.
Built With
- deno-deploy
- edge-functions
- firebase-cloud-messaging
- github
- google-gemini-vision-api
- javascript
- next.js-14
- onnx-runtime-web
- postgresql
- python
- shadcn/ui
- storage
- supabase-auth
- tailwind-css
- typescript
- vercel
- zustand