🌱 EcoBuddy
💡 Inspiration
We were inspired by a simple problem: people want to be environmentally sustainable, but there’s no easy, engaging way to track and feel rewarded for those actions in daily life. At a campus like UMD, students make small decisions every day—walking instead of driving, recycling, conserving energy—but those actions often go unnoticed. We wanted to turn sustainability into something visible, interactive, and motivating—like a game. By combining AI, real-world behavior tracking, and a virtual companion, EcoBuddy was born.
⚙️ What It Does
EcoBuddy is an AI-powered sustainability companion that:
- Lets users log eco-friendly actions through voice or text
- Uses image verification to confirm real-world actions
- Assigns a sustainability score based on user behavior
- Evolves a virtual “Terp Turtle” buddy as users level up
- Rewards users with real incentives like dining credits and promo codes
The more sustainable you are, the more your buddy grows—and the more rewards you unlock.
🛠️ How We Integrated TerpAI
One of the most technically creative aspects of EcoBuddy was how we leveraged TerpAI — UMD's own AI platform — as the intelligence backbone of our app.

**Reverse Engineering the TerpAI API.** Rather than building a generic LLM integration, we inspected TerpAI's internal network traffic and identified its private API endpoint structure at https://terpai.umd.edu/api/internal/userConversations/{conversationId}/segments. By studying the request/response cycle through browser dev tools, we were able to construct authenticated POST requests that let us programmatically send prompts to TerpAI and stream back responses — essentially turning UMD's own AI infrastructure into our sustainability reasoning engine.

**Powering the Sustainability Score.** We fed user-submitted actions and image descriptions directly into TerpAI using a structured prompt spec (the EcoBuddy Agent Specification), which instructed the model to evaluate sustainability on a 0–100 scale across categories like transportation, waste, energy, food, and water. TerpAI's response became the source of truth for every score rendered in the UI — grounding our gamification system in real, contextual reasoning rather than hardcoded rules.

**Image Detection Pipeline.** When users upload a photo, we pass the image context to TerpAI along with our scoring rubric. The model analyzes the visual content, identifies sustainability-relevant behaviors or objects, and returns a structured interpretation with a score and an improvement tip — all formatted to EcoBuddy's response spec. This gave us AI-powered image understanding without needing a separate vision model.

**Real-Time Quests via Today@UMD.** To keep quests dynamic and campus-relevant, we pulled live event data from https://today.umd.edu/articles-list and piped it into TerpAI as context. This allowed the agent to generate quests tied to actual UMD happenings — sustainability events, dining specials, or campus initiatives — rather than static challenges.
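As a rough illustration, a replayed request could be assembled like this. The URL pattern is the one we observed in the browser; the header scheme and payload field names below are hypothetical placeholders, not TerpAI's real internal schema:

```python
import json

# Endpoint structure observed in TerpAI's network traffic.
TERPAI_SEGMENTS_URL = (
    "https://terpai.umd.edu/api/internal/userConversations/{conversation_id}/segments"
)

def build_terpai_request(conversation_id: str, prompt: str, session_token: str) -> dict:
    """Assemble a POST request against the observed endpoint.

    The auth header and JSON field names are illustrative guesses,
    not the documented (or reverse-engineered) internal contract.
    """
    return {
        "url": TERPAI_SEGMENTS_URL.format(conversation_id=conversation_id),
        "headers": {
            "Authorization": f"Bearer {session_token}",  # hypothetical auth scheme
            "Content-Type": "application/json",
        },
        "body": json.dumps({"prompt": prompt}),  # hypothetical field name
    }

req = build_terpai_request("abc123", "Score this action: biked to class", "TOKEN")
```

In practice we streamed the response segments back and parsed them into the score and tip shown in the UI.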
**Location-Aware Suggestions.** We combined the device's geolocation with TerpAI's UMD-context awareness to surface hyper-local recommendations. Whether a student is near McKeldin Library, a campus dining hall, or a water refill station, EcoBuddy used TerpAI to tailor its suggestions to where the user actually was on campus in real time.
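The location matching can be sketched as a nearest-landmark lookup. This is a minimal stand-in assuming a small hardcoded table of approximate campus coordinates; the real app used live geolocation plus the Google Maps API:

```python
import math

# Hypothetical table of campus points of interest (coordinates approximate).
CAMPUS_SPOTS = {
    "McKeldin Library": (38.9860, -76.9452),
    "South Campus Dining Hall": (38.9817, -76.9430),
}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_spot(lat, lon):
    """Return the closest known campus landmark to the device's position."""
    return min(CAMPUS_SPOTS, key=lambda name: haversine_km(lat, lon, *CAMPUS_SPOTS[name]))
```

The name of the nearest spot is then handed to TerpAI as context, so the suggestion ("refill your bottle before you leave the library") matches where the student actually is.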
🛠️ How We Built It
We built EcoBuddy as a full-stack, multimodal AI system:
- Frontend: Voice + text interface for natural user input
- Backend: Manages scoring, leveling, and orchestrates AI workflows
- APIs & AI Integration:
- Used Google Gemini for natural language understanding and intent extraction
- Integrated the Google Maps API to add location-based context (e.g., nearby sustainable actions and points of interest)
- Leveraged computer vision models from Hugging Face for image classification and verification
- Cross-validated text + image outputs to ensure authenticity
- Data Layer: Used web scraping to enrich environmental insights (e.g., weather, tips)
- System Design: Built with asynchronous processing and idempotent pipelines for scalability
🚧 Challenges We Ran Into
- Verification accuracy: Ensuring that user-submitted images matched their described actions required careful tuning of confidence thresholds and cross-validation between text and vision models
- Time constraints: Building a full multimodal AI pipeline (voice → text → image verification → scoring) within a hackathon timeframe
- API limitations: We didn’t have direct access to a TerpAI API, so we had to design our own TerpAI-like agent layer
- Reverse engineering workflows: By analyzing how TerpAI structured prompts and responses in the browser, we identified key attributes (prompt formatting, context structure, response patterns) and replicated them using the Google Gemini API
- System integration: Coordinating multiple APIs (LLM, vision, maps) while keeping latency low and the user experience smooth
- Reward feasibility: Designing a system that could realistically integrate with campus incentives like dining credits
- Model generalization: Our initial single decision tree performed well on training data but failed to generalize to new inputs. We pivoted to LightGBM, which improved consistency and accuracy on unseen data during live testing
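The text/image cross-validation mentioned above reduces to a thresholding rule along these lines. The threshold values shown are illustrative defaults, not the tuned production values:

```python
def verify_action(text_conf: float, image_conf: float,
                  min_each: float = 0.6, min_combined: float = 0.7) -> bool:
    """Accept an action only when both modalities agree strongly enough.

    text_conf:  confidence that the described action is sustainable
    image_conf: confidence that the photo actually shows that action
    """
    combined = (text_conf + image_conf) / 2
    return text_conf >= min_each and image_conf >= min_each and combined >= min_combined

verify_action(0.9, 0.8)  # both modalities confident -> accepted
verify_action(0.9, 0.4)  # image model disagrees with the text -> rejected
```

Requiring both a per-modality floor and a combined floor is what makes a confident text claim insufficient on its own: the photo has to back it up.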
🧠 What We Learned
- How to rapidly prototype AI-powered applications under pressure
- The importance of user experience in adoption, even for technical products
- How to combine multiple AI systems (LLMs + vision) into one workflow
- The challenges of real-world validation vs. theoretical models
- That gamification + incentives can significantly drive behavior change
🚀 What's Next for EcoBuddy
- Expand beyond UMD by creating school-specific versions of EcoBuddy, each with its own custom branding, mascot, and logo to make the experience feel exclusive and personalized
- Partner with universities to integrate real rewards systems (e.g., dining credits, campus promos)
- Improve AI verification accuracy using more advanced multimodal models
- Add location + weather-based recommendations using APIs to suggest sustainable actions in real time
- Introduce social features like leaderboards, team challenges, and campus-wide competitions
Ultimately, we want EcoBuddy to become a platform that turns sustainability into a daily habit—not a chore 🌍🐢
Built With
- Languages: JavaScript, Python
- Frontend: React
- Backend: Node.js / Express
- AI & ML: Google Gemini (natural language understanding), Hugging Face (image classification models)
- ML libraries: PyTorch, scikit-learn (model training & evaluation)
- Data processing & scraping: BeautifulSoup, Requests (web scraping & data ingestion)
- Visualization: Matplotlib / graphing libraries for analyzing model performance
- APIs: Google Maps API (location-based insights)
- System design: asynchronous processing, idempotent pipelines, modular architecture