## 🌍 Inspiration

In today's world, social impact and volunteer work are often hard to verify. NGOs struggle with fake or exaggerated claims, and there is no reliable way to prove that an activity actually happened. We were inspired by a simple idea: "If financial transactions can be verified, why not social impact?" This led us to build ImpactInk, a system that brings trust, transparency, and proof to real-world impact using AI.

## ⚙️ What it does

ImpactInk is an AI-powered platform that verifies whether a claimed volunteer activity is genuine. It:

- 📸 Analyzes uploaded images of volunteer work
- 🧠 Detects objects and actions (e.g., cleaning, teaching, distributing food)
- 📝 Generates contextual descriptions of the activity
- 🔁 Detects duplicate or reused images
- 🤖 Uses AI reasoning to validate the claim
- 📊 Produces an Impact Authenticity Score

## 🛠️ How we built it

We built ImpactInk using a multi-model AI architecture:

### 🔹 Vision Layer (Computer Vision)

- YOLOv8 → detects objects and activities
- BLIP → generates image captions
- CLIP → measures similarity between the image and the claim

### 🔹 Reasoning Layer

- LLMs (Phi-3 / Mistral / LLaMA) validate whether the detected activity matches the claimed impact

### 🔹 Backend

- FastAPI orchestrates all components
- Runs locally, while the AI models run on Google Colab (GPU)

## 📊 Scoring Formula

## ⚠️ Challenges we ran into

- 🔌 Integrating multiple AI models into one pipeline
- 🖼️ Interpreting ambiguous or unclear images
- 🔁 Preventing duplicate submissions using embeddings (FAISS)
- ⚡ High latency due to multiple model calls
- 🤖 Ensuring LLM outputs are reliable and not hallucinated

## 🏆 Accomplishments that we're proud of

- ✅ Built a working prototype combining multiple AI systems
- ✅ Successfully verified activities using image analysis + reasoning
- ✅ Designed a scalable architecture (modular services)
- ✅ Created a unique solution to a real-world trust problem
- ✅ Integrated cutting-edge models (YOLO, CLIP, LLMs) into one system

## 🧠 What we learned

- Combining multiple AI models improves reliability over single-model systems
- Real-world problems require both vision and reasoning, not just detection
- System design and orchestration are as important as model accuracy
- AI outputs need explainability to build trust
- Building under constraints (time, compute) improves problem-solving skills
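The exact Impact Authenticity Score formula isn't spelled out above, but one plausible sketch is a weighted blend of the pipeline's three signals (CLIP similarity, detection confidence, LLM validation), with duplicates rejected outright. The weights and the duplicate rule here are illustrative assumptions, not the actual formula:

```python
# Hypothetical Impact Authenticity Score.
# Weights (0.4 / 0.3 / 0.3) and the duplicate rule are illustrative
# assumptions, not ImpactInk's actual scoring formula.

def authenticity_score(clip_similarity: float,
                       detection_confidence: float,
                       llm_validation: float,
                       is_duplicate: bool) -> float:
    """Combine verification signals (each in [0, 1]) into a 0-100 score."""
    if is_duplicate:
        return 0.0  # reused images are rejected outright
    weighted = (0.4 * clip_similarity        # image matches the claim text
                + 0.3 * detection_confidence # expected objects/actions found
                + 0.3 * llm_validation)      # LLM agrees the claim is plausible
    return round(100 * weighted, 1)

print(authenticity_score(0.9, 0.8, 0.95, False))  # → 88.5
print(authenticity_score(0.9, 0.8, 0.95, True))   # → 0.0 (duplicate image)
```

Keeping the score a transparent weighted sum (rather than another model output) also serves the explainability goal noted above: each component can be shown to the NGO alongside the final number.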

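The duplicate-submission check mentioned above compares image embeddings; ImpactInk uses FAISS for this at scale. A simplified NumPy stand-in shows the core idea, cosine similarity against previously stored embeddings (the 0.95 threshold is an illustrative assumption):

```python
# Simplified duplicate-image check via cosine similarity of embeddings.
# A stand-in for the FAISS-based approach; the threshold is an assumption.
import numpy as np

def is_duplicate(new_emb: np.ndarray,
                 stored: np.ndarray,
                 threshold: float = 0.95) -> bool:
    """Return True if new_emb is near-identical to any stored embedding."""
    if stored.size == 0:
        return False
    # Normalize rows so dot products become cosine similarities.
    new_n = new_emb / np.linalg.norm(new_emb)
    stored_n = stored / np.linalg.norm(stored, axis=1, keepdims=True)
    return bool(np.max(stored_n @ new_n) >= threshold)

store = np.array([[1.0, 0.0], [0.0, 1.0]])  # embeddings of past submissions
print(is_duplicate(np.array([0.99, 0.05]), store))  # → True (near-identical)
print(is_duplicate(np.array([0.7, 0.7]), store))    # → False (distinct image)
```

In production the same normalized embeddings can be stored in a `faiss.IndexFlatIP` index, which turns the brute-force matrix product above into an efficient nearest-neighbor search.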