## Inspiration

In today's world, social impact and volunteer work are often hard to verify. NGOs struggle with fake or exaggerated claims, and there is no reliable way to prove that an activity actually happened. We were inspired by a simple idea: "If financial transactions can be verified, why not social impact?" This led us to build ImpactInk, a system that brings trust, transparency, and proof to real-world impact using AI.

## What it does

ImpactInk is an AI-powered platform that verifies whether a claimed volunteer activity is genuine. It:

- Analyzes uploaded images of volunteer work
- Detects objects and actions (e.g., cleaning, teaching, distributing food)
- Generates contextual descriptions of the activity
- Detects duplicate or reused images
- Uses AI reasoning to validate the claim
- Produces an Impact Authenticity Score

## How we built it

We built ImpactInk using a multi-model AI architecture.

**Vision layer (computer vision)**

- YOLOv8 detects objects and activities
- BLIP generates image captions
- CLIP measures similarity between the image and the claim

**Reasoning layer**

- An LLM (Phi-3 / Mistral / LLaMA) validates whether the detected activity matches the claimed impact

**Backend**

- FastAPI orchestrates all components
- The API runs locally while the AI models run on Google Colab (GPU)

## Scoring Formula

## Challenges we ran into

- Integrating multiple AI models into one pipeline
- Interpreting ambiguous or unclear images
- Preventing duplicate submissions using embeddings (FAISS)
- High latency due to multiple model calls
- Ensuring LLM outputs are reliable and not hallucinated

## Accomplishments that we're proud of

- Built a working prototype combining multiple AI systems
- Successfully verified activities using image analysis plus reasoning
- Designed a scalable architecture (modular services)
- Created a unique solution to a real-world trust problem
- Integrated cutting-edge models (YOLO, CLIP, LLMs) into one system

## What we learned

- Combining multiple AI models improves reliability over single-model systems
- Real-world problems require both vision and reasoning, not just detection
- System design and orchestration are as important as model accuracy
- AI outputs need explainability to build trust
- Building under constraints (time, compute) improves problem-solving skills
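The orchestration flow described above — vision layer first, then LLM reasoning — can be sketched as a single pipeline function. The stage names and return shapes below are hypothetical stand-ins for the real Colab-hosted model endpoints, not the project's actual API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class VerificationResult:
    objects: list      # detected objects/activities (YOLOv8 stage)
    caption: str       # generated scene description (BLIP stage)
    similarity: float  # image-claim alignment (CLIP stage)
    verdict: str       # LLM reasoning outcome

def verify_claim(image_path: str, claim: str,
                 detect: Callable, caption: Callable,
                 similarity: Callable, reason: Callable) -> VerificationResult:
    """Run the vision layer, then hand its outputs to the reasoning layer.

    Each stage is injected as a callable, so the local FastAPI process
    can swap in HTTP calls to the GPU-hosted models without changing
    the flow.
    """
    objs = detect(image_path)
    cap = caption(image_path)
    sim = similarity(image_path, claim)
    verdict = reason(objs, cap, sim, claim)
    return VerificationResult(objs, cap, sim, verdict)
```

Injecting the stages also makes the pipeline testable with stubs, which helps when the GPU backend is offline.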
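The Scoring Formula section above does not reproduce the formula itself, so here is one illustrative way the pipeline's signals could be blended into an Impact Authenticity Score. The weights, signal names, and duplicate rule are assumptions for illustration, not the team's actual formula:

```python
def impact_authenticity_score(clip_similarity: float,
                              detection_confidence: float,
                              llm_verdict: float,
                              is_duplicate: bool) -> float:
    """Blend vision and reasoning signals into a 0-100 score.

    All inputs are assumed normalized to [0, 1]. A duplicate image
    zeroes the score outright, since reuse is disqualifying.
    Weights are hypothetical: similarity and LLM agreement dominate.
    """
    if is_duplicate:
        return 0.0
    score = (0.4 * clip_similarity
             + 0.2 * detection_confidence
             + 0.4 * llm_verdict)
    return round(100 * score, 1)
```

For example, a strong CLIP match with a confident LLM verdict but middling detections still scores well, while any duplicate is rejected outright.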
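The duplicate-submission check mentioned under Challenges can be sketched without FAISS: embed each image, then compare a new embedding against stored ones by cosine similarity and flag near-matches. The 0.95 threshold is an assumed value, and FAISS would replace this linear scan with an approximate-nearest-neighbor index at scale:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_duplicate(new_emb, stored_embs, threshold=0.95):
    """Flag a submission whose embedding nearly matches a stored one.

    A linear scan for clarity; FAISS provides the same nearest-neighbor
    lookup in sublinear time for large collections.
    """
    return any(cosine(new_emb, e) >= threshold for e in stored_embs)
```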