ItemRadarAI - AI-Powered Lost and Found Platform
In large urban environments, countless personal belongings are lost daily across diverse public spaces such as buses, metro stations, and parks. While many people are willing and eager to return these items to their rightful owners, the challenge lies in efficiently matching lost objects with those who find them. Fragmented reporting systems, vague descriptions, and inconsistent location details often lead to lost items remaining unclaimed, frustrating both finders and owners. ItemRadarAI addresses this critical gap by leveraging advanced AI agents and multi-modal data to streamline lost-and-found processes, enabling swift, accurate reunifications at city scale.
1. Features and Functionality
ItemRadarAI transforms the traditional lost-and-found experience by leveraging a coordinated team of AI agents to match lost items with found reports in just minutes. Its core functionalities include:
Conversational Chatbot Interface
Available via the web, our AI-powered chatbot guides users through the lost-item reporting process. Users submit either a description or a photo along with a location, and the system handles the rest.
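A minimal sketch of the report intake described above, assuming a hypothetical `LostItemReport` shape (the names and fields here are illustrative, not the project's actual data model): a report needs a location plus at least one of a description or a photo.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LostItemReport:
    """A minimal lost-item report: a description or a photo, plus a location."""
    location: str
    description: Optional[str] = None
    photo_path: Optional[str] = None

def validate_report(report: LostItemReport) -> bool:
    """A report is usable only if it carries a location and at least one of
    a description or a photo; the chatbot prompts for whatever is missing."""
    has_content = bool(report.description) or bool(report.photo_path)
    return bool(report.location) and has_content
```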
Smart Matching with Vector Search
Item descriptions and image-based insights are converted into vector embeddings and matched in real-time using Vertex AI’s Vector Search. When multiple candidates are found, the system engages the user with targeted, discriminative questions (e.g., "Does it have a zipper?") to narrow down the results.
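The matching step above can be sketched with plain cosine similarity over embedding vectors. This is a simplified stand-in: in the real system the vectors come from `text-embedding-004` and the nearest-neighbor lookup is served by Vertex AI Vector Search, and the threshold value here is an assumption for illustration.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def rank_candidates(query: list[float],
                    found_items: dict[str, list[float]],
                    threshold: float = 0.8) -> list[tuple[str, float]]:
    """Return found items whose embeddings exceed the similarity threshold,
    best match first. Multiple survivors trigger the follow-up
    discriminative questions ("Does it have a zipper?")."""
    scored = [(item_id, cosine_similarity(query, emb))
              for item_id, emb in found_items.items()]
    return sorted([s for s in scored if s[1] >= threshold],
                  key=lambda s: s[1], reverse=True)
```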
Automated Found Item Registration
Finders can simply upload a photo and approximate location. The system uses Gemini Vision to extract descriptive attributes like brand, material, and color. A hybrid geocoding pipeline (combining Google Maps API, OpenStreetMap, and Gemini) intelligently interprets vague location inputs like "outside Starbucks near downtown", converting them into accurate coordinates.
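The hybrid geocoding pipeline is essentially a fallback chain: try the primary provider, fall through to the next on a miss or an error. A minimal sketch, assuming each provider is wrapped as a callable returning coordinates or `None` (the wrapper interface is an assumption; the actual providers are the Google Maps API, OpenStreetMap, and Gemini):

```python
from typing import Callable, Optional

Coordinates = tuple[float, float]
Geocoder = Callable[[str], Optional[Coordinates]]

def geocode_with_fallback(location_text: str,
                          geocoders: list[Geocoder]) -> Optional[Coordinates]:
    """Try each geocoder in order (e.g. Google Maps, then OpenStreetMap,
    then a Gemini-based interpreter for vague text like 'outside Starbucks
    near downtown') and return the first result that resolves."""
    for geocoder in geocoders:
        try:
            coords = geocoder(location_text)
        except Exception:
            continue  # a failing provider should not break the pipeline
        if coords is not None:
            return coords
    return None
```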
2. Technologies Used
- AI/ML Stack:
- Google Gemini: NLP for conversational UI and image analysis (Gemini Vision).
- Vertex AI: Vector embeddings (text-embedding-004) and similarity search.
- LiteLLM: Unified LLM management.
- Cloud Infrastructure:
- Google Cloud: Firestore (NoSQL database), Cloud Functions (serverless), Cloud Storage (media).
- APIs: Maps Geocoding, Places (location validation), Vision (object detection).
- Backend:
- Python 3.8+ with Google ADK for multi-agent orchestration.
- FastAPI (REST endpoints) + Uvicorn (ASGI server).
- Frontend:
- Next.js 15.3.3 + React 18.3.1 (TypeScript) with Tailwind CSS.
- Radix UI (accessibility) + React Dropzone (file uploads).
3. Data Sources
- Primary Inputs:
- User-submitted photos, item descriptions, and GPS coordinates.
- Timestamps (loss/found events) and chat logs (query refinement).
- External APIs:
- Google Maps/Places: Precise geocoding and address validation.
- OpenStreetMap (Nominatim): Fallback for GPS-to-address conversion.
- Google Cloud Storage: Image hosting and metadata management.
4. Findings and Learnings
Participating in the Agent Development Kit Hackathon with Google Cloud has been an eye-opening experience. One of our biggest takeaways is the incredible potential of AI agents to solve real-world problems through collaboration, modularity, and intelligence. The Agent Development Kit (ADK) proved to be a powerful and well-structured platform for building these systems, enabling us to rapidly prototype, test, and refine multi-agent workflows.
We were especially impressed by Gemini's performance in this context: from generating accurate descriptions of images to engaging in meaningful conversations, it consistently delivered fast, reliable, and context-aware outputs. At the same time, we discovered that inter-agent communication remains a key challenge: ensuring clear state management and role boundaries between agents requires thoughtful design and testing.
Another key insight was the importance of strong prompt engineering. Even with great tools and models, crafting precise, structured prompts is essential to guide agents effectively, especially when chaining multiple tools or coordinating different tasks.
Overall, this hackathon has shown us not just the capabilities of Google's AI ecosystem, but also how promising this agent-based paradigm is for the future of intelligent applications.
Team: Ignacio Elvira Cruz, Iñigo Valenzuela, Rubén Llorente, Alonso García
Event: Google Cloud Agent Development Kit Hackathon
Built With
- agent-development-kit
- bash
- bigquery
- cloud-functions
- cloud-run
- dialogflow-cx
- docker
- fastapi
- firebase
- firebase-cloud-messaging
- firebasestudio
- firestore
- gcloud-cli
- gemini-text-&-multimodal-embeddings
- github-actions
- google-cloud
- iam-(least-privilege-sas)
- javascript
- looker-studio
- pub/sub
- python-3.11
- react
- secret
- sendgrid-api
- vertex-ai-vector-search