Inspiration

We saw an opportunity to combine AI-powered image analysis with a community-driven platform to help users easily sell and acquire items.

What It Does

Our app lets users capture or upload photos and instantly receive AI-generated insights using the Gemini API. Users can create posts, explore content from others, and build a personalized profile, all within a clean, mobile-first experience built on React Native and Expo.

How We Built It

Auth: Auth0 for user authentication

Frontend: React Native with Expo (TypeScript), file-based routing via Expo Router with tabs for Create, Explore, and Profile

AI: Google Gemini API (gemini-2.5) for real-time image analysis, sending base64-encoded images directly to the generateContent endpoint

Backend: Firebase Firestore for real-time data syncing and user-generated content storage

Image Handling: expo-image-picker for camera/gallery access with base64 encoding via expo-file-system
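The base64 flow above can be sketched as a small request builder: the picked image's base64 string becomes an inline_data part alongside the text prompt, following the Gemini REST API's field names. The helper name and prompt below are illustrative, not our exact app code.

```typescript
// Illustrative helper that builds the JSON body POSTed to Gemini's
// generateContent REST endpoint. The image arrives as a base64 string
// (read via expo-file-system) and is embedded as an inline_data part.

type GeminiPart =
  | { text: string }
  | { inline_data: { mime_type: string; data: string } };

interface GeminiRequestBody {
  contents: { parts: GeminiPart[] }[];
}

function buildGeminiRequest(
  base64Image: string,
  mimeType: string,
  prompt: string
): GeminiRequestBody {
  return {
    contents: [
      {
        parts: [
          { text: prompt },
          { inline_data: { mime_type: mimeType, data: base64Image } },
        ],
      },
    ],
  };
}

// The body is POSTed to (model name filled in from config):
// https://generativelanguage.googleapis.com/v1beta/models/<model>:generateContent?key=<API_KEY>
```

Sending the image inline this way avoids a separate upload step, at the cost of larger request bodies for high-resolution photos.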

Challenges We Faced

One of our biggest hurdles was the Gemini API quota system: we burned through free-tier limits across multiple models (gemini-2.0-flash, gemini-2.0-flash-lite) during development and had to iterate quickly to find a stable configuration. We also debugged environment variable loading in Expo, Firestore composite index requirements, and TypeScript compatibility issues with newer expo-image-picker versions.
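On the environment variable issue: the catch (easy to miss) is that Expo only inlines variables prefixed with EXPO_PUBLIC_ into the client bundle. A minimal sketch, assuming a .env file at the project root and an illustrative variable name:

```
# .env (project root) — only EXPO_PUBLIC_-prefixed variables are
# inlined into client code; unprefixed ones stay undefined in the app
EXPO_PUBLIC_GEMINI_API_KEY=replace-with-your-key
```

In app code this is then read as process.env.EXPO_PUBLIC_GEMINI_API_KEY. Note that anything inlined this way ships inside the app bundle, which is why moving the key server-side is on our roadmap.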

What We Learned

How to integrate a multimodal AI API with real image data in a mobile app

Managing API keys securely in a React Native/Expo environment

Firestore data modeling and composite index requirements for complex queries

Rapid iteration under time pressure, including debugging quickly and pivoting to new ideas
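On the composite-index point: a feed query that filters on one field and orders by another needs an explicit composite index. In firestore.indexes.json it looks roughly like this (the collection and field names here are illustrative, not our exact schema):

```json
{
  "indexes": [
    {
      "collectionGroup": "posts",
      "queryScope": "COLLECTION",
      "fields": [
        { "fieldPath": "authorId", "order": "ASCENDING" },
        { "fieldPath": "createdAt", "order": "DESCENDING" }
      ]
    }
  ]
}
```

Firestore helpfully includes a link to auto-create the missing index in the error message it throws, which is how we discovered most of ours.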

What's Next

Expand AI analysis to support multiple images

Improve the Explore feed with filters and search

Move API key to a secure backend proxy
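That last point could look something like the minimal Node sketch below: the app POSTs the generateContent body to our server, and only the server knows the API key. The endpoint path, port, and environment variable name are assumptions, not an existing implementation.

```typescript
import * as http from "http";

// Sketch of a backend proxy: the mobile client POSTs the Gemini request
// body to /analyze, and the server attaches the API key from its own
// environment, so the key never ships inside the app bundle.
// <model> is a placeholder for the configured Gemini model name.
const GEMINI_URL =
  "https://generativelanguage.googleapis.com/v1beta/models/<model>:generateContent";

// Pure helper: append the server-held key as a query parameter.
function withApiKey(url: string, key: string): string {
  return `${url}?key=${encodeURIComponent(key)}`;
}

const server = http.createServer((req, res) => {
  if (req.method !== "POST" || req.url !== "/analyze") {
    res.writeHead(404).end();
    return;
  }
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", async () => {
    // Forward the client's generateContent payload unchanged,
    // adding the key server-side.
    const upstream = await fetch(
      withApiKey(GEMINI_URL, process.env.GEMINI_API_KEY ?? ""),
      {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body,
      }
    );
    res.writeHead(upstream.status, { "Content-Type": "application/json" });
    res.end(await upstream.text());
  });
});

// To run: server.listen(3000);
```

This also gives us one place to add rate limiting, which would have softened the quota problems we hit during development.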

Built With

auth0, expo, firebase, gemini, react-native, typescript
