Beacon - AI-Powered Campus Lost & Found πŸ”

Inspiration πŸ’‘

We've all been thereβ€”losing your keys, wallet, or AirPods somewhere on campus and hoping someone turned it in. But traditional lost-and-found systems have a fatal flaw: they rely on perfect descriptions. Your "navy blue backpack" might be someone else's "dark blue bag" or "black backpack." We realized that showing is better than telling, and with modern AI vision models, we could match items visually instead of relying on subjective descriptions.

What It Does ⚑

Beacon is a native iOS app that revolutionizes how students find lost items on campus:

  • Visual AI Search: Upload a photo of what you lost, and OpenAI's CLIP model finds visually similar itemsβ€”no perfect description needed
  • Smart Location Matching: Uses GPS coordinates and EXIF metadata to prioritize matches near where you lost your item
  • Intelligent Ranking Algorithm: Combines visual similarity with location proximity for accurate results
  • Dual-Mode System:
    • Finder Mode: Report found items with photos and location
    • Searcher Mode: Search for lost items using natural language or quick filters

How We Built It πŸ› οΈ

Frontend (iOS):

  • Swift/SwiftUI for a native, fluid user experience
  • CoreLocation for GPS tracking and location services
  • ImageIO for extracting EXIF metadata from photos
  • Custom glassmorphism UI with gradient animations

Backend (Flask/Python):

  • OpenAI CLIP (ViT-B/32) for generating image embeddings and text-to-image matching
  • Haversine Formula for calculating distances between GPS coordinates:

$$d = 2R \arcsin\left(\sqrt{\sin^2\left(\frac{\Delta\phi}{2}\right) + \cos(\phi_1)\cos(\phi_2)\sin^2\left(\frac{\Delta\lambda}{2}\right)}\right)$$

where $R$ is Earth's radius, $\phi$ is latitude, and $\lambda$ is longitude
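The formula above translates directly into a few lines of Python; here is a minimal stand-alone version (the function name and the kilometre units are our choices for illustration):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)      # Δφ
    dlmb = math.radians(lon2 - lon1)      # Δλ
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))
```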

  • Custom Matching Algorithm:
    1. Generate CLIP embeddings for all stored images
    2. Compute similarity scores using cosine similarity
    3. If multiple candidates score within 10% of the best match, use location distance as tiebreaker
    4. Return best match with confidence score
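The four steps above can be sketched in a few lines of NumPy. The names are illustrative rather than our production code, and a plain squared-difference in degrees stands in for the Haversine distance to keep the sketch self-contained:

```python
import numpy as np

def best_match(query_emb, item_embs, item_coords, query_coord, tie_margin=0.10):
    # Steps 1-2: cosine similarity between the query and every stored embedding.
    q = np.asarray(query_emb, dtype=float)
    q = q / np.linalg.norm(q)
    m = np.asarray(item_embs, dtype=float)
    m = m / np.linalg.norm(m, axis=1, keepdims=True)
    sims = m @ q
    # Step 3: everything within tie_margin of the best score competes on distance.
    best = sims.max()
    candidates = np.flatnonzero(sims >= best * (1.0 - tie_margin))
    # Stand-in for the Haversine distance: squared difference in degrees.
    dists = [(item_coords[i][0] - query_coord[0]) ** 2 +
             (item_coords[i][1] - query_coord[1]) ** 2 for i in candidates]
    winner = int(candidates[int(np.argmin(dists))])
    # Step 4: best match plus its similarity as a confidence score.
    return winner, float(sims[winner])
```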

Infrastructure:

  • Flask REST API with CORS support
  • Multipart form-data for image uploads
  • Metadata embedded in filenames: originalname__lat_lon__pickupLocation.ext
  • ngrok for secure tunneling during development
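The filename-based metadata scheme can be packed and unpacked with plain string operations; a sketch (the exact separators and helper names here are our illustration of the convention, not the production code):

```python
import os

def encode_filename(original_name, lat, lon, pickup_location):
    """Pack metadata into a stored filename, double underscore as field separator."""
    stem, ext = os.path.splitext(original_name)
    return f"{stem}__{lat}_{lon}__{pickup_location}{ext}"

def decode_filename(stored_name):
    """Recover (original name, lat, lon, pickup location) from a stored filename."""
    stem, ext = os.path.splitext(stored_name)
    original, latlon, pickup = stem.split("__")
    lat_s, lon_s = latlon.split("_")
    return original + ext, float(lat_s), float(lon_s), pickup
```

Keeping metadata in the filename means the backend needs no database at all: the upload directory is the datastore.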

Challenges We Faced 😀

  1. EXIF Metadata Extraction: Photos taken with the in-app camera don't carry embedded GPS data the way photos picked from the photo library do. We implemented a dual fallback: try the EXIF metadata first, then fall back to the device's live GPS fix.

  2. Platform Migration: Midway through, we migrated from React Native to native Swift for better performance and iOS integration. This required rewriting the entire frontend while maintaining API compatibility.

  3. Location-Based Tiebreaking: Initially, we only used CLIP confidence scores, but this led to ambiguity when multiple similar items existed. We developed a tiebreaker that falls back to GPS distance whenever confidence scores land within 10% of the best match.

  4. Cross-Platform Path Handling: Hardcoded Windows paths broke on macOS. We solved this with dynamic path resolution using os.path.dirname(os.path.abspath(__file__)).

  5. Image Format Inconsistency: The backend accepted uploads in any format but only searched .jpg files. We expanded the search to .jpg, .jpeg, and .png so every upload is actually searchable.

Accomplishments We're Proud Of πŸ†

  • βœ… Successfully integrated OpenAI CLIP for semantic visual search
  • βœ… Built a beautiful, native iOS app with smooth animations and intuitive UX
  • βœ… Implemented intelligent location-aware matching that combines AI with geography
  • βœ… Created a robust filename-based metadata system that survives database-free deployments
  • βœ… Achieved <2 second search times even with hundreds of images
  • βœ… Extracted EXIF GPS data from photos for automatic location tagging

What We Learned πŸ“š

Technical Skills:

  • How transformer-based vision models like CLIP work and how to optimize them for real-time use
  • SwiftUI's declarative paradigm and state management patterns
  • CoreLocation API and handling iOS permission workflows
  • Image metadata standards (EXIF, IPTC) and extraction techniques
  • Implementing distance calculations with the Haversine formula

Design Insights:

  • The importance of fallback mechanisms (EXIF β†’ GPS β†’ manual entry)
  • How confidence thresholds affect user trust in AI systems
  • The value of visual feedback during async operations (uploading, searching)

Product Development:

  • When to use AI vs. traditional algorithms (AI for matching, math for distance)
  • The critical importance of location context in physical-world applications
  • Native vs. cross-platform trade-offs in mobile development

What's Next for Beacon πŸš€

  • Real-Time Notifications: Push alerts when a potential match is found
  • Interactive Campus Map: Visual display of where items were found
  • User Authentication: Account system for tracking your reports
  • OCR for ID Cards: Automatically extract names from student IDs
  • Machine Learning Pipeline: Retrain CLIP on campus-specific items for better accuracy
  • Multi-Campus Support: Scale to universities nationwide
  • Blockchain Integration: Immutable proof-of-finding for high-value items
  • Social Features: Thank and rate finders, build reputation scores
