Inspiration

How many times a day do you pull out your phone to identify a landmark, check the air quality, or find your next turn? We wanted to eliminate that friction. OmniScope was born from the idea that contextual, AI-powered information should live in your line of sight, not behind a screen. We envisioned a world where exploring your surroundings feels effortless, immersive, and intelligent.

What it does

OmniScope is a wearable AI assistant that brings the digital and physical worlds together. Built with Xreal AR glasses and a Raspberry Pi, it processes real-time camera feeds, GPS data, and environmental sensor inputs to deliver instant, contextual insights. Whether you're identifying a monument, checking air quality, or navigating city streets, OmniScope overlays this information seamlessly through AR, keeping your hands free and your eyes on the world.

How we built it

We built OmniScope as a fully integrated hardware-software system. Our setup combines:

Xreal AR glasses for immersive, hands-free visual display

Raspberry Pi + camera module for on-device processing

Claude Sonnet 4 (Vision) for real-time image understanding and contextual analysis

iOS companion app for control, data syncing, and Google Maps integration

The workflow is simple: point your camera, tap “Capture” on the app, and OmniScope analyzes what it sees, cross-referencing GPS data to provide detailed historical or environmental context. We also implemented turn-by-turn navigation over AR, synced wirelessly from the iOS app, and a custom interface for air quality and weather visualization.
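To make the capture-and-analyze step concrete, here is a minimal sketch of the Raspberry Pi side: grab a still from the camera module and send it to Claude along with the current GPS fix. It assumes the picamera2 library and the official anthropic Python SDK; the model ID, prompt wording, and coordinates are illustrative placeholders rather than the exact values from our build.

```python
import base64

import anthropic
from picamera2 import Picamera2  # assumed camera library; any JPEG source works

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def capture_frame(path="frame.jpg"):
    """Grab a single still from the Pi camera module and save it as a JPEG."""
    picam2 = Picamera2()
    picam2.configure(picam2.create_still_configuration())
    picam2.start()
    picam2.capture_file(path)
    picam2.stop()
    return path


def describe_scene(image_path, lat, lon):
    """Send the captured frame plus the GPS fix to Claude and return its answer."""
    with open(image_path, "rb") as f:
        image_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder Sonnet 4 model ID
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64",
                            "media_type": "image/jpeg",
                            "data": image_b64}},
                {"type": "text",
                 "text": (f"I'm standing at latitude {lat}, longitude {lon}. "
                          "Identify any landmark in this photo and give a short, "
                          "overlay-friendly historical note.")},
            ],
        }],
    )
    return message.content[0].text


if __name__ == "__main__":
    print(describe_scene(capture_frame(), 40.6892, -74.0445))
```

In the full system, the “Capture” tap in the iOS app triggers this routine over the local network, and the returned text is rendered as an overlay on the Xreal display.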

Challenges we ran into

Our main challenges were:

Keeping turn-by-turn navigation updates accurate and in sync on both the iOS app and the AR display.

Fetching weather and air-quality (AQI) data reliably and verifying the readings were accurate (a minimal fetch sketch follows this list).

Working around client isolation on the iOS Personal Hotspot, which blocked direct communication between our devices.
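The writeup doesn't pin down which data provider we ended up using for weather and AQI, so here is a minimal sketch of the kind of lookup involved, using Open-Meteo's free air-quality endpoint (no API key) as a stand-in; the choice of provider, the requested fields, and the hour-matching logic are assumptions for illustration.

```python
from datetime import datetime, timezone

import requests


def fetch_air_quality(lat, lon):
    """Fetch the current US AQI and PM2.5 readings for the given coordinates
    from Open-Meteo's air-quality API (used here as a stand-in provider)."""
    resp = requests.get(
        "https://air-quality-api.open-meteo.com/v1/air-quality",
        params={"latitude": lat, "longitude": lon,
                "hourly": "us_aqi,pm2_5", "timezone": "UTC"},
        timeout=10,
    )
    resp.raise_for_status()
    hourly = resp.json()["hourly"]

    # The hourly arrays cover today plus a few forecast days, so pick the
    # entry matching the current UTC hour rather than the last element.
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:00")
    i = hourly["time"].index(now)
    return {"time": hourly["time"][i],
            "us_aqi": hourly["us_aqi"][i],
            "pm2_5": hourly["pm2_5"][i]}


if __name__ == "__main__":
    print(fetch_air_quality(40.6892, -74.0445))
```

Cross-checking a reading like this against a second source before rendering it in the overlay is one way to address the accuracy concern above.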

Accomplishments that we're proud of

We achieved full hardware integration, a seamless AR user interface, and end-to-end AI scene understanding. Seeing the system identify landmarks and overlay relevant context in real time was a true “wow” moment for the team.

What we learned

We learned how critical context-aware design is for wearable tech: balancing information richness with unobtrusive presentation. We also deepened our understanding of multimodal AI (vision + text), embedded-system optimization, and the importance of efficient cross-device communication. Above all, we learned that the future of computing is not in our pockets; it's in our perception.

What's next for OmniScope

Next, we plan to refine OmniScope into a consumer-ready product by:

Miniaturizing the hardware into a sleeker, all-in-one module

Expanding context recognition with offline AI models for privacy

Enhancing AR UX with dynamic overlays and gesture/voice controls

Adding more functionality, such as real-time public transit navigation and activity tracking

Built With

Xreal AR glasses, Raspberry Pi, Claude Sonnet 4 (Anthropic API), iOS, Google Maps