🎯 Purpose
Bright Sight enables simple, intuitive indoor navigation on smart glasses by using floor plan images and natural language interaction. Guidance is audio‑first for hands‑free simplicity, with visual cues as secondary support.
🐝 What it does
Input: the smart‑glasses camera captures an image of the floor plan.
User interaction: the user asks a question about the floor (e.g., “Where is the restroom?”) or requests navigation (e.g., “Take me to Room 214”).
Output: Bright Sight computes the path and provides step‑by‑step audio guidance, supported by visual overlays when useful.
🛠️ How it works (focused on described scope)
User takes a photo of the floor plan with the smart‑glasses camera.
Bright Sight interprets the floor plan and determines a route to the user's requested destination.
The system guides the user primarily with spoken instructions (e.g., “walk straight, then turn left”), while optional visual arrows or labels appear in the glasses display.
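The flow above can be sketched in miniature. This is a hypothetical illustration, not the project's actual code: it assumes the floor plan has already been parsed (e.g., by a vision model) into a simple adjacency graph, then finds the shortest route with breadth-first search and turns it into spoken-style steps. All names here (`FLOOR_GRAPH`, `find_path`, `to_audio_steps`) are invented for the sketch.

```python
from collections import deque

# Hypothetical: a floor plan already parsed into a graph of connected areas.
FLOOR_GRAPH = {
    "entrance": ["hallway"],
    "hallway": ["entrance", "restroom", "room_214"],
    "restroom": ["hallway"],
    "room_214": ["hallway"],
}

def find_path(graph, start, goal):
    """Breadth-first search for the shortest area-to-area path."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbor in graph[path[-1]]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no route found

def to_audio_steps(path):
    """Turn a path into simple spoken-style instructions."""
    return [f"Head to {node.replace('_', ' ')}" for node in path[1:]]

path = find_path(FLOOR_GRAPH, "entrance", "room_214")
print(to_audio_steps(path))  # ['Head to hallway', 'Head to room 214']
```

In a real system the graph would come from the AI's reading of the photographed floor plan, and each step would be spoken aloud through the glasses rather than printed.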
🚧 Known constraints
Camera capture quality matters: blurry or incomplete photos reduce accuracy.
Floor plan detail matters: missing or outdated information limits guidance quality.
📌 Current status
The project is in progress. Scope is strictly limited to camera floor‑plan input, user queries, AI reasoning, and audio‑first guidance with optional visuals.
🚀 Next step
Future features include on‑device deployment using smaller language models optimized for the task.