Inspiration

The inspiration for this project came from the idea that good conversations are driven by context. In many social situations (parties, networking events, casual meetups), the hardest part isn’t talking; it’s not knowing enough about the person in front of you to start or sustain a meaningful conversation.

I noticed that most of the useful social information already exists (names, shared interests, mutual connections), but it’s fragmented across apps and inaccessible at the moment it’s actually needed. Pulling out a phone breaks eye contact and social flow, while relying on memory often fails under pressure.

What it does

LookBack is a pair of smart glasses connected to an AI-powered app that provides real-time social context during in-person interactions. When the user looks at someone, the system can surface relevant information, such as a name, shared interests, or mutual connections, directly in the glasses’ display.

Instead of leaving people to fumble through awkward small talk, LookBack delivers subtle, in-the-moment cues that help users start conversations, remember people, and engage more confidently in social settings.

PS: Since we don’t have access to smart glasses, we built LookBack as a mobile software prototype to demonstrate the concept.

How we built it

The project was built as a prototype combining computer vision, machine learning, and wearable UI design:

  • Implemented real-time face detection and recognition using a pre-trained vision model.
  • Matched detected faces against a database of known profiles to retrieve contextual information.
  • Designed a lightweight AR-style overlay for the glasses that displays information without obstructing vision.
  • Built a pipeline that connects the camera feed, recognition system, and context retrieval with low latency.
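To make the recognition and matching steps concrete, here is a minimal sketch of the detect-encode-match loop. It assumes the open-source face_recognition (dlib) and OpenCV packages rather than our exact model, and KNOWN_ENCODINGS / KNOWN_NAMES are hypothetical stand-ins for our profile database; the real pipeline also feeds results into the context-retrieval and overlay steps.

    # Minimal sketch of the detect -> encode -> match loop (not our exact implementation).
    # Assumes the open-source face_recognition (dlib) and opencv-python packages.
    import cv2
    import face_recognition

    # Hypothetical stand-ins for our profile database: parallel lists of
    # 128-d face encodings and the names they map to.
    KNOWN_ENCODINGS = []  # e.g. loaded from disk at startup
    KNOWN_NAMES = []

    def identify(frame):
        """Detect faces in a BGR frame and return (box, name) pairs."""
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        boxes = face_recognition.face_locations(rgb)            # (top, right, bottom, left)
        encodings = face_recognition.face_encodings(rgb, boxes)
        results = []
        for box, enc in zip(boxes, encodings):
            name = "Unknown"
            if KNOWN_ENCODINGS:
                distances = face_recognition.face_distance(KNOWN_ENCODINGS, enc)
                best = distances.argmin()
                if distances[best] < 0.6:                       # typical dlib tolerance
                    name = KNOWN_NAMES[best]
            results.append((box, name))
        return results

    if __name__ == "__main__":
        cam = cv2.VideoCapture(0)
        while True:
            ok, frame = cam.read()
            if not ok:
                break
            for (top, right, bottom, left), name in identify(frame):
                cv2.rectangle(frame, (left, top), (right, bottom), (0, 255, 0), 2)
                cv2.putText(frame, name, (left, top - 8),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
            cv2.imshow("LookBack prototype", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
        cam.release()
        cv2.destroyAllWindows()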

Challenges we ran into

  • Navigating LinkedIn’s strong anti-scraping protections: To overcome this, we implemented a Selenium-based solution that simulates real user behavior, including authenticated logins, allowing us to reliably scrape dynamically loaded profile data (sketched after this list).
  • Identifying people in real time: Our initial plan involved using reverse image search APIs, but these solutions were either inaccurate, restricted behind paywalls, or limited in scale. As a result, we pivoted to building our own database of people, currently focused on McGill University and parts of Montreal, mapping names to facial data for identification.
  • Face-tracking system: Consistently locking onto faces and accurately tracking multiple people simultaneously required careful tuning of detection thresholds, tracking logic, and performance optimizations (also sketched after this list).
  • Real-time performance: Running face recognition quickly enough to feel seamless on wearable hardware was challenging.
  • Lighting and angle variability: Faces in low light or at awkward angles reduced recognition accuracy.
  • Information overload: Deciding how much context to show without distracting or overwhelming the user required careful design choices.
  • Ethical considerations: We had to think critically about consent, privacy, and how to avoid misuse of personal information.
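For the first challenge, the authenticated-login flow looked roughly like the sketch below. It assumes Selenium 4 with Chrome; the field ids ("username", "password") match LinkedIn’s login page at the time of writing but may change, and the credentials and profile URL are placeholders.

    # Sketch of the authenticated Selenium flow. Assumptions: Selenium 4 + Chrome,
    # LinkedIn login field ids "username"/"password" (these may change over time).
    import os
    import time
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Chrome()
    driver.get("https://www.linkedin.com/login")

    # Log in with credentials supplied via environment variables (placeholders).
    driver.find_element(By.ID, "username").send_keys(os.environ["LI_EMAIL"])
    driver.find_element(By.ID, "password").send_keys(os.environ["LI_PASSWORD"])
    driver.find_element(By.XPATH, "//button[@type='submit']").click()

    # Wait for the feed to load, then open a profile and let dynamic content render.
    WebDriverWait(driver, 15).until(EC.url_contains("/feed"))
    driver.get("https://www.linkedin.com/in/some-profile/")  # placeholder URL
    time.sleep(3)  # crude wait so lazily loaded sections appear
    page_html = driver.page_source  # handed off to the summarization step
    driver.quit()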
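For the face-tracking challenge, the core idea is to keep identity assignments stable by matching each new detection to the nearest existing track. The centroid tracker below is a simplified illustration of that logic, not our tuned implementation; the distance and missed-frame thresholds are illustrative.

    # Simplified centroid tracker: keeps IDs stable by matching each detection
    # to the nearest existing track within a distance threshold (values illustrative).
    import math
    from itertools import count

    class CentroidTracker:
        def __init__(self, max_distance=75, max_missed=10):
            self.max_distance = max_distance   # pixels; tune per camera resolution
            self.max_missed = max_missed       # frames a track may go undetected
            self.tracks = {}                   # track_id -> (centroid, missed_count)
            self._ids = count()

        def update(self, boxes):
            """boxes: list of (top, right, bottom, left). Returns {track_id: box}."""
            centroids = [((l + r) / 2, (t + b) / 2) for (t, r, b, l) in boxes]
            assigned, used = {}, set()
            for tid, (prev, missed) in list(self.tracks.items()):
                # Greedily pick the closest unused detection for this track.
                best, best_d = None, self.max_distance
                for i, c in enumerate(centroids):
                    if i not in used and math.dist(prev, c) < best_d:
                        best, best_d = i, math.dist(prev, c)
                if best is None:
                    # No detection this frame; drop the track if it has been gone too long.
                    self.tracks[tid] = (prev, missed + 1)
                    if missed + 1 > self.max_missed:
                        del self.tracks[tid]
                else:
                    used.add(best)
                    self.tracks[tid] = (centroids[best], 0)
                    assigned[tid] = boxes[best]
            # Any unmatched detections start new tracks.
            for i, c in enumerate(centroids):
                if i not in used:
                    tid = next(self._ids)
                    self.tracks[tid] = (c, 0)
                    assigned[tid] = boxes[i]
            return assigned

Keeping stable track IDs this way also means the heavier recognition step does not have to run on every face in every frame.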

Accomplishments that we're proud of

Despite the challenges, we are especially proud of building a fully end-to-end system that works in real time, from face detection and tracking to identity matching and profile summarization. We successfully implemented a multi-face tracking pipeline that can detect and follow several people simultaneously while maintaining stable identity assignments. On the backend, we integrated authenticated LinkedIn scraping and automated profile summarization, allowing relevant information to be surfaced within seconds of identification. We are also proud of designing and scaling our own custom face–name database focused on the McGill and Montreal community, which gave us full control over accuracy, performance, and extensibility. Overall, our biggest achievement was delivering a technically complex, real-time system that integrates computer vision, web automation, and AI into a cohesive and functional prototype within a hackathon timeframe.

What we learned

This project taught us how challenging it is to build reliable real-time systems that work outside of controlled environments. We learned firsthand how difficult multi-face tracking and identity consistency can be, especially when dealing with motion, occlusions, and multiple people at once. We also learned the importance of staying flexible in our approach: when reverse image search APIs didn’t work as expected, we quickly pivoted and built our own identification database. On the backend, we gained practical experience with browser automation, authentication, and handling real-world constraints when working with dynamic platforms like LinkedIn.

What's next for LookBack

Next steps include:

  • Improving recognition accuracy across diverse environments
  • Adding stronger privacy controls and consent mechanisms
  • Expanding contextual insights beyond recognition (events, shared experiences, conversation cues)
  • Exploring hardware integration with next-generation AR glasses

Our long-term vision is to make LookBack a social augmentation tool that helps people connect more meaningfully, without replacing genuine human interaction.
