About the Project — LumoSpace

Inspiration

Our team started with one question:
“If light can change how we feel, focus, and perform, why don’t we measure it like we measure steps or heart rate?”

Inspired by neuroscientist Andrew Huberman’s research on light and circadian health, we wanted to bring those insights into everyday spaces. Good light improves mood, focus, and sleep quality, but most people have no way to quantify their lighting conditions.

That led to LumoSpace, an AI-powered tool that turns any phone into a personal light analyzer. It helps users visualize, quantify, and improve lighting conditions in homes, offices, and real-estate spaces in seconds.


What We Learned

We learned that lighting quality is not just an aesthetic factor but a measurable health variable. During the hackathon, we explored how light intensity (lux), color temperature (K), and flicker frequency affect human performance and wellbeing.
We modeled these effects with metrics such as:

$$ \text{Productivity Gain} \approx 15\% \quad \text{and} \quad \text{Error Reduction} \approx 23\% $$

We also confirmed that optimizing daylight use can reduce energy consumption by up to 30%. Integrating neuroscience, data science, and design thinking helped us understand how light truly shapes human experience.
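To make these variables concrete, here is a minimal scoring sketch. The `lighting_score` heuristic and its weightings are hypothetical illustrations, not our production model; it simply penalizes readings that fall outside commonly cited working-light ranges for illuminance (lux) and color temperature (K):

```python
def lighting_score(lux: float, cct_kelvin: float) -> float:
    """Score lighting quality 0-100 against rough daytime targets.

    Hypothetical heuristic: 300-500 lux and 4000-6500 K are treated
    as ideal working-light bands; deviations are penalized linearly.
    """
    def range_score(value, low, high, tolerance):
        if low <= value <= high:
            return 1.0
        # Linear falloff outside the ideal band.
        distance = (low - value) if value < low else (value - high)
        return max(0.0, 1.0 - distance / tolerance)

    lux_score = range_score(lux, 300, 500, 300)
    cct_score = range_score(cct_kelvin, 4000, 6500, 2500)
    return round(100 * (0.6 * lux_score + 0.4 * cct_score), 1)

print(lighting_score(450, 5000))  # inside both bands -> 100.0
print(lighting_score(120, 2700))  # dim, warm evening light -> 43.2
```

A real model would also fold in flicker frequency and glare, but even a linear band heuristic like this is enough to rank rooms relative to each other.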


How We Built It

We combined computer vision, sensor data, and AI analysis to make lighting evaluation fast and mobile-friendly.

Build Steps:

  1. Input Layer: The user uploads or captures a room image (standard or panoramic). The system automatically detects whether it’s a 360° view and extracts spatial cues such as room boundaries, window locations, and light fixture positions.
  2. Processing Layer: Computer vision models estimate geometry (width × length × height), identify room type (e.g., living room, kitchen), and segment light sources, surfaces, and window regions. When available, the phone’s ambient light sensor refines brightness and color temperature readings.
  3. Analysis Layer: The image and sensor data are passed to an AI model (Claude) using a structured JSON prompt that enforces consistent fields like roomType, windows, currentLighting, and recommendations. Claude interprets lighting quality, detects issues (e.g., “uneven lighting” or “too dark”), and generates optimized fixture and window suggestions with detailed parameters like lumens and color temperature.
  4. Output Layer: The output can be rendered into an interactive dashboard on our app, showing lighting scores, design recommendations, estimated improvement costs, and energy efficiency ratings. Personalized recommendations can include “increase daylight exposure,” “reduce glare,” or “adjust smart bulb temperature to 4000 K.”
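The structured-prompt step above can be sketched as follows. The field names (`roomType`, `windows`, `currentLighting`, `recommendations`) come from our prompt spec; the exact nesting, the sample payload, and the validation logic are illustrative assumptions rather than the shipped schema:

```python
import json

# Fields the structured prompt asks the model to return. Names match
# our prompt spec; the nesting below is illustrative, not the real schema.
REQUIRED_FIELDS = {"roomType", "windows", "currentLighting", "recommendations"}

def parse_analysis(raw: str) -> dict:
    """Parse and sanity-check the model's JSON analysis response."""
    data = json.loads(raw)
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"response missing fields: {sorted(missing)}")
    if not isinstance(data["recommendations"], list):
        raise ValueError("recommendations must be a list")
    return data

# A made-up example response, shaped like what the prompt enforces.
sample = json.dumps({
    "roomType": "living room",
    "windows": 2,
    "currentLighting": {"lux": 180, "colorTemperature": 3000},
    "recommendations": [
        {"action": "add fixture", "lumens": 800, "colorTemperature": 4000}
    ],
})
result = parse_analysis(sample)
print(result["roomType"])  # living room
```

Validating the response before rendering the dashboard keeps one malformed model reply from breaking the output layer.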

Image capture and preprocessing run on-device for privacy; only the structured analysis payload is sent to the AI model. Pro Mode doubles color accuracy through calibration and sensor fusion.


Challenges We Faced

  • Achieving consistent readings across different phone sensors required complex normalization.
  • Testing the model in varied environments to validate results took time.
  • Designing a simple interface meant translating technical results into clear, actionable guidance.
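For the sensor-normalization challenge, here is a minimal sketch of the idea, assuming a per-device linear calibration fitted against a reference lux meter. The device names and gain/offset values are placeholders, not our measured calibration data:

```python
# Per-device linear calibration: calibrated_lux = gain * raw + offset.
# Gains and offsets below are made-up placeholders; in practice they
# come from comparing each phone's ambient-light sensor to a reference
# lux meter under controlled lighting.
CALIBRATION = {
    "pixel_8":   {"gain": 1.12, "offset": -5.0},
    "iphone_15": {"gain": 0.95, "offset": 2.0},
}

def normalize_lux(device: str, raw_lux: float) -> float:
    """Map a raw sensor reading to an estimated true lux value."""
    cal = CALIBRATION.get(device, {"gain": 1.0, "offset": 0.0})
    return round(max(0.0, cal["gain"] * raw_lux + cal["offset"]), 1)

print(normalize_lux("pixel_8", 300))  # 1.12 * 300 - 5 -> 331.0
print(normalize_lux("unknown", 300))  # falls back to identity -> 300.0
```

A linear model is usually a good first approximation; non-linear sensor response near the extremes would need a lookup table or piecewise fit.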

Despite these challenges, we produced a working prototype that scans a room, generates metrics, and gives actionable insights in under a minute.


Future Vision

We plan to integrate with smart lighting ecosystems such as Philips Hue, Apple Home, and Google Home for adaptive closed-loop control. This will allow lights to automatically adjust based on real-time feedback.

We also plan to expand LumoSpace Pro for architects, designers, and realtors, adding advanced analytics and dashboards for tracking long-term lighting health.


In short: LumoSpace translates neuroscience and environmental science into simple, measurable insights. Better light means better lives.
