Inspiration
We were inspired by the daily struggle for sustained focus and mental well-being in the remote work era. Traditional desktop assistants are passive; we wanted to create a system that is proactive, contextual, and highly adaptive.
What it does
Lumora is a smart, adaptive desk assistant that leverages real-time facial emotion data to automatically manage a user’s environment. It uses a camera and a machine learning model (or a simulation) to continuously detect the user's dominant emotion (e.g., focused, stressed, tired). From this signal, it translates the emotion into a detailed, structured action plan and offers insight into mood patterns along with personalized productivity recommendations.
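The emotion-to-plan step can be sketched as a simple lookup. This is a hypothetical illustration; the function name, emotions, and plan fields are our assumptions, not Lumora's actual schema:

```python
# Hypothetical sketch: map a detected dominant emotion to a structured
# action plan. Names and fields are illustrative, not Lumora's real API.
ACTION_PLANS = {
    "stressed": {
        "music": "calm playlist",
        "lighting": "warm, dimmed",
        "suggestion": "Take a 5-minute breathing break.",
    },
    "tired": {
        "music": "upbeat playlist",
        "lighting": "bright, cool",
        "suggestion": "Stand up and stretch.",
    },
    "focused": {
        "music": "instrumental playlist",
        "lighting": "neutral",
        "suggestion": "Enable do-not-disturb mode.",
    },
}

def build_action_plan(emotion: str) -> dict:
    """Return the structured plan for an emotion, defaulting to 'focused'."""
    return ACTION_PLANS.get(emotion, ACTION_PLANS["focused"])
```

In the full system this mapping would be produced by the MCP layer rather than a static table, but the returned structure is what the UI consumes.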
How we built it
The system uses a Model Context Protocol (MCP) architecture, separating the logic (Flask/FastMCP API) from the presentation (React/TypeScript UI) and the data collection (Python client). This design lets the Python emotion tracker call the central MCP API, which returns a detailed action plan for the React client to display and eventually execute as system-level commands.
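The tracker→API→UI round trip described above can be sketched with plain JSON payloads. This is a stdlib-only mock of the flow, assuming a request/response shape we made up; the real system routes these messages over HTTP via Flask/FastMCP:

```python
# Hedged sketch of the client -> MCP API -> UI flow. The payload shape
# ("emotion" in, "actions" out) is an assumption for illustration only.
import json

def mcp_handle(request_body: str) -> str:
    """Server side: turn a detected emotion into a structured action plan."""
    emotion = json.loads(request_body).get("emotion", "focused")
    plan = {
        "emotion": emotion,
        "actions": [
            {"type": "music", "value": "calm playlist"},
            {"type": "lighting", "value": "dim"},
        ],
    }
    return json.dumps(plan)

def tracker_report(emotion: str) -> dict:
    """Client side: the call the Python emotion tracker would make."""
    response = mcp_handle(json.dumps({"emotion": emotion}))
    return json.loads(response)  # the React UI would render this plan
```

Because the tracker and UI only share this JSON contract, either side can be swapped out (e.g., simulated vs. real emotion detection) without touching the other.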
Challenges we ran into
Our biggest hurdles involved establishing robust, API-based communication between the separate Python backend and the React/TypeScript frontend.
Accomplishments that we're proud of
We successfully built a highly modular, decoupled architecture where core intelligence is separate from the UI, allowing for rapid feature iteration without breaking the frontend. We achieved our primary goal of demonstrating adaptive environmental control by translating an emotion into a structured command executed across different software components.
What we learned
We gained valuable expertise in orchestrating diverse technologies (Python, Node/React, Flask) within an API-first framework, realizing that the difficulty lies in coordinating these microservices, not just building them. This reinforced the power of a centralized API layer (FastMCP) for defining system behavior and integrating disparate components efficiently.
What's next for Lumora
We plan to replace the simulated emotion detection with a real ML model and integrate official Spotify and smart device APIs to fully automate the environmental changes based on the received action plan. We will also expand the MCP's intelligence by including contextual data like mouse/keyboard activity to make the system's suggestions even smarter.
Built With
- fastmcp
- flask
- python
- sqlalchemy
