Inspiration

Modern work rarely happens in ideal spaces. People study, plan, and create at kitchen tables, in dorm rooms, cafés, and shared environments: places full of visual clutter and constant interruption. Yet most productivity tools exist only inside screens, detached from the physical world where focus is actually challenged. We envisioned a system that doesn't replace your surroundings with a virtual replica, but enhances them. Drawing on research in spatial cognition, mixed-reality interface design, and environmental psychology, we set out to build something that uses the room itself as a cognitive scaffold. That idea became RoomMind: an adaptive MR workspace that transforms any environment into an AI-supported productivity cockpit that evolves with the user's workflow and attention patterns.

What It Does

RoomMind turns your real space into an intelligent, spatially organized productivity environment using passthrough, hand tracking, and AI-driven adaptation. After scanning the room, the system identifies walls, desks, surfaces, and open areas, then projects personalized "Productivity Zones" onto them. These zones can become task boards, timelines, ideation spheres, or focus surfaces anchored directly to the room's geometry. Interaction is entirely hand-first: pinches, swipes, pulls, and air-drawing gestures let users manipulate content as if it were physically present. When deep focus is needed, RoomMind gently mutes real-world distractions by masking clutter, highlighting essential information, and elevating priority items into view. Its Adaptive Focus Engine monitors subtle behavioral cues; when attention drifts, it introduces soft visual nudges to guide the user back. The result is a calm, immersive, distraction-resistant workspace built around natural spatial behavior.

How We Built It

RoomMind was developed in Unity using the Meta XR SDK (v81+), integrating Passthrough, Hand Tracking, Scene Understanding, and Spatial Anchors. Passthrough AI classifies surfaces and identifies usable areas, while Scene Understanding constructs the geometry needed for stable spatial UI anchoring. Our custom gesture interpreter maps pinches, swipes, grabs, and air-draw strokes, applying smoothing filters, contextual logic, and cooldowns for reliability. Lightweight mesh panels and optimized shaders keep floating UI elements running at 60–72 FPS. The Adaptive Focus Engine runs on-device machine-learning models to detect attention shifts and dynamically adjust overlays, and Spatial Anchors preserve workspace continuity across sessions. Throughout development, we relied on rapid prototyping and iterative profiling to keep the experience comfortable on Quest 3 hardware.
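As a rough illustration of the gesture-interpreter approach described above, here is a minimal sketch of pinch detection with exponential smoothing, hysteresis, and a trigger cooldown. It assumes the Meta XR Core SDK's OVRHand component; the thresholds, field names, and events are illustrative placeholders rather than RoomMind's actual implementation.

```csharp
using UnityEngine;

// Minimal sketch of pinch detection with smoothing, hysteresis, and a cooldown.
// Assumes the Meta XR Core SDK's OVRHand component; values are illustrative.
public class PinchGestureInterpreter : MonoBehaviour
{
    [SerializeField] private OVRHand hand;                    // tracked hand from the camera rig
    [SerializeField] private float pinchOnThreshold = 0.85f;  // strength needed to start a pinch
    [SerializeField] private float pinchOffThreshold = 0.55f; // lower release threshold (hysteresis)
    [SerializeField] private float smoothing = 12f;           // higher = snappier smoothing
    [SerializeField] private float cooldownSeconds = 0.25f;   // debounce between pinch triggers

    public event System.Action PinchStarted;
    public event System.Action PinchEnded;

    private float smoothedStrength;
    private float lastTriggerTime = -10f;
    private bool isPinching;

    private void Update()
    {
        if (hand == null || !hand.IsTracked) return;

        // Raw per-frame pinch strength is noisy; smooth it with a frame-rate-independent EMA.
        float raw = hand.GetFingerPinchStrength(OVRHand.HandFinger.Index);
        smoothedStrength = Mathf.Lerp(smoothedStrength, raw,
                                      1f - Mathf.Exp(-smoothing * Time.deltaTime));

        if (!isPinching && smoothedStrength > pinchOnThreshold &&
            Time.time - lastTriggerTime > cooldownSeconds)
        {
            isPinching = true;
            lastTriggerTime = Time.time;
            PinchStarted?.Invoke();   // e.g. grab the panel under the index fingertip
        }
        else if (isPinching && smoothedStrength < pinchOffThreshold)
        {
            isPinching = false;
            PinchEnded?.Invoke();     // e.g. release the held element
        }
    }
}
```

The same pattern of smoothing, hysteresis, and debouncing extends naturally to swipes and air-draw strokes by filtering fingertip velocity instead of pinch strength.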
Challenges

Balancing passthrough clarity with MR overlays required extensive fine-tuning of masking and contrast. Gesture recognition initially struggled with fast movements and varied hand postures, prompting the smoothing filters and contextual gesture logic described above. Scene Understanding occasionally misclassified surfaces in dim lighting, leading us to add fallback rules (a simplified example is sketched below) and user-driven correction gestures. Maintaining framerate with many spatial elements also demanded aggressive batching and LOD optimization.
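Below is a simplified sketch of the kind of geometric fallback rule mentioned above: when semantic surface classification is unreliable (for example in dim lighting), a plane's orientation and height alone can still yield a usable guess. The class names, labels, and thresholds are hypothetical and only meant to convey the idea.

```csharp
using UnityEngine;

// Hypothetical fallback classifier: guess what a detected plane is from its
// orientation and height when semantic classification is low-confidence.
public enum SurfaceGuess { Wall, DeskOrTable, Floor, Unknown }

public static class SurfaceFallbackClassifier
{
    public static SurfaceGuess Classify(Vector3 planeNormal, float planeHeightMeters)
    {
        // How closely the plane's normal points straight up (1 = horizontal surface, 0 = vertical).
        float upDot = Vector3.Dot(planeNormal.normalized, Vector3.up);

        // Near-vertical planes are treated as walls.
        if (Mathf.Abs(upDot) < 0.3f) return SurfaceGuess.Wall;

        // Horizontal, upward-facing planes: floor if near ground level, otherwise a work surface.
        if (upDot > 0.7f)
            return planeHeightMeters < 0.3f ? SurfaceGuess.Floor : SurfaceGuess.DeskOrTable;

        return SurfaceGuess.Unknown;
    }
}
```

A user-driven correction gesture can then simply overwrite the guessed label for that plane.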
Accomplishments

We successfully created a spatial workspace that adapts to any room, supports natural controller-free interaction, and offers a focus mode that reduces distractions without isolating users. The experience demonstrates mixed reality's value beyond entertainment and maintains polished, stable performance on consumer hardware.

What We Learned & What's Next

We learned that mixed reality becomes most powerful when digital tools merge seamlessly with physical environments. Looking ahead, we plan to add multi-user collaboration, AI-driven task automation, environment-based layout profiles, deep productivity-tool integration, and advanced biofeedback focus cues, continuing RoomMind's evolution into a fully adaptive spatial productivity ecosystem.

Built With
- unity
- meta-xr-sdk
- meta-quest-3