Inspiration
AI has become powerful, but the interface hasn’t evolved at the same pace. We still interact with intelligence primarily through text boxes, buttons, and menus.
Yet many high-performance professions — music production, aviation, robotics, film editing — rely on tactile control surfaces. Physical modulation enables faster feedback loops and deeper cognitive engagement.
We asked a simple question: What if AI workflows were conducted the way musicians conduct orchestras — through embodied, continuous control?
SANCTUM FLOW emerged from that gap between digital intelligence and physical interaction.
What it does
SANCTUM FLOW turns Logitech MX Creative Console and MX Master 4 into a hardware-native AI control plane.
Instead of typing long prompts and navigating menus, users physically modulate AI workflows:
– Rotate a dial to adjust reasoning depth or creativity
– Press a key to trigger multi-agent tasks
– Use the Actions Ring to switch between modes like Analyze, Generate, Simulate, or Deploy
– Scrub through an AI reasoning timeline with tactile control
The result is faster iteration, interruptible inference, and embodied interaction with intelligent systems.
AI becomes something you conduct — not something you wait on.
How we built it
We built SANCTUM FLOW using the Logitech Actions SDK as the hardware integration layer.
Device inputs (dial rotation, key press, gesture events) are captured and routed to a local orchestration engine that:
– Maintains session state
– Maps physical inputs to AI parameters
– Manages workflow routing
– Interfaces with cloud inference providers
We created a model abstraction layer so parameters like temperature, reasoning depth, and simulation horizon can be adjusted continuously via rotary input.
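As a rough illustration of that abstraction layer, here is a minimal sketch of binding a rotary dial to one continuous parameter. The class name, step size, and ranges are illustrative assumptions, not the actual SDK or engine code:

```python
# Hypothetical sketch: map relative dial ticks onto a continuous model
# parameter (e.g. temperature), clamped to a valid range. Names and
# values are illustrative, not taken from the Logitech Actions SDK.

class DialParameter:
    """Bind a rotary dial to one continuous AI parameter."""

    def __init__(self, value, step, lo, hi):
        self.value = value          # current parameter value
        self.step = step            # change applied per dial tick
        self.lo, self.hi = lo, hi   # valid parameter range

    def on_rotate(self, ticks):
        # ticks > 0 for clockwise, < 0 for counter-clockwise rotation
        self.value = min(self.hi, max(self.lo, self.value + ticks * self.step))
        return self.value

temperature = DialParameter(value=0.7, step=0.05, lo=0.0, hi=2.0)
temperature.on_rotate(+4)    # nudge temperature up by four ticks
temperature.on_rotate(-100)  # large counter-rotation clamps at 0.0
```

The same pattern generalizes to reasoning depth or simulation horizon by swapping the range and step size.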
The system supports modular integrations (Slack, GitHub, Figma, CLI tools) through a workflow router, making it adaptable to different user modes such as Developer, Creative Director, or Data Operator.
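The routing idea can be sketched as a small dispatcher: the Actions Ring selects a mode, and subsequent hardware events flow to that mode's handler. The mode names come from the write-up above; the handlers here are illustrative stand-ins, not the real Slack/GitHub/Figma integrations:

```python
# Hypothetical sketch of the workflow router: a mode (selected via the
# Actions Ring) determines which handler receives hardware events.

class WorkflowRouter:
    def __init__(self):
        self.handlers = {}   # mode name -> callable(event)
        self.mode = None     # currently active mode

    def register(self, mode, handler):
        self.handlers[mode] = handler

    def switch_mode(self, mode):
        if mode not in self.handlers:
            raise KeyError(f"unknown mode: {mode}")
        self.mode = mode

    def dispatch(self, event):
        # Route the event to whichever mode is active right now.
        return self.handlers[self.mode](event)

router = WorkflowRouter()
router.register("Analyze", lambda e: f"analyzing {e}")
router.register("Generate", lambda e: f"generating {e}")
router.switch_mode("Analyze")
router.dispatch("key_press")  # handled by the Analyze handler
```

Registering user modes (Developer, Creative Director, Data Operator) is then just a different handler table.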
Challenges we ran into
The biggest challenge was translating analog hardware movement into meaningful AI parameter control.
Rotary inputs are continuous. AI models operate in discrete computational steps. We had to:
– Smooth input signals
– Prevent over-triggering inference
– Maintain low-latency feedback
– Keep state consistent across workflow mode switches
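The first two items can be sketched together: an exponential moving average smooths jittery dial deltas, and a minimum interval between inference calls prevents over-triggering. This is a simplified model of the approach, with illustrative constants (the `alpha` and `min_interval` values are assumptions):

```python
import time

# Hypothetical sketch: smooth continuous dial input with an exponential
# moving average, and rate-limit how often the smoothed value is allowed
# to trigger an inference call.

class InputPipeline:
    def __init__(self, alpha=0.3, min_interval=0.25, now=time.monotonic):
        self.alpha = alpha                # EMA smoothing factor (0..1)
        self.min_interval = min_interval  # seconds between inference calls
        self.smoothed = 0.0
        self._last_fire = float("-inf")
        self._now = now                   # injectable clock for testing

    def on_input(self, raw):
        # Smooth the raw dial delta so jitter doesn't thrash the model.
        self.smoothed = self.alpha * raw + (1 - self.alpha) * self.smoothed
        t = self._now()
        if t - self._last_fire >= self.min_interval:
            self._last_fire = t
            return self.smoothed  # value forwarded to inference
        return None               # suppressed: too soon since last call
```

Suppressed values still update the smoothed state, so the next permitted inference call reflects everything the user did in between.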
Another challenge was designing interaction patterns that feel intuitive rather than gimmicky. Hardware mapping had to accelerate real workflows — not just demonstrate novelty.
Accomplishments that we're proud of
We successfully transformed Logitech hardware into a live AI orchestration surface rather than a shortcut launcher.
Key accomplishments:
– Real-time parameter modulation via physical dial
– Mode switching through Actions Ring gestures
– Interruptible AI workflow control
– A reasoning timeline that can be scrubbed and branched
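The scrub-and-branch idea maps naturally onto a tree of reasoning steps: rewinding to an earlier step and continuing creates a fork rather than overwriting history. A minimal sketch of that data structure (the class and method names are illustrative assumptions):

```python
# Hypothetical sketch of a scrubbable, branchable reasoning timeline:
# every step records its parent, so appending after a rewind forks a
# new branch instead of destroying the old one.

class ReasoningTimeline:
    def __init__(self):
        self.steps = []    # flat store of all recorded steps
        self.parents = []  # parent index per step, -1 for a root step
        self.head = -1     # current position (what the dial scrubs)

    def append(self, step):
        self.steps.append(step)
        self.parents.append(self.head)
        self.head = len(self.steps) - 1
        return self.head

    def scrub_to(self, index):
        self.head = index  # rewind / fast-forward via the dial

    def branch_path(self):
        # Walk parent links from head back to the root: the active branch.
        path, i = [], self.head
        while i != -1:
            path.append(self.steps[i])
            i = self.parents[i]
        return list(reversed(path))

tl = ReasoningTimeline()
tl.append("plan")
tl.append("draft A")
tl.scrub_to(0)        # rewind to "plan"
tl.append("draft B")  # fork: new branch from step 0
```

After the fork, `branch_path()` returns the active branch (`["plan", "draft B"]`) while `"draft A"` remains reachable by scrubbing back to it.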
Most importantly, the experience feels hardware-native — not layered on top.
What we learned
We learned that AI interaction speed is not just about model latency — it’s about interface friction.
Physical control surfaces reduce cognitive switching costs. Continuous modulation enables experimentation that would be cumbersome in text-based systems.
We also learned that trust increases when users can interrupt, rewind, and fork AI reasoning. Control builds confidence.
What's next for SANCTUM FLOW
Next steps include:
– Multi-agent orchestration control via hardware
– Enterprise security layers and session governance
– Deeper integrations with development and creative toolchains
– Expanded model adapters (local + cloud hybrid)
– User testing across engineering, design, and data teams
Long term, we see SANCTUM FLOW evolving into a universal physical AI interface layer — enabling Logitech devices to become the standard control surface for the AI-native workforce.