Inspiration

By 2026, the workplace has fundamentally changed. We are no longer just typing prompts into chatbots; we are managing entire teams of autonomous agents built with Multi-Agent System frameworks like LangGraph and AutoGen. You might have one agent handling customer refunds, another reviewing code, and a third drafting outreach emails.

The problem? Managing a digital workforce through 20 different browser tabs and Slack channels is chaotic and exhausting. You lose visibility, and workflow bottlenecks occur because agents are constantly waiting for human approval. We realized that to effectively manage an AI workforce, you need what airline pilots have: a physical cockpit. You need dedicated hardware to steer, review, and orchestrate your agents day-to-day without breaking your flow state.

What it does

NEXUS is an architectural framework and interactive simulation that envisions transforming the Logitech MX Creative Console and MX Master 4 into an everyday control plane for your AI ecosystem. It is designed to pull your agents' tasks out of the cloud and put them under your fingertips.

  • Daily Triage & Arbitration: Instead of generic macros, we designed the 9-key OLED array to act as a dynamic inbox for your agents. One physical button press allows you to 'Sign & Approve' a drafted email or 'Reject & Fix' a PR.
  • Continuous Autonomy Throttling: We mapped the Contextual Dial to act as an analog steering wheel. Depending on the day's workload, you physically turn the dial to adjust an agent's freedom from 0% (Strictly Supervised) to 100% (Fully Autonomous).
  • Ambient Workspace Awareness: We designed a workflow where if an agent escalates a critical task, the MX Master 4 uses the Actions SDK to force the MagSpeed scroll wheel into a heavy "Ratchet Lock." You physically feel the notification in your hand.
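The dial and key semantics above can be sketched in a few lines of Python. This is an illustrative mock of our planned orchestrator logic, not the Logitech API; all names (autonomy bands, the 5% detent step, the key map) are assumptions for demonstration.

```python
# Hypothetical sketch: Contextual Dial ticks throttle an agent's autonomy,
# and 9-key presses map to triage actions. All names are illustrative.

from dataclasses import dataclass

# Autonomy bands the dial sweeps through (0% = strictly supervised).
AUTONOMY_BANDS = [
    (0, 24, "strictly_supervised"),   # every action needs approval
    (25, 74, "review_required"),      # risky actions pause for a human
    (75, 100, "fully_autonomous"),    # agent acts without check-ins
]

@dataclass
class AgentPolicy:
    autonomy_pct: int = 0

    def on_dial_turn(self, ticks: int) -> str:
        """Each dial detent nudges autonomy by 5%, clamped to 0-100."""
        self.autonomy_pct = max(0, min(100, self.autonomy_pct + 5 * ticks))
        return self.band()

    def band(self) -> str:
        for lo, hi, name in AUTONOMY_BANDS:
            if lo <= self.autonomy_pct <= hi:
                return name
        return "strictly_supervised"

# Triage actions bound to the 9-key OLED grid (key index -> action).
KEYMAP = {0: "sign_and_approve", 1: "reject_and_fix", 2: "escalate"}

def on_key_press(key_index: int) -> str:
    return KEYMAP.get(key_index, "noop")
```

The point of the analog mapping is that autonomy becomes a continuous, physical setting rather than a buried config flag.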

How we built it

For this phase of the hackathon, we built a comprehensive, high-fidelity interactive architecture simulation using HTML, Tailwind CSS, and JavaScript. This simulation visually and logically demonstrates our exact UX, data flow, and hardware mapping without requiring the judges to run a local backend.

Alongside the working simulation, we engineered the complete production architecture blueprint:

  1. The Planned Brain: A Python-based orchestrator that manages the background AI agents and exposes a local WebSocket server.
  2. The Planned Bridge: A custom C# Logitech Actions SDK plugin that connects to that WebSocket.
  3. The Planned UI: Logic that dynamically pushes Base64-encoded images to the OLED array, turning it into a Dynamic State Machine.
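To make the Brain-to-Bridge handoff concrete, here is a minimal sketch of the kind of JSON text frame the planned Python orchestrator would push over its local WebSocket. The schema (field names, event types) is our own assumption for the blueprint, not a Logitech-defined format, and the PNG bytes are a placeholder.

```python
# Illustrative sketch of one orchestrator-to-bridge message. The schema
# (event names, fields) is an assumption, not part of the Actions SDK.

import base64
import json

def oled_frame(key_index: int, png_bytes: bytes, label: str) -> str:
    """Encode one OLED key update as a JSON text frame."""
    return json.dumps({
        "event": "oled_update",
        "key": key_index,          # 0-8 on the 9-key grid
        "label": label,            # glanceable state, e.g. "PR: REVIEW"
        "image_b64": base64.b64encode(png_bytes).decode("ascii"),
    })

# The C# bridge would decode it back before drawing to the key's screen:
frame = oled_frame(4, b"\x89PNG-placeholder", "Refund: APPROVE?")
payload = json.loads(frame)
image = base64.b64decode(payload["image_b64"])
```

Keeping the wire format to plain JSON text frames means the same messages drive both the real plugin and our browser simulation.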

Challenges we ran into

Our biggest design challenge was translating the sheer volume of AI data into a 9-key physical grid. An AI agent might generate 100 lines of code or a 5-paragraph email. How do you review that on a small OLED screen? The Pivot: We realized the hardware's strength is Triage, not deep reading. We designed the console to act as the ultimate "glanceable" routing board.
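The triage pivot boils down to a simple transformation, sketched below: instead of rendering an agent's full output, compress each pending task into a short OLED-sized label and fill the nine key slots by priority. The function names and the 16-character label width are illustrative assumptions.

```python
# Minimal sketch of the triage idea: compress verbose agent output into
# nine glanceable slots instead of trying to render it. Names are illustrative.

def glance_label(agent: str, task: str, status: str, width: int = 16) -> str:
    """Reduce a task to a short OLED-sized label, truncated with an ellipsis."""
    text = f"{agent}: {task} [{status}]"
    return text if len(text) <= width else text[: width - 1] + "…"

def routing_board(tasks: list[dict], slots: int = 9) -> list[str]:
    """Fill the 9-key grid with the highest-priority pending tasks."""
    pending = sorted(tasks, key=lambda t: t["priority"])
    labels = [glance_label(t["agent"], t["task"], t["status"])
              for t in pending[:slots]]
    return labels + [""] * (slots - len(labels))  # blank unused keys
```

Deep reading stays on the monitor; the console only answers "what needs me right now?"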

Additionally, simulating the tactile feel of proprietary hardware (like the MagSpeed wheel locking or the dial turning) purely in a web browser required creative UI/UX animations to accurately convey the physical concept to the judges.

Accomplishments that we're proud of

We successfully designed a workflow that moves the MX Creative Console beyond simple "keyboard macro" mapping. By defining "Continuous Hardware Semantics"—like using the dial as a physical risk throttle—we conceptually proved that this hardware can natively manage complex, non-linear enterprise workflows. We are incredibly proud of the interactive simulation that brings this vision to life.

What we learned

We learned that as AI becomes more autonomous, the need for physical, tactile interaction actually increases. People want to physically "feel" in control of their automated systems. The tactile clunk of the MagSpeed wheel or the smooth turn of the dial provides a psychological grounding that digital buttons simply cannot offer.

What's next for Nexus

Our immediate next step is to transition from simulation to a functional prototype. We plan to build out the C# Actions SDK bridge and connect it to a real LangGraph backend, bringing our "Bring Your Own Agent" (BYOA) PaaS to life. Give us the hardware, and we will build the control room.

Built With

  • html
  • tailwind-css
  • javascript