Inspiration
We realized that modern knowledge workers don’t really “switch apps” — they switch mental states:
- Deep Work (writing, coding, designing)
- Analyze (spreadsheets, dashboards, debugging)
- Collaborate (meetings, docs, chat)
Every time we move from one state to another, we pay a “context switching tax”: hunting for windows, re-orienting shortcuts, changing mouse speed, closing/opening notifications. Even a tiny 2–5 second pause, multiplied across hundreds of micro-switches per day, adds up to hours of lost focus.
Yet our hardware (mouse, keyboard, dial) is still mostly app-bound, not **mind-state-bound**.
The question that sparked this project:
“What if Logitech peripherals understood how we were thinking, not just what window we had open?”
That idea became The Cognitive Layer OS: a layer that maps cognitive modes to hardware + OS behavior.
What We Built
We built a cognitive layer that sits between the user and the OS, using:
- A Logitech MX Master 3 mouse
- A Logitech Craft keyboard (especially the dial)
Instead of binding actions to apps, we bind them to modes like:
- Deep Work
- Analyze
- Create
When the user switches modes, three things change instantly:
The Hardware Morphs
- Mouse DPI / speed adjusts per mode (slow and precise for Excel, fast for Photoshop canvases).
- Scroll wheel toggles between ratchet vs. free-spin depending on the task.
- Thumb buttons are mapped to universal cognitive commands, e.g.:
- Summarize
- Find
- Undo
- Capture Insight (e.g., quick note or screenshot to a central “brain”)
The OS Morphs
- Opens / focuses the relevant workspace (e.g., analytics dashboard, design tools, IDE).
- Adjusts notification settings and focus modes.
- Applies mode-specific layouts or workspaces.
The Visual Feedback – “Actions Ring”
- An on-screen “Actions Ring” overlay shows:
- Current cognitive mode
- Live button mappings for the mouse and Craft dial
- This makes the hardware feel alive and “linked” to the user’s mental state.
Example Flow:
Turn the Craft Dial to Analyze
→ Mouse slows down for pixel-perfect cell selection
→ Thumb button becomes Copy / Paste
→ Analytics dashboard pops up, notifications are toned down

Turn the Craft Dial to Create
→ Mouse accelerates for big canvas moves
→ Scroll wheel unlocks for fast zooming/timeline scrubbing
→ Thumb button becomes Brush Size or Next Tool
How We Built It
At a high level, our system has three layers:
Mode Controller (Cognitive Layer)
- A small daemon listens for:
- Craft dial turns / button combos
- Mode change events from our UI
- Maintains the current mode state as a simple FSM:
- DeepWork → Analyze → Create → …
- Exposes a lightweight internal API:
set_mode("Analyze"), get_current_mode(), etc.
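The FSM and its internal API can be sketched in a few lines. This is an illustrative Python sketch, not our actual daemon; the class and method names beyond `set_mode` / `get_current_mode` are assumptions:

```python
# Minimal sketch of the mode controller FSM.
# Names other than set_mode / get_current_mode are illustrative.

MODES = ["DeepWork", "Analyze", "Create"]  # the cycle the dial steps through


class ModeController:
    def __init__(self):
        self._current = "DeepWork"
        self._listeners = []  # hardware + OS layers subscribe here

    def get_current_mode(self) -> str:
        return self._current

    def set_mode(self, mode: str) -> None:
        if mode not in MODES:
            raise ValueError(f"unknown mode: {mode}")
        if mode != self._current:
            self._current = mode
            for fn in self._listeners:
                fn(mode)  # one event fans out to every layer

    def on_mode_change(self, fn) -> None:
        self._listeners.append(fn)

    def next_mode(self) -> None:
        # A Craft dial turn advances one step through the cycle.
        i = MODES.index(self._current)
        self.set_mode(MODES[(i + 1) % len(MODES)])
```

The hardware and OS layers register callbacks with `on_mode_change`, so the controller never needs to know what a profile swap actually does.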
Hardware Integration Layer
- Uses the Logitech SDK to:
- Adjust DPI and scroll behavior dynamically.
- Rebind buttons / Craft dial actions depending on the active mode.
- Treats each device configuration as a profile:
- profile_deep_work, profile_analyze, profile_create
- On mode change, we swap profiles in (almost) O(1) time: one event, full hardware remap.
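Because each profile is just a bundle of settings, the swap is a single lookup. A hedged sketch, with hypothetical values (the real system pushes each field to the device through the Logitech SDK rather than returning a dict):

```python
# Hypothetical profile table; values are illustrative, not tuned.
PROFILES = {
    "DeepWork": {"dpi": 1600, "scroll": "ratchet",   "thumb": "Capture Insight"},
    "Analyze":  {"dpi": 800,  "scroll": "ratchet",   "thumb": "Copy / Paste"},
    "Create":   {"dpi": 2400, "scroll": "free-spin", "thumb": "Brush Size"},
}


def apply_profile(mode: str) -> dict:
    """Swap in a whole device profile on one mode-change event.

    A dict lookup, so effectively O(1): one event, full remap. In the
    real system each field maps to an SDK call (set DPI, toggle the
    ratchet, rebind the thumb button); here we just return the profile.
    """
    return PROFILES[mode]
```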
OS + UX Layer
- Uses OS-level automation (workspaces, app launching, focus modes) to:
- Open toolsets per mode: IDE + terminal for Deep Work, browser + dashboards for Analyze.
- Toggle notification profiles to reduce interruptions.
- Renders the Actions Ring overlay:
- Built as a simple UI that subscribes to mode changes.
- Shows icons / labels for the current mappings so users don’t have to guess.
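The overlay only needs a small payload per mode change, sent over our WebSocket/IPC channel to the UI. A minimal sketch of what that payload might look like (field names are assumptions, not our actual wire format):

```python
import json


def ring_payload(mode: str, mappings: dict) -> str:
    """Serialize the current mode and live button mappings for the
    Actions Ring overlay. Field names here are illustrative; in our
    build this JSON travels over WebSocket/IPC to the Electron UI."""
    return json.dumps({"mode": mode, "mappings": mappings})
```

The overlay subscribes to mode changes and re-renders from this payload, so it never duplicates any mapping logic.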
Internally, we think of each mode as a bundle of primitives:
- Input behavior (DPI, scroll style, button mappings)
- App/workspace layout
- Attention policy (how loud the OS is allowed to be)
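That bundle maps naturally onto a small data structure. An illustrative sketch (field names and values are assumptions):

```python
from dataclasses import dataclass


@dataclass
class ModeBundle:
    """A cognitive mode as a bundle of primitives (illustrative names)."""
    input_behavior: dict    # DPI, scroll style, button mappings
    workspace: list         # apps/windows to open or focus
    attention_policy: str   # how loud the OS is allowed to be


# Example: the Analyze mode as one bundle.
ANALYZE = ModeBundle(
    input_behavior={"dpi": 800, "scroll": "ratchet"},
    workspace=["browser", "dashboard"],
    attention_policy="important-only",
)
```

Adding a new mode is then just declaring another bundle, which is what made later tweaks cheap.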
Challenges We Faced
Mapping “Mental States” to Concrete Events
- “Deep Work” and “Analyze” are fuzzy ideas.
- We had to translate them into very specific bindings:
- What exactly should thumb button 1 do in Analyze vs. Create?
- Which actions are truly “universal” (e.g., Summarize, Search)?
SDK & OS Constraints
- Working within the Logitech SDK and OS automation APIs meant:
- Some settings can’t be changed as fast or as granularly as we’d like.
- Different OSes expose different hooks.
- We had to design a portable abstraction that can degrade gracefully:
- If we can’t fully control X on OS Y, we still provide meaningful mode behavior.
Avoiding “Mode Confusion”
- If we change too much, too fast, users get lost.
- We needed:
- Clear visual feedback (Actions Ring).
- Stable, predictable mappings per mode.
- We iterated on icon design, color coding, and keeping each mode’s behavior internally consistent.
Latency and Flow
- For this to feel like a true cognitive co-pilot, mode switches must be:
- Instant (no lag on hardware changes)
- Predictable
- We optimized the event pipeline so that the path from dial turn to hardware + OS change feels like a single fluid gesture.
What We Learned
Think in modes, not apps.
Designing around mental states changes how you think about shortcuts, defaults, and UX.

Hardware is a canvas for cognition.
The same mouse and keyboard feel completely different when they’re aware of the user’s intent.

Feedback is everything.
Even a small overlay like the Actions Ring dramatically boosts confidence: users feel safe exploring modes because they can “see” the mapping.

Abstractions matter.
By modeling each mode as a bundle (input + workspace + attention), we made it easy to add new modes or tweak existing ones without rewriting everything.
Impact & Next Steps
This project turns Logitech peripherals into a cognitive co-pilot:
- Reduces the friction of task switching.
- Keeps users in their flow state longer.
- Repositions peripherals from “nice accessories” to a cognitive interface layer.
Next steps we’d love to explore:
- Learning user behavior over time to auto-suggest mode changes.
- Sharing / importing mode presets for roles: “Data Analyst,” “Video Editor,” “Developer,” etc.
- Integrating more deeply with OS focus APIs and calendar events
(e.g., auto-switch to Collaborate when a meeting starts).
In short, we started with a mouse, a keyboard, and a dial and ended up asking:
“What if the real OS is the one that understands how you think?”
Built With
- css3
- electron
- html5
- javascript
- logitech
- logitech-mx-master-3
- node.js
- os-level-automation-(powershell-/-applescript)
- python
- react
- typescript
- websocket/ipc