Personal Activity Intelligence (PAI)

AI Command Center Plugin for Logitech Devices

Note: This submission represents the ideation phase. A working prototype is currently in development for the Semi-Finals phase.


Inspiration

Logitech devices are powerful but passive: they execute commands without ever learning from them. The average knowledge worker loses roughly 4 hours per week reorienting after application switches and toggles between applications over 1,200 times a day, yet 81% never use a single keyboard shortcut. The gap between device capability and actual usage is not a hardware problem; it is an intelligence problem. PAI was conceived to close that gap.


What it does

PAI adds an intelligence layer on top of Logitech's Actions SDK, operating across two phases:

Phase 1 — Smart Automation: PAI monitors active applications and user behaviour in real time, then automatically recommends and applies optimal button, dial, and shortcut configurations. It learns repetitive sequences and surfaces one-click macro suggestions tailored to each user's workflow — with zero manual profile switching.
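The macro-suggestion step can be sketched as a frequency count over contiguous action n-grams in the user's event log. This is an illustrative Python sketch; `suggest_macros` and its thresholds are hypothetical placeholders, not part of the Actions SDK:

```python
from collections import Counter

def suggest_macros(actions, min_len=2, max_len=5, min_count=3):
    """Scan an action log for contiguous sequences (n-grams) that
    repeat often enough to be worth binding to a single button."""
    counts = Counter()
    for n in range(min_len, max_len + 1):
        for i in range(len(actions) - n + 1):
            counts[tuple(actions[i:i + n])] += 1
    candidates = [(seq, c) for seq, c in counts.items() if c >= min_count]
    # Prefer longer sequences first, then more frequent ones.
    candidates.sort(key=lambda x: (-len(x[0]), -x[1]))
    return candidates
```

A log in which copy, switch, paste repeats four times would surface that triple as a one-click macro candidate; real detection would also need to handle noise between repetitions.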

Phase 2 — Workflow Second Brain: PAI passively captures content from websites, PDFs, and documents opened during a session, auto-tags each item by application, context, and time, and consolidates everything into a local, searchable knowledge base. Users can query it in natural language — "What did I read about this last week?" — and retrieve answers from their own captured history. All data is stored locally and never transmitted externally.
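The local knowledge base can be prototyped as a plain inverted index with keyword-overlap ranking; nothing leaves the machine. This is a deliberately minimal sketch (real natural-language querying would sit on top of it), and all names here are illustrative:

```python
from collections import defaultdict

def build_index(items):
    """items: dicts with 'title', 'body', 'app', 'captured_at' keys.
    Returns a word -> set-of-item-indices inverted index."""
    index = defaultdict(set)
    for i, item in enumerate(items):
        for word in (item["title"] + " " + item["body"]).lower().split():
            index[word].add(i)
    return index

def search(items, index, query):
    """Rank items by how many query words they contain; a stand-in
    for real natural-language retrieval over the captured history."""
    scores = defaultdict(int)
    for word in query.lower().split():
        for i in index.get(word, ()):
            scores[i] += 1
    ranked = sorted(scores, key=lambda i: -scores[i])
    return [items[i] for i in ranked]
```

Because each item already carries `app` and `captured_at` tags, time- and context-filtered queries ("last week", "in the browser") reduce to a filter over the ranked results.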


How we built it

The plugin is designed around the Logitech Actions SDK, providing native integration with the MX Creative Console, MX Master Series, and all Actions-enabled devices. The architecture follows a four-step loop:

$$\text{Detect} \rightarrow \text{Record} \rightarrow \text{Analyse} \rightarrow \text{Adapt}$$

The AI engine processes the accumulated knowledge base to identify optimisation opportunities and select the most contextually appropriate device configuration. The longer PAI runs, the higher the personalisation fidelity — improvement is compounding, not linear.
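The four-step loop can be sketched as a small driver. Here `detect_fn` and `apply_fn` stand in for the real Actions SDK hooks (active-application detection and profile application), which this ideation-phase sketch does not assume:

```python
from collections import deque

class PaiLoop:
    """Minimal sketch of the Detect -> Record -> Analyse -> Adapt loop.
    detect_fn() returns the active application name; apply_fn(profile)
    pushes a device configuration. Both are placeholder callbacks."""

    def __init__(self, detect_fn, apply_fn, history=500):
        self.detect_fn = detect_fn
        self.apply_fn = apply_fn
        self.log = deque(maxlen=history)  # bounded Record buffer
        self.current = None

    def analyse(self, app):
        # Placeholder policy: one profile per application. A real engine
        # would rank configurations against the accumulated log.
        return f"profile:{app}"

    def tick(self):
        app = self.detect_fn()        # Detect
        self.log.append(app)          # Record
        profile = self.analyse(app)   # Analyse
        if profile != self.current:   # Adapt, silently and only on change
            self.apply_fn(profile)
            self.current = profile
        return profile
```

Applying a profile only when it changes is what keeps adaptation silent: switching back and forth between two applications triggers exactly one reconfiguration per switch, never per tick.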


Challenges we ran into

  • Defining the boundary between passive observation and privacy intrusion, resolved by committing to fully local storage with no external transmission
  • Determining the minimum data threshold required before AI suggestions become meaningfully accurate
  • Designing an adaptive configuration system that applies changes silently without interrupting active workflows
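The minimum-data-threshold challenge can be framed as a simple evidence gate: suggest nothing until a context has enough samples and one configuration clearly dominates. The threshold values below are illustrative, not tuned:

```python
from collections import Counter

def ready_to_suggest(observations, min_samples=20, min_share=0.6):
    """observations: configurations the user chose in one context.
    Return the dominant configuration once there is enough evidence,
    otherwise None (keep observing, make no suggestion yet)."""
    if len(observations) < min_samples:
        return None
    top, count = Counter(observations).most_common(1)[0]
    return top if count / len(observations) >= min_share else None
```

The two-part gate (sample count plus dominance share) is what separates "meaningfully accurate" suggestions from premature ones: plenty of data with no clear winner should still return nothing.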

Accomplishments that we are proud of

  • A coherent two-phase product roadmap that delivers immediate value on day one while compounding utility over time
  • A privacy-first architecture that treats local data sovereignty as a core design constraint, not an afterthought
  • Alignment with the existing Logitech device ecosystem — zero new hardware required across a base of 400M+ users

What we learned

Building intelligence on top of peripheral devices requires rethinking what a device configuration actually means. A profile is no longer a static preset — it is a dynamic, context-aware state that should evolve with the user's behaviour. The shift from manual configuration to learned adaptation is non-trivial in both technical and UX terms.


What's next

  • Develop a functional prototype of Phase 1 Smart Automation using the Actions SDK
  • Validate macro detection accuracy across at least three distinct workflow profiles
  • Build and test the local knowledge base pipeline for Phase 2 content capture
  • Submit a working build for the Semi-Finals evaluation by 1 April 2026
