Inspiration
Every enterprise AI tool I've used follows the same friction loop:
- Open a chat window
- Explain the context
- Paste the content
- Wait for a response
- Manually copy the result back
That’s 5 steps of overhead per interaction.
For someone triaging emails, reviewing contracts, or answering MCQs at scale, that overhead compounds fast.
Quantifying the Problem
Let:
- t_overhead ≈ 45 seconds
- n = 30 tasks/day
T_lost = n × t_overhead = 30 × 45 = 1350 seconds ≈ 22.5 min/day
For a team:
- m = 50 users
T_team = m × T_lost = 50 × 22.5 = 1125 min/day ≈ 18.75 hours/day
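The arithmetic above can be sketched as a quick script (same numbers, same formulas):

```javascript
// Cost model from the numbers above.
const tOverhead = 45;      // t_overhead: seconds of friction per interaction
const tasksPerDay = 30;    // n: tasks per user per day
const users = 50;          // m: team size

const tLostSec = tasksPerDay * tOverhead; // 1350 seconds per user per day
const tLostMin = tLostSec / 60;           // 22.5 min per user per day
const teamMin = users * tLostMin;         // 1125 min per team per day
const teamHours = teamMin / 60;           // 18.75 hours per team per day

console.log({ tLostMin, teamHours });
```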
Core Insight
The real problem is not AI capability.
The problem is interaction overhead.
Goal:
t_overhead → 0
That became Cursivis Enterprise.
What It Does
Cursivis turns your cursor into an Airia-powered enterprise agent.
Interaction Model
- Select → highlight anything
- Trigger → invoke AI instantly
- Review → get contextual output
- Act → execute if needed
Interaction cost: O(1), constant per task regardless of content.
No chat window. No context switching. No copy-paste.
Context-Aware Intelligence
Same trigger, different behavior:
- Email → summarize / reply
- Contract → explain
- Support ticket → triage
- MCQs → batch answer
f(selection) → optimal action
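A minimal sketch of that `f(selection)` dispatch. The real routing happens in the Airia pipeline; the detection heuristics and action names below are purely illustrative:

```javascript
// Hypothetical dispatch table: content type → default action.
const ACTIONS = {
  email: "summarize",
  contract: "explain",
  ticket: "triage",
  mcq: "batch-answer",
};

// Naive content-type detection (illustrative heuristics only).
function detectType(text) {
  if (/^(from|to|subject):/im.test(text)) return "email";
  if (/\bclause|whereas|hereinafter\b/i.test(text)) return "contract";
  if (/^\s*(Q?\d+[.)])\s/m.test(text)) return "mcq";
  return "ticket"; // fall back to support triage
}

function routeAction(selection) {
  return ACTIONS[detectType(selection)];
}
```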
System Architecture
Selection → Companion → Backend → Airia Pipeline → Result → Execution
Airia Agent Backend (Node.js)
Single abstraction layer for all AI calls:
POST https://api.airia.ai/v2/PipelineExecution/{pipelineId}
Behavior = f(pipeline configuration)
No code changes needed — only pipeline updates.
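A sketch of what that abstraction layer might look like. The endpoint is the one above; the auth header name, request body shape, and response handling are assumptions, not the actual Airia contract:

```javascript
// Minimal Airia client sketch. Header name and body shape are
// assumptions; consult the Airia API docs for the real contract.
function buildPipelineRequest(pipelineId, userInput, apiKey) {
  return {
    url: `https://api.airia.ai/v2/PipelineExecution/${pipelineId}`,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "X-API-KEY": apiKey, // assumed header name
      },
      body: JSON.stringify({ userInput }),
    },
  };
}

async function executePipeline(pipelineId, userInput, apiKey) {
  const { url, options } = buildPipelineRequest(pipelineId, userInput, apiKey);
  const res = await fetch(url, options);
  if (!res.ok) throw new Error(`Airia pipeline failed: ${res.status}`);
  return res.json();
}
```

Because all behavior lives in the pipeline configuration, swapping a role or action means editing the pipeline, never this client.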
Windows Companion App (WPF / .NET 8)
- Captures text + screen
- Floating overlay UI
- Voice + trigger input
- Routes to backend
Modes:
- Smart → automatic
- Guided → suggestions + choice
Browser Execution Layer
AI Output → Structured Plan → Browser Execution
- Runs inside real logged-in browser
- No sandbox
- No re-auth
This is what makes it an agent.
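The plan-to-execution step can be sketched like this. The step verbs and plan shape are hypothetical; the real planner/executor contract in the extension may differ:

```javascript
// Hypothetical structured plan executed step-by-step in the live page.
// Plan shape: { actions: [{ verb, selector, text? }, ...] } (assumed).
function executePlan(plan, dom) {
  const log = [];
  for (const step of plan.actions) {
    const el = dom.querySelector(step.selector);
    if (!el) {
      log.push(`skip: ${step.selector} not found`);
      continue;
    }
    if (step.verb === "click") el.click();
    if (step.verb === "type") el.value = step.text;
    log.push(`${step.verb}: ${step.selector}`);
  }
  return log; // audit trail of what actually ran
}
```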
The Airia Integration
Single entry point: airiaClient.js
Prompt Abstraction
[Cursivis Enterprise — legal role]
Action: explain clause
Content type: contract
Selected content:
{selected text}
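The template above can be assembled with a small helper; the field names are taken from the template, the function itself is a sketch:

```javascript
// Builds the structured prompt shown above from its parts.
function buildPrompt({ role, action, contentType, selection }) {
  return [
    `[Cursivis Enterprise — ${role} role]`,
    `Action: ${action}`,
    `Content type: ${contentType}`,
    "Selected content:",
    selection,
  ].join("\n");
}
```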
Why This Works
Context = structured prompt + role injection
- One pipeline
- Multiple roles (legal, support, sales, ops)
- Zero code branching
Each response includes executionId → full traceability
Challenges
1. Schema Constraint
Airia enforces:
{ intent, actions[] }
Fix: encode everything inside allowed fields (output, description)
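A sketch of that encoding. The `output` and `description` field names come from the text above; the idea of tunneling a JSON payload through them is an assumed implementation, not the exact one:

```javascript
// Airia enforces { intent, actions[] }; everything else rides inside
// the allowed fields. Exact encoding rules are assumed here.
function encodeResponse(intent, payload) {
  return {
    intent,
    actions: [
      {
        description: payload.summary ?? "",
        output: JSON.stringify(payload), // full payload tunneled as a string
      },
    ],
  };
}

function decodeResponse(encoded) {
  return JSON.parse(encoded.actions[0].output);
}
```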
2. MCQ Bulk Answering
Problem: fewer answers than questions
Fix:
actions[i] = { label: Q_i, output: a_i } for i = 1…n
where n = number of detected questions
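That fix as a single map, with a placeholder padding step when the model returns fewer answers than questions (the padding text is illustrative):

```javascript
// One action per detected question, so answers can never come up short.
// Missing answers are padded rather than silently dropped.
function mcqActions(questions, answers) {
  return questions.map((q, i) => ({
    label: q,
    output: answers[i] ?? "(no answer returned)",
  }));
}
```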
3. Clipboard Race Condition
Problem: clipboard delay
Fix: 180ms retry
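The actual fix lives in the C# companion app's clipboard handling; the retry pattern itself looks roughly like this generic sketch:

```javascript
// Generic retry-with-delay, mirroring the 180 ms clipboard fix.
// readFn is any flaky read (e.g. the clipboard just after a copy);
// we retry until it yields a non-empty value.
async function readWithRetry(readFn, { delayMs = 180, attempts = 3 } = {}) {
  for (let i = 0; i < attempts; i++) {
    const value = await readFn();
    if (value) return value;
    await new Promise((r) => setTimeout(r, delayMs));
  }
  throw new Error("read failed after retries");
}
```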
4. Browser Planner Bug
Problem: wrong function structure
Fix: correct wrapper
What I Learned
- Pipelines > hardcoded logic
- Constraints → reliability
- UX defines perceived intelligence
- Removing friction = real agent feel
Built With
- Airia — pipeline execution, routing, planning
- Node.js + Express — backend
- WPF / .NET 8 — companion app
- Playwright — fallback automation
- Chromium Extension (MV3) — browser execution
- C# Named Pipe IPC — communication