About the project

FrictionLens started from a simple question: why do we only judge code quality after the code is already written?

When developers get stuck, tired, or start thrashing inside the same method, the warning signs usually show up in behavior first: repeated edits, heavy undo/redo, jumping across files, or a single function ballooning under pressure. We wanted to build a tool that catches those signals early and turns them into something actionable inside the IDE.

FrictionLens is an IntelliJ plugin that observes the coding session in real time and translates it into live engineering signals such as friction, fatigue, focus, style fit, and structural anomaly risk. Instead of only saying “this code is bad,” it tries to answer a more useful question: what is happening during implementation that is causing the code to drift?

The plugin combines two layers:

  1. Local behavioral heuristics that watch developer workflow patterns like repeated edits, file switching, edit cadence, churn, and undo activity.
  2. Code-structure analysis that detects risky patterns such as very long methods, deep nesting, and hotspots that deviate from the local baseline.
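To make the second layer concrete, here is a minimal sketch of a structural check. The real plugin presumably works on IntelliJ's PSI tree; this version just scans braces in raw source text, and the thresholds are illustrative, not FrictionLens's actual tuning.

```kotlin
// Simplified structural-risk check: flag method bodies that are too long
// or too deeply nested. Brace counting stands in for real AST analysis.
data class StructuralRisk(val lines: Int, val maxDepth: Int, val risky: Boolean)

fun analyzeMethodBody(body: String, maxLines: Int = 40, maxNesting: Int = 4): StructuralRisk {
    val lines = body.lines().count { it.isNotBlank() }
    var depth = 0
    var maxDepth = 0
    for (ch in body) {
        when (ch) {
            '{' -> { depth++; maxDepth = maxOf(maxDepth, depth) }
            '}' -> depth--
        }
    }
    return StructuralRisk(lines, maxDepth, risky = lines > maxLines || maxDepth > maxNesting)
}
```

The "deviates from the local baseline" part would replace the fixed thresholds with per-project statistics.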

We also added optional cloud logging with MongoDB Atlas so sessions can be analyzed over time instead of only in the moment. On top of that, we built a reporting pipeline that turns the collected data into both a Markdown summary and a visually rich HTML dashboard. This makes the project useful not only as an IDE assistant, but also as a way to understand coding patterns across a session and present them clearly.

To push the project further, we integrated the Google Gemini API as an optional AI layer. Gemini can interpret the current state of the session and generate higher-level suggestions, watch-outs, and next steps on top of the local metrics. That let us combine deterministic telemetry with AI-generated guidance instead of relying on AI alone.
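The "deterministic telemetry plus AI guidance" split can be sketched as a prompt-assembly step: the local metrics are computed first, then folded into the request sent to the model. `SessionSnapshot` and its fields are illustrative names, and the actual Gemini request/response handling is omitted here.

```kotlin
// Sketch: fold locally computed session metrics into a coaching prompt,
// so the AI layer interprets real telemetry instead of guessing.
data class SessionSnapshot(
    val frictionScore: Double,
    val fatigueScore: Double,
    val hotspots: List<String>
)

fun buildCoachingPrompt(s: SessionSnapshot): String = buildString {
    appendLine("You are a coding-session coach. Based on these live signals,")
    appendLine("suggest one next step and one watch-out.")
    appendLine("Friction: ${s.frictionScore}, Fatigue: ${s.fatigueScore}")
    if (s.hotspots.isNotEmpty()) {
        appendLine("Hotspots: " + s.hotspots.joinToString(", "))
    }
}
```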

How we built it

We built the plugin in Kotlin on top of the IntelliJ Platform SDK. The UI is rendered directly inside a custom tool window, where the dashboard shows current signals, hotspots, explanations, and recommendations. The local analysis engine aggregates editor events and structural metrics into readable session-level scores.
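As one example of what "aggregates editor events into session-level scores" can look like, here is a toy friction score over a window of events. The event kinds and weights are illustrative assumptions, not the plugin's real model.

```kotlin
// Toy aggregation: blend the undo/redo ratio and the file-switch ratio
// of a recent event window into a single friction score in [0, 1].
enum class EditorEvent { EDIT, UNDO, REDO, FILE_SWITCH }

fun frictionScore(events: List<EditorEvent>): Double {
    if (events.isEmpty()) return 0.0
    val undoRatio = events.count { it == EditorEvent.UNDO || it == EditorEvent.REDO }
        .toDouble() / events.size
    val switchRatio = events.count { it == EditorEvent.FILE_SWITCH }
        .toDouble() / events.size
    // Weighted blend, clamped so downstream UI can treat it as a percentage.
    return (0.7 * undoRatio + 0.3 * switchRatio).coerceIn(0.0, 1.0)
}
```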

For persistence, we supported both local CSV logging and MongoDB Atlas logging. We then created a separate Python-based reporting tool that pulls documents from MongoDB, computes aggregate statistics, and generates a polished frontend dashboard for presentation and analysis.
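The local CSV path is simple enough to sketch end to end. The column layout below is an assumption for illustration; the plugin's actual schema may differ.

```kotlin
// Minimal append-only CSV logger for session metrics. Writes a header
// once, then one row per metric sample.
import java.io.File

data class MetricRow(val timestampMs: Long, val metric: String, val value: Double)

fun appendMetrics(file: File, rows: List<MetricRow>) {
    val needsHeader = !file.exists() || file.length() == 0L
    file.appendText(buildString {
        if (needsHeader) appendLine("timestamp_ms,metric,value")
        rows.forEach { appendLine("${it.timestampMs},${it.metric},${it.value}") }
    })
}
```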

Challenges we faced

One of the biggest challenges was signal design. It is easy to collect raw activity, but much harder to transform that activity into metrics that are interpretable and not noisy. We had to tune the plugin so it could distinguish between normal iteration and actual implementation friction.
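One standard tactic for this kind of tuning is hysteresis: the friction flag only flips after the signal stays past a threshold for several consecutive samples, so a brief burst of edits doesn't trip it. This is a generic sketch of that idea, not FrictionLens's actual implementation; thresholds and window sizes are made up.

```kotlin
// Hysteresis gate: requires `requiredSamples` consecutive threshold
// crossings before toggling, filtering out normal iteration noise.
class FrictionGate(
    private val onThreshold: Double = 0.6,
    private val offThreshold: Double = 0.4,
    private val requiredSamples: Int = 3
) {
    private var streak = 0
    var active = false
        private set

    fun update(score: Double): Boolean {
        val crossing = if (active) score < offThreshold else score > onThreshold
        streak = if (crossing) streak + 1 else 0
        if (streak >= requiredSamples) {
            active = !active
            streak = 0
        }
        return active
    }
}
```

The gap between `onThreshold` and `offThreshold` is what keeps the indicator from flickering when a score hovers near a single cutoff.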

Another challenge was UX. Because the plugin lives inside the IDE, it must feel helpful without becoming distracting. We spent time redesigning the interface so it communicates important information clearly, with richer visuals, better grouping, and a cleaner hierarchy of metrics.

We also had to solve practical engineering issues:

  • managing IntelliJ plugin lifecycle and tool window updates
  • keeping telemetry optional and understandable
  • evolving the MongoDB document schema to support better analytics
  • making AI suggestions useful without forcing them into the workflow
  • generating a dashboard that is presentation-friendly and still grounded in real collected data

What we learned

We learned that developer friction is surprisingly measurable. Before bad code becomes obvious, there are already detectable patterns in the implementation process. We also learned that AI works best here as a second layer: the most reliable foundation comes from concrete behavioral and structural signals, while AI helps interpret those signals in a more human way.

Most importantly, we learned that there is a real opportunity to build tools that support developers during coding, not just after the fact through linting or review.

What’s next

We want to expand FrictionLens with:

  • stronger per-language structural analysis
  • better personalization of “style fit” and fatigue baselines
  • richer time-series analytics in the reporting dashboard
  • team-level comparisons and trend views
  • more targeted AI coaching based on repeated hotspot patterns

Built With

  • Kotlin
  • IntelliJ Platform SDK
  • MongoDB Atlas
  • Python
  • Google Gemini API
