Viewpilot
Inspiration
I built Viewpilot because raw datasets are powerful, but the path from a CSV to something decision-makers can actually use is still too manual. Too often, people open a spreadsheet, scroll through thousands of rows, build charts by hand, and lose momentum before they reach a real insight. I wanted to create a project that turns that painful first hour of analysis into a much faster, more interactive workflow. The goal was simple: upload a dataset, generate a usable dashboard automatically, and then keep exploring through a copilot instead of starting over every time a new question comes up. In a way, the product idea can be summarized as:

$$ \text{Value} = \frac{\text{Insight Quality} \times \text{Speed}}{\text{Friction}} $$

My goal with Viewpilot was to increase insight quality and speed while cutting down the friction of exploratory analysis.
What I Built
Viewpilot is an agentic analytics workspace that turns a CSV or public API source into a live dashboard. Instead of only showing static charts, it creates a session-based environment where the dataset, dashboard, and follow-up questions all stay connected. The workflow looks like this:
- A user uploads a CSV or launches a demo API source.
- The app creates a live E2B sandbox session.
- Python runs inside that sandbox to profile the data and generate an initial dashboard.
- The frontend renders KPI cards, chart panels, tables, and insights.
- A copilot sidebar lets the user ask follow-up questions against the same session context.

This makes the project feel less like a file viewer and more like an analytics workspace that can keep reasoning over the same dataset.

## How I Built It

I built Viewpilot with a full-stack setup centered on Next.js, React, and a session-driven backend workflow. Key pieces of the architecture include:
- Next.js for the app shell, routes, and API endpoints
- React for the interactive upload flow and dashboard experience
- E2B sandboxes to run isolated Python analysis safely
- Mistral models, called through the OpenAI SDK's client interface, for routing, summaries, critique, and bounded code generation
- Plotly for rendering visual analytics panels
- Redis/session-state storage to keep dashboard state and conversations available across the workflow

A big part of the build was designing the transition from upload to insight:
- create a sandbox
- upload the file
- run a Python exploration script
- parse the results into structured dashboard data
- store the analysis state
- let the copilot continue from that same context

I also built progress-aware query handling so the user can see where a request is in the pipeline, such as routing, task planning, sandbox execution, validation, critique, and persistence.

## What I Learned

This project taught me a lot about building AI features that are actually usable, not just impressive in a demo. I learned that:
- good AI products need strong workflow design, not just model calls
- sandboxed execution is extremely useful when you want generated analysis to stay bounded and safe
- structured intermediate state matters a lot when you want follow-up questions to stay coherent
- reliability features like validation, critique, fallbacks, and traceable stages are just as important as the initial generation step
- frontend UX matters a lot in AI tools, because trust is built through clarity, progress feedback, and readable outputs

I also learned how important it is to separate responsibilities across the system. The model should not do everything: some parts are better handled by deterministic code, some by Python analysis, and some by the model itself. That balance made the project much more stable.

## Challenges I Faced

The hardest part was making the system feel consistent across multiple steps instead of acting like a set of disconnected features. Some of the biggest challenges were:
**Keeping state in sync**

The upload flow, dashboard, copilot, and sandbox all depend on the same evolving session state. Making sure they stayed aligned was a major challenge.
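One way to keep those pieces aligned is a single versioned session record that every surface reads from and writes to. This is a minimal sketch, not Viewpilot's actual schema; the class and field names here are invented for illustration:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class AnalysisSession:
    """One record the upload flow, dashboard, and copilot all share (hypothetical shape)."""
    session_id: str
    dataset_profile: dict[str, Any] = field(default_factory=dict)  # column types, row counts
    dashboard_spec: dict[str, Any] = field(default_factory=dict)   # KPI cards, chart panels
    messages: list[dict[str, str]] = field(default_factory=list)   # copilot conversation
    version: int = 0  # bumped on every mutation so stale readers can detect drift

    def update_dashboard(self, spec: dict[str, Any]) -> None:
        """Replace the dashboard spec and mark the session as changed."""
        self.dashboard_spec = spec
        self.version += 1

    def add_message(self, role: str, content: str) -> None:
        """Append a copilot turn and mark the session as changed."""
        self.messages.append({"role": role, "content": content})
        self.version += 1
```

Because every mutation bumps `version`, a frontend panel can compare the version it rendered against the stored one and refetch when they diverge, instead of silently showing stale charts.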
**Handling sandbox execution safely**

Running analysis in a live sandbox is powerful, but it introduces failure modes like timeouts, malformed outputs, or slow execution. I had to think carefully about guardrails and fallbacks.
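A guardrail of that kind can be sketched as a wrapper that enforces a wall-clock timeout and schema-checks whatever the sandbox returns before it reaches the UI. This is a simplified stand-in, not the real E2B integration; `run_guarded`, `FALLBACK`, and the expected `"panels"` shape are assumptions made for this example:

```python
import json
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FuturesTimeout

# Safe default the dashboard can always render when a sandbox call goes wrong.
FALLBACK = {"status": "error", "panels": []}

def run_guarded(task, timeout_s=30.0):
    """Run a sandbox-style task with a timeout, then validate its JSON output."""
    pool = ThreadPoolExecutor(max_workers=1)
    try:
        raw = pool.submit(task).result(timeout=timeout_s)
    except FuturesTimeout:
        return FALLBACK  # execution took too long
    except Exception:
        return FALLBACK  # task crashed mid-run
    finally:
        pool.shutdown(wait=False, cancel_futures=True)
    try:
        parsed = json.loads(raw)
    except (TypeError, ValueError):
        return FALLBACK  # output was not valid JSON
    if not isinstance(parsed, dict) or "panels" not in parsed:
        return FALLBACK  # output missing the shape the dashboard expects
    return parsed
```

The point of the design is that every failure mode collapses to the same renderable fallback, so the frontend never has to special-case a hung or garbled sandbox run.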
**Turning raw analysis into UI-ready data**

It is one thing to generate analysis, and another to transform it into reliable KPI cards, charts, tables, and insight summaries that render correctly.
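As an illustration, a transform like the one described might reduce a DataFrame to the card-shaped dicts a frontend can render directly. This is a hedged sketch using pandas; `to_kpi_cards` and the card fields are invented for this example, not Viewpilot's actual contract:

```python
import pandas as pd

def to_kpi_cards(df: pd.DataFrame) -> list[dict]:
    """Turn a profiled DataFrame into simple {label, value} cards for the UI."""
    cards = [{"label": "Rows", "value": len(df)}]
    for col in df.select_dtypes(include="number").columns:
        series = df[col].dropna()
        if series.empty:
            continue  # skip cards the UI could not render meaningfully
        cards.append({
            "label": f"Avg {col}",
            "value": round(float(series.mean()), 2),  # plain float, JSON-serializable
        })
    return cards
```

Keeping the output to plain, JSON-serializable primitives is the important part: the renderer never has to guess whether a value is a NumPy scalar, a NaN, or something it cannot display.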
**Making AI output trustworthy**

A copilot that answers quickly but unreliably is not very useful. I had to add validation, critique stages, and bounded code generation so results were more dependable.
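One piece of that bounding can be approximated by validating model-produced chart specs against an allowlist before anything is executed or rendered. A toy version, where the chart types and field names are assumptions rather than Viewpilot's real spec:

```python
ALLOWED_CHART_TYPES = {"bar", "line", "scatter", "histogram"}

def validate_chart_spec(spec: dict, columns: set[str]) -> list[str]:
    """Return a list of problems; an empty list means the spec is safe to render."""
    problems = []
    if spec.get("type") not in ALLOWED_CHART_TYPES:
        problems.append(f"unsupported chart type: {spec.get('type')!r}")
    for axis in ("x", "y"):
        col = spec.get(axis)
        if col is not None and col not in columns:
            problems.append(f"{axis} references unknown column: {col!r}")
    return problems
```

Returning the full list of problems, rather than failing on the first one, gives a critique stage concrete material to feed back to the model for a corrected attempt.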
**Designing for iterative analysis**

Users do not stop at the first dashboard. They ask follow-up questions. Supporting that meant preserving context and making the system reusable across the full session.

## Why This Project Matters to Me

What excites me most about Viewpilot is that it combines several things I wanted to get better at: product thinking, AI workflows, full-stack engineering, and data tooling. It pushed me to think beyond “can I generate something?” and toward “can I build something people would actually want to use?”

Viewpilot represents my attempt to make analytics more interactive, more accessible, and much faster to start. Instead of treating data exploration as a slow setup process, I wanted to treat it as a live conversation.

## Closing Reflection

If I continued developing this project, I would focus on making the copilot even more reliable, improving export and reporting, and expanding support for more live data sources. But even in its current form, Viewpilot helped me better understand how to build AI-powered products that combine generation, structured state, and real user workflows. More than anything, this project taught me that the best AI experiences are not just smart; they are well-orchestrated.
Built With
- e2b
- mistral
- nextjs
- typescript