Inspiration

We started with a simple question: how do we expand access to AI for the billions of people still unconnected? While researching the organizations working to close the digital divide, we discovered that the NGOs themselves, the ones building connectivity infrastructure, running health programs, and funding education, are drowning in a much more immediate problem: reporting.

Every nonprofit collects impact data. But turning a spreadsheet into a compelling story for a donor looks completely different from what a community member needs to see, which looks nothing like what an internal operations team needs. Right now, that means weeks of manual reformatting for every audience.

A UN audit found agencies producing thousands of custom donor reports annually, each in a different format, with 26% submitted late from sheer volume alone. Meanwhile, the average nonprofit donor retention rate sits below 35%, partly because donors never clearly see what their money accomplished.

The organizations doing the most important work in the world have the worst tools for telling their story. We wanted to fix that.

What it does

ImpactLens lets a nonprofit upload their data (CSVs or PDFs) and instantly generates three interactive dashboards tailored for different audiences:
- Donor View: emphasizes financial transparency, cost per beneficiary, ROI, and growth trends. Frames challenges as funding opportunities.
- Community View: emphasizes human impact, people reached, and geographic scope. Uses simple language and relatable metrics.
- Internal View: emphasizes operational efficiency, bottlenecks, burn rate, and targets vs. actuals. Honest about what needs attention.
All three dashboards are generated in parallel from a single upload. Users switch between audience views instantly with tabs, with no re-uploading. Each view can be shared via a unique URL or downloaded as a standalone HTML file. The system supports multiple file uploads for organizations that want a holistic dashboard across all their program data.

The AI doesn't just chart data. It reads it, identifies what matters for each audience, writes a tailored narrative headline, picks the right visualization types, flags anomalies, and arranges everything into a coherent data story.

How we built it

Frontend: React with Vite and TypeScript, styled with Tailwind CSS. We used Recharts for all chart rendering (bar, line, pie, area, and radar/spider charts) and Papaparse for CSV parsing. PDF parsing uses PDF.js on the client side. The design language is intentionally warm and editorial, more "annual report" than "SaaS dashboard," using serif headlines and earth-tone palettes to feel trustworthy and approachable.

Backend: an Express.js server proxying requests to the Claude API (Anthropic). The server holds our API key and exposes a single analysis endpoint. When a user uploads data, the backend fires three parallel Claude API calls, one per audience type, each with audience-specific prompt tuning that instructs Claude on which metrics to prioritize, what language register to use, and which chart types fit best.

The critical architecture decision was having Claude return structured JSON (a "DataStory" object with narrative, key metrics, visualization specs, and layout instructions) rather than generated code. The frontend has a deterministic renderer that maps this JSON to interactive components. This means dashboards are always consistent, always interactive, and never break from unpredictable AI output.

Sharing uses lz-string compression to encode the dashboard state into a URL fragment, so no database is needed. The recipient opens the link and sees the rendered dashboard instantly.
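The fan-out and the structured DataStory contract can be sketched in TypeScript. This is a minimal illustration, not our actual schema: the `DataStory` fields and the `analyzeForAudience` / `analyzeAll` names are placeholders, and the real `analyzeForAudience` POSTs the upload plus an audience-specific prompt to the Claude API.

```typescript
// Illustrative sketch: per-audience DataStory shape and the parallel fan-out.
type Audience = "donor" | "community" | "internal";

interface DataStory {
  audience: Audience;
  headline: string;                                   // tailored narrative headline
  keyMetrics: { label: string; value: number }[];     // cards shown at the top
  visualizations: { type: "bar" | "line" | "pie"; title: string }[];
}

// Stand-in for the real Claude API call with audience-specific prompt tuning.
async function analyzeForAudience(
  csv: string,
  audience: Audience
): Promise<DataStory> {
  return {
    audience,
    headline: `${audience} story for ${csv.split("\n").length - 1} rows`,
    keyMetrics: [],
    visualizations: [],
  };
}

// One upload fans out into three concurrent analyses, one per audience.
async function analyzeAll(csv: string): Promise<DataStory[]> {
  const audiences: Audience[] = ["donor", "community", "internal"];
  return Promise.all(audiences.map((a) => analyzeForAudience(csv, a)));
}
```

Because the three calls share no state, `Promise.all` runs them concurrently, which is why one upload yields three dashboards without tripling the wait.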
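The database-free sharing can also be sketched. The real app uses lz-string to compress the state into the URL fragment; the sketch below substitutes Node's base64url encoding so it runs standalone, and `encodeState` / `decodeState` / `shareUrl` are illustrative names rather than our actual functions.

```typescript
// Database-free sharing: the whole dashboard state travels in the URL
// fragment, so the server never stores it and the recipient renders it
// client-side. base64url here stands in for lz-string's
// compressToEncodedURIComponent.
function encodeState(state: object): string {
  return Buffer.from(JSON.stringify(state), "utf8").toString("base64url");
}

function decodeState<T>(fragment: string): T {
  return JSON.parse(Buffer.from(fragment, "base64url").toString("utf8")) as T;
}

// The fragment (after "#") is never sent to the server by the browser,
// which is what makes the no-backend sharing model work.
function shareUrl(state: object): string {
  return `https://example.org/view#${encodeState(state)}`;
}
```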
Challenges we ran into

Getting Claude to return clean JSON consistently. Early on, responses would sometimes include markdown formatting or conversational preamble despite explicit instructions. We solved this with a strict system prompt, a concrete example of the expected output schema, and a retry mechanism that sends an even more constrained prompt if the first response fails JSON parsing.

Chart cutoff issues. Recharts defaults didn't always leave enough room for tall bars or high data points, causing clipping at the top of charts. We had to add explicit Y-axis domain padding and container margins to every chart type, then visually verify each one with extreme test values.

PDF data extraction. PDF.js extracts text, not visual chart data. When a PDF contains a bar chart rendered as a graphic, we rely on Claude to reconstruct the underlying data from surrounding text, axis labels, and callouts. This works well for data-rich PDFs but has inherent limits with purely graphical charts. We added a "Recreated from original PDF chart" label so users always know when a visualization was inferred rather than directly extracted.

Balancing three audience prompts. Getting the AI to genuinely change its analytical lens per audience, not just swap adjectives, required detailed, specific prompt engineering. The donor prompt needed to surface financial efficiency metrics that the community prompt should actively de-emphasize. The internal prompt needed to be blunt about problems that the donor prompt should frame constructively. Getting this right took significant iteration.

Accomplishments that we're proud of
One upload, three dashboards. The parallel analysis pipeline means a nonprofit uploads once and gets three completely different, audience-appropriate stories from the same data in under two minutes.

The data story architecture. The structured JSON intermediate layer (DataStory) between Claude's analysis and the frontend renderer turned out to be a really clean separation of concerns. It makes the system predictable, testable, and extensible without being fragile.

It actually feels good to use. The count-up animations on metric cards, the staggered section reveals, the audience-specific color accents: small touches that make the dashboard feel like something worth sharing, not just a data dump.

Shareable without infrastructure. No database, no user accounts, no login walls. A nonprofit can generate a dashboard and text a link to a donor in under three minutes. The donor sees an interactive report without installing anything or creating an account.
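As a concrete footnote to the chart-clipping fix described under Challenges: the Y-axis headroom boils down to extending the domain past the tallest data value. The helper below is an illustrative sketch (`paddedMax` is not our actual code); in Recharts it could be wired up as `<YAxis domain={[0, paddedMax]} />`, since Recharts accepts a function for the domain's upper bound.

```typescript
// Compute a padded Y-axis upper bound so tall bars never touch the top
// of the plot area. Recharts passes the data's max value to a domain
// function; we add ~10% headroom and round up to a clean tick value.
function paddedMax(dataMax: number, padFraction = 0.1): number {
  if (dataMax <= 0) return 1; // keep a visible axis for empty/zero data
  const padded = dataMax * (1 + padFraction);
  // Round up at one-tenth of the value's order of magnitude, so tick
  // labels stay readable (e.g. a max of 95 pads to 110, not 104.5).
  const step = Math.pow(10, Math.floor(Math.log10(padded))) / 10;
  return Math.ceil(padded / step) * step;
}
```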
What we learned
Nonprofits don't need more data tools; they need fewer steps. Every tool we looked at (Tableau, Power BI, Looker Studio) is powerful but assumes someone technical will configure it. The insight was that the AI layer should make all the configuration decisions so the user doesn't have to.

Audience-aware analysis is fundamentally different from audience-agnostic visualization. Changing chart colors for different viewers isn't the same as changing which metrics get surfaced, which trends get highlighted, and what narrative gets written. That distinction is the core of the product.

The global reporting problem is staggering. We went in thinking this was a nice-to-have efficiency tool. Researching the scale (thousands of custom reports per agency, 80% of time spent on reformatting, collapsing donor retention) made us realize this is critical infrastructure the sector is missing.

Structured AI output is more reliable than free-form generation. Asking Claude to return a specific JSON schema and rendering it deterministically on the frontend is dramatically more reliable than asking it to generate HTML or chart code directly.
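The structured-output lesson can be made concrete. Below is a minimal sketch of the parse-and-retry pattern we describe under Challenges: `callModel` stands in for the actual Claude API call, and the stricter retry prompt wording is illustrative, not our exact prompt.

```typescript
// Coerce model output into clean JSON: strip any preamble/markdown
// around the first {...} span, and retry once with a more constrained
// prompt if parsing fails.
async function getJson(
  callModel: (prompt: string) => Promise<string>,
  prompt: string,
  maxRetries = 1
): Promise<unknown> {
  let current = prompt;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const raw = await callModel(current);
    // Tolerate conversational preamble or code fences around the JSON.
    const start = raw.indexOf("{");
    const end = raw.lastIndexOf("}");
    if (start !== -1 && end > start) {
      try {
        return JSON.parse(raw.slice(start, end + 1));
      } catch {
        // fall through to the stricter retry prompt
      }
    }
    current = `${prompt}\n\nReturn ONLY a valid JSON object. No prose, no markdown.`;
  }
  throw new Error("model never returned parseable JSON");
}
```

The deterministic frontend renderer then only ever sees objects that survived this gate, which is what keeps dashboards from breaking on unpredictable output.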
What's next for ImpactLens
- Template library: pre-built dashboard scaffolds for common nonprofit types (health, education, water/sanitation, microfinance) so the AI has a strong starting point for each sector.
- Longitudinal tracking: upload data quarterly and the system automatically shows trends over time, building on previous dashboards rather than starting from scratch each time.
- Multi-language output: generate dashboard narratives in the viewer's language, not just the organization's, which is critical for community-facing dashboards in multilingual regions.
- Funder format presets: one-click formatting for major funders (USAID, World Bank, Gates Foundation reporting templates) so nonprofits can meet specific donor requirements without manual reformatting.
- Persistent storage and collaboration: user accounts, saved dashboards, and team workspaces so organizations can build a living library of their impact reporting over time.
- Embeddable dashboards: let nonprofits embed their ImpactLens dashboards directly on their websites, turning their data into a public-facing transparency tool like the interactive annual reports pioneered by organizations such as Habitat for Humanity and Girls Who Code, but without needing a development team to build it.
Built With
- claude
- react
- typescript
- vite