Inspiration
Accessing public assistance is often hardest when people are already overwhelmed. Someone dealing with job loss, illness, or displacement usually has to repeat their story across multiple systems, upload the same documents over and over, and guess which programs they may qualify for. I built AidFlow to make that process more human, more structured, and more actionable.
I was especially inspired by the idea that AI should not replace case workers, but help them move faster with better context. AidFlow is designed as a bridge between applicants, reviewers, and submission systems so people can get help with less friction.
What it does
AidFlow is an AI-powered intake and review workflow for social-service applications.
It lets an applicant:
- Start an intake case in one of three portal layouts
- Enter household, income, contact, and urgent-need details
- Save a structured application packet tied to a real case record
- Generate an Amazon Nova Lite analysis of their situation
- Review an eligibility shortlist before submission
It lets a reviewer:
- Open the same shared case from a reviewer dashboard
- See applicant details, uploaded documents, extracted facts, blockers, and next steps
- Review eligibility recommendations with supporting evidence
- Approve and submit the case while preserving a full confirmation trail
The key idea is that every step stays connected to the same underlying case, so the intake, analysis, review, and submission state all stay in sync.
How I built it
I built AidFlow with:
- Next.js App Router for the full-stack application
- React for the intake, review, and dashboard interfaces
- Prisma and PostgreSQL for persistent case, review, document, and submission data
- Amazon Bedrock with Amazon Nova Lite for structured case analysis and eligibility reasoning
- Tailwind CSS for the UI
The architecture centers on a shared Prisma-backed case model. Instead of treating intake and review as separate prototypes, I unified them around one source of truth. When an applicant saves a case, AidFlow stores the packet, runs analysis, writes the results back to the case, and makes those results immediately available to the review workflow.
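A shared case model along these lines could be sketched in Prisma as follows. This is a hedged illustration, not the project's actual schema; every model and field name here is an assumption:

```prisma
// Hypothetical sketch of a shared case model; all names are illustrative.
model Case {
  id        String      @id @default(cuid())
  status    String      // e.g. "intake" | "analyzed" | "approved" | "submitted"
  packet    Json        // structured application packet saved at intake
  analysis  Json?       // Nova Lite analysis written back to the case
  documents Document[]
  submission Submission?
  createdAt DateTime    @default(now())
  updatedAt DateTime    @updatedAt
}

model Document {
  id     String @id @default(cuid())
  caseId String
  case   Case   @relation(fields: [caseId], references: [id])
  name   String
}

model Submission {
  id             String   @id @default(cuid())
  caseId         String   @unique
  case           Case     @relation(fields: [caseId], references: [id])
  confirmationId String
  approvedBy     String
  submittedAt    DateTime @default(now())
}
```

Because intake, analysis, review, and submission all read and write the same `Case` row, every screen reflects the current lifecycle state without duplication.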
I also built:
- A shared client-facing case DTO for applicant and reviewer views
- Case APIs for saving, loading, analyzing, and submitting
- A submission flow that persists reviewer approval, submission metadata, and confirmation IDs
- A lightweight portal access flow to jump into applicant or reviewer paths quickly
- A fallback analysis path so the app can still function if Bedrock is unavailable
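A shared client-facing DTO like the one listed above might look like this. The field names and mapping function are assumptions for illustration, not the project's real types:

```typescript
// Hypothetical shared case DTO; field names are assumptions for illustration.
type CaseStatus = "intake" | "analyzed" | "approved" | "submitted";

interface CaseDTO {
  id: string;
  status: CaseStatus;
  applicant: { name: string; householdSize: number };
  blockers: string[];
  nextSteps: string[];
  confirmationId?: string; // present only after submission
}

// Both applicant and reviewer views render from the same DTO, so the
// mapping from the persisted case record happens in exactly one place.
function toCaseDTO(record: {
  id: string;
  status: CaseStatus;
  packet: { name: string; householdSize: number };
  analysis?: { blockers: string[]; nextSteps: string[] };
  confirmationId?: string;
}): CaseDTO {
  return {
    id: record.id,
    status: record.status,
    applicant: record.packet,
    blockers: record.analysis?.blockers ?? [],
    nextSteps: record.analysis?.nextSteps ?? [],
    confirmationId: record.confirmationId,
  };
}
```

Centralizing the mapping this way is what keeps the applicant layouts and reviewer screens from drifting apart.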
Challenges I ran into
One of the biggest challenges was unifying two different worlds: a front-end intake experience and a back-end reviewer system. Early on, those flows used different sources of truth, which meant changes in one place did not reliably show up in the other. I had to refactor the app so every screen worked from the same persistent case state.
Another challenge was getting Bedrock integration working cleanly in the local environment. Credential loading, environment configuration, and script execution behaved differently between Next.js and standalone test scripts, so I had to troubleshoot how AWS credentials were being loaded before Nova Lite calls would succeed reliably.
I also spent time balancing AI flexibility with product reliability. For a hackathon project, it was important that the model output be useful but also structured enough to safely power the UI. That meant tightening prompts, shaping outputs into predictable objects, and keeping a fallback path in place.
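The output-shaping and fallback idea above can be sketched like this: parse the model's text as JSON, validate the fields the UI depends on, and fall back to a deterministic stub when the response is missing or malformed. The analysis shape and field names here are assumptions, not the project's actual contract:

```typescript
// Hypothetical analysis shape the UI depends on; fields are illustrative.
interface CaseAnalysis {
  summary: string;
  blockers: string[];
  eligiblePrograms: string[];
}

// Deterministic fallback so the app still functions without the model.
const FALLBACK_ANALYSIS: CaseAnalysis = {
  summary: "Automated analysis unavailable; case queued for manual review.",
  blockers: [],
  eligiblePrograms: [],
};

// Shape raw model text into a predictable object so a malformed
// response never reaches the UI.
function shapeAnalysis(raw: string | null): CaseAnalysis {
  if (!raw) return FALLBACK_ANALYSIS;
  try {
    const parsed = JSON.parse(raw);
    if (
      typeof parsed.summary === "string" &&
      Array.isArray(parsed.blockers) &&
      Array.isArray(parsed.eligiblePrograms)
    ) {
      return parsed as CaseAnalysis;
    }
  } catch {
    // invalid JSON: fall through to the fallback below
  }
  return FALLBACK_ANALYSIS;
}
```

The same guard doubles as the fallback path when Bedrock itself is unreachable: a failed call simply yields `null`, and the app keeps working.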
Accomplishments that I'm proud of
I'm proud that AidFlow is more than a static concept. It is a working end-to-end application with a real case lifecycle.
Highlights I'm especially proud of:
- Connecting applicant intake, AI analysis, human review, and final submission into one continuous workflow
- Integrating Amazon Nova Lite into the product in a visible, meaningful way
- Building a shared persistent data model so all three applicant layouts and reviewer screens reflect the same case state
- Making submission real by saving approvals, statuses, timestamps, and confirmation IDs
- Designing the experience so AI supports human review instead of bypassing it
I'm also proud that the system is polished enough to record and present clearly, with multiple layouts, case views, and a clean review path.
What I learned
I learned that the hardest part of building AI products is not just calling a model. It is designing the surrounding system so the model’s output is useful, inspectable, and tied to real product actions.
I also learned:
- Good schema design matters as much as good prompting
- Shared state across workflows is critical for trust and usability
- LLM features are much stronger when they produce structured artifacts, not just text
- Hackathon scope control matters a lot; one strong vertical slice beats many half-finished features
- AWS integration details like environment loading and runtime behavior can become real product blockers if not handled early
What's next for AidFlow
Next, I want to turn AidFlow from a strong prototype into a richer assistance platform.
My next steps are:
- Add retrieval over real benefits rules and policy documents
- Expand document understanding with extraction and verification from uploaded evidence
- Improve the eligibility engine with stronger citations and confidence signals
- Add more explicit multi-agent orchestration for intake, verification, and form mapping
- Explore controlled portal automation for assisted submission
- Strengthen authentication, audit trails, and reviewer controls for real-world deployment
Long term, I see AidFlow becoming an AI-assisted operating layer for public benefits intake: helping applicants tell their story once, helping reviewers act faster with better context, and helping agencies process cases with more consistency and less manual overhead.
Built With
- amazon-bedrock-nova-lite
- nextjs
- postgresql
- prisma
- react
- tailwind-css