What inspired us?
The traditional loan process is like mailing paper forms back and forth: slow, opaque, and frustrating. Our inspiration began with a clean, professional Figma design that showed a better way: an application that was transparent, interactive, and fast.
We saw that most AI demos are like a powerful F1 engine sitting on a wooden cart: just a simple chat box. Our goal was to build the whole car, a complete, polished, and intelligent application. We created Agent SwiftLoan to turn that Figma vision into reality, using agentic AI to build a loan system that is fast, trustworthy, and autonomous.
How we built it
Agent SwiftLoan is a full-stack application with a clear separation between its "brain" (the backend AI crew) and its "face" (the frontend dashboard).
- The "Brain": A 4-Agent AI Assembly Line
We used Python, FastAPI, and CrewAI to build a backend team of four specialized AI agents. These agents are powered by NVIDIA NIMs (using meta/llama-3.1-8b-instruct) and connected via the langchain-nvidia-ai-endpoints library.
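For context, connecting to the NIM takes only a few lines with LangChain. Here is a minimal sketch, assuming the API key is exported as NVIDIA_API_KEY (the library's default credential lookup); the temperature value is illustrative, not our exact setting:

```python
# Minimal sketch: connecting to the NIM-hosted Llama 3.1 8B model.
# Assumes the API key is exported as NVIDIA_API_KEY, which is the
# default credential lookup for langchain-nvidia-ai-endpoints.
from langchain_nvidia_ai_endpoints import ChatNVIDIA

llm = ChatNVIDIA(
    model="meta/llama-3.1-8b-instruct",  # the NIM model id we use
    temperature=0.2,                     # illustrative value
)

# Quick smoke test against the hosted model.
reply = llm.invoke("Summarize: 'Hi, I'm Jack, I need 40k for my education.'")
print(reply.content)
```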
Our AI backend works like an autonomous factory assembly line (a condensed code sketch of the full crew follows the four stations below):
Agent 1: The Receptionist (Sales Agent): This agent greets the user and takes their jumbled, conversational order (e.g., "Hi, I'm Jack, I need 40k for my education..."). Its job is to instantly type this up into a perfect, structured work ticket (JSON) for the factory floor.
Agent 2: The Security Guard (Verification Agent): This agent takes the work ticket and checks the user's credentials. It verifies their ID (e.g., the PAN must match the ABCDE1234F format) and checks their credit score against a VIP list (e.g., the score must be > 650).
Agent 3: The Financial Analyst (Underwriting Agent): Once verified, this agent analyzes the financial data to assess the risk, determining if the loan is a safe bet.
Agent 4: The Manager (Sanction Agent): This agent reviews the reports from all previous agents and writes the final, personalized approval or rejection letter, ready to be sent back to the customer.
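To make the assembly line concrete, here is a condensed sketch of how four such agents and their tasks can be chained with CrewAI. It is illustrative, not our production code: it assumes a CrewAI version that accepts a LangChain chat model as `llm` and supports `kickoff(inputs=...)`, and the goals, backstories, and task descriptions are abbreviations of our real prompts:

```python
# Condensed sketch of the 4-agent assembly line (abbreviated prompts).
from crewai import Agent, Task, Crew, Process
from langchain_nvidia_ai_endpoints import ChatNVIDIA

llm = ChatNVIDIA(model="meta/llama-3.1-8b-instruct")

sales = Agent(
    role="Sales Agent",
    goal="Turn a conversational loan request into a structured JSON work ticket",
    backstory="The receptionist who types up a perfect work ticket.",
    llm=llm,
)
verification = Agent(
    role="Verification Agent",
    goal="Check identity and creditworthiness on the work ticket",
    backstory="The security guard at the factory door.",
    llm=llm,
)
underwriting = Agent(
    role="Underwriting Agent",
    goal="Assess the financial risk of the verified application",
    backstory="The financial analyst on the factory floor.",
    llm=llm,
)
sanction = Agent(
    role="Sanction Agent",
    goal="Write the final, personalized approval or rejection letter",
    backstory="The manager who signs off on every decision.",
    llm=llm,
)

intake = Task(
    description="Extract name, amount, and purpose from: {user_message}. "
                "Return a structured JSON work ticket.",
    expected_output="A JSON object with name, amount, purpose, pan, credit_score.",
    agent=sales,
)
verify = Task(
    description="Verify the ticket: the PAN must match the ABCDE1234F pattern "
                "(5 letters, 4 digits, 1 letter) and credit_score must be > 650.",
    expected_output="A verification report: PASS or FAIL with reasons.",
    agent=verification,
)
underwrite = Task(
    description="Assess risk from the verified ticket's financial data.",
    expected_output="A risk assessment with a recommendation.",
    agent=underwriting,
)
decide = Task(
    description="Review all prior reports and draft the final decision letter.",
    expected_output="A personalized approval or rejection letter.",
    agent=sanction,
)

crew = Crew(
    agents=[sales, verification, underwriting, sanction],
    tasks=[intake, verify, underwrite, decide],
    process=Process.sequential,  # one station after the next
)

result = crew.kickoff(inputs={"user_message": "Hi, I'm Jack, I need 40k for my education."})
print(result)
```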
- The "Face": The Transparent Glass Wall
The user interface is a React, Vite, and TypeScript dashboard styled with Tailwind CSS (shadcn/ui). We didn't want the AI to be a "black box." Our dashboard acts as a transparent glass wall to the factory floor:
The Progress Tracker is the assembly line indicator, showing the application moving from one station to the next.
The AI Agent Panel shows the user exactly which agent (e.g., "Verification Agent") is "Working" or "Done" in real time.
The Chat Interface is the professional front desk where the entire interaction happens.
Challenges we faced
Dependency Hell (The Mismatched Parts): Our biggest challenge was like building a car where the wheels (crewai) only fit a 2024 model, but the NVIDIA engine (langchain-nvidia-ai-endpoints) only fit a 2023 model. We had to act as mechanics, digging through requirements.txt to find the exact set of compatible versions of openai, langchain, and crewai that would work together.
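A small diagnostic that helps when reproducing this: print what is actually installed after each pin change. This is a sketch; the package list mirrors our requirements.txt, and the exact version pins vary by setup, so we omit them here:

```python
# Print the installed versions of the packages we had to pin together,
# to confirm that a requirements.txt change actually took effect.
from importlib.metadata import version

for pkg in ("crewai", "langchain", "langchain-nvidia-ai-endpoints", "openai"):
    print(f"{pkg}=={version(pkg)}")
```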
The "Jumping" Chat Box (The Broken Elevator): Our chat UI was like a broken elevator. Every time a new message arrived, the entire page would jump, sending the message box flying off the screen. We had to completely re-wire the CSS layout (using flex-1, min-h-0) and manage the browser's focus (inputRef.current.blur()) to create a smooth, predictable, "WhatsApp-like" scroll.
Connecting the "Show" to the "Go": The AI backend is too fast (it gives one final answer in seconds). The Figma flow was slow and cinematic. A "fake" flow is like a movie stuntman—it looks great, but it's not real. We had to build a hybrid flow. The frontend plays a movie (simulated steps for Verification/Underwriting) to give the user a great experience, and then in the final act, the real AI hero (the backend call) steps in to deliver the final, intelligent decision.
What we learned
We learned that a powerful AI is not enough. A "black box" AI is scary; a transparent AI builds trust. Our system works because the user can see the agents working, making the process feel both high-tech and accountable. We didn't just build a tech demo; we built a complete, human-centric product.
Built With
- AI/ML: NVIDIA NIMs (meta/llama-3.1-8b-instruct), CrewAI, LangChain
- Backend: Python, FastAPI
- Frontend: React, Vite, TypeScript, Tailwind CSS, shadcn/ui, Framer Motion
- Cloud: Amazon Web Services (designed for Amazon EKS / Amazon SageMaker)
