Inspiration

The AI Agents era has a massive security blind spot: API Credential Exposure. To perform meaningful financial analysis or execute trades, AI agents currently require direct access to sensitive API keys (e.g., Plaid, EODHD, or Stripe). As a developer, this creates a "Secret Zero" dilemma: if you pass these long-lived keys into an LLM's context window, you risk catastrophic leaks via prompt injection or hijacked logs. Today, users are forced to choose between Agent Intelligence (giving the agent the keys it needs) and Financial Security (keeping the agent siloed and useless). ASEFIN MCP is a secure financial analyst system built to eliminate that trade-off.

What it does

By combining the Model Context Protocol (MCP) with the Auth0 Token Vault, we’ve created an architecture where:

1/ The LLM is "Blind" to Secrets: The agent never sees your raw API keys. It only receives the "Facts" returned by a secure MCP server.

2/ Delegated Authority: Through Auth0 for AI Agents, the user grants the agent scoped, short-lived tokens to perform specific tasks—like fetching a 30-day market brief or calculating portfolio volatility.

3/ High-Stakes Protection: Any sensitive "write" action (like emailing a report or moving funds) triggers an immediate Step-up Authentication request, ensuring the human remains in ultimate control.
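The core guarantee above boils down to a single trust boundary: the raw key lives only in the server process, and the agent receives a facts-only payload gated by the scopes on its delegated token. A minimal sketch of that boundary (tool and scope names here are illustrative, not the actual ASEFIN identifiers):

```python
import os

# The raw key exists only in the server process, never in the prompt.
EODHD_API_KEY = os.environ.get("EODHD_API_KEY", "demo-key")

def get_market_brief(ticker: str, agent_scopes: set) -> dict:
    """Return only the facts; the delegated token's scopes gate the call."""
    if "read:market" not in agent_scopes:
        return {"status": "forbidden", "reason": "missing scope read:market"}
    # A real implementation would call EODHD here using EODHD_API_KEY;
    # a canned fact keeps this sketch self-contained.
    facts = {"ticker": ticker, "close_30d_change_pct": -2.4}
    return {"status": "ok", "facts": facts}
```

Whatever the tool returns is the entirety of what enters the LLM's context; the key never appears in any code path that produces agent-visible output.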

How I built it

The Secure Brain (Python/FastMCP): I developed a custom MCP server using Python and FastMCP. This server acts as the "Secure Context Provider," fetching real-time market data from the EODHD API without ever exposing the API keys to the LLM.
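The fetch-and-strip pattern the server uses can be sketched as follows. The EODHD endpoint layout shown is an assumption based on its public REST API, and the `_get` hook is a test seam, not part of the real server:

```python
import json
import os
from urllib.request import urlopen

# Assumed EODHD REST layout; adjust to the endpoints your plan exposes.
EODHD_BASE = "https://eodhd.com/api"

def fetch_eod(ticker, api_key=None, _get=None):
    """Fetch end-of-day prices; only parsed facts ever leave this function."""
    key = api_key or os.environ.get("EODHD_API_KEY", "")
    url = f"{EODHD_BASE}/eod/{ticker}?api_token={key}&fmt=json"
    get = _get or (lambda u: json.load(urlopen(u)))  # _get is injectable for tests
    rows = get(url)
    # Strip the response down to the facts the LLM is allowed to see.
    return {"ticker": ticker, "latest_close": rows[-1]["close"], "days": len(rows)}
```

The URL (which carries the token) is built and consumed inside the function; only the summarized dict is ever handed to the MCP layer.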

The Identity Layer (Auth0): I've integrated the Auth0 Token Vault to manage delegated authority. The system exchanges the user's session for ephemeral, scoped tokens, solving the "Secret Zero" dilemma.
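The session-for-token exchange follows the OAuth 2.0 Token Exchange shape (RFC 8693). A sketch of the request payload is below; the exact parameters an Auth0 tenant expects may differ, so treat the field names as a shape, not a drop-in call:

```python
def build_token_exchange_request(domain, client_id, subject_token, audience, scope):
    """Build an RFC 8693-style token-exchange payload for an /oauth/token endpoint.

    Field names follow the token-exchange spec; the exact parameters your
    Auth0 tenant expects may differ -- this is a shape sketch, not a client.
    """
    return {
        "url": f"https://{domain}/oauth/token",
        "data": {
            "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
            "client_id": client_id,
            "subject_token": subject_token,  # the user's session credential
            "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
            "audience": audience,            # the MCP server's API identifier
            "scope": scope,                  # least privilege, e.g. "read:market"
        },
    }
```

The returned access token is ephemeral and scoped, so even a leaked token grants only a narrow, short-lived capability rather than standing credentials.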

The Frontend (Next.js & v0): The UI was built with Next.js and Tailwind CSS, featuring a real-time "Narrator" chat interface that displays security metadata, such as confidence scores and data provenance.

The Bridge: I utilized Vercel Header Rewrites to allow the Next.js frontend to communicate seamlessly with the Python FastAPI backend, enabling a unified deployment on Vercel.
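In vercel.json terms, the bridge is a rewrite rule that maps frontend API paths onto the Python function. The paths below are illustrative, not the project's actual routes:

```json
{
  "rewrites": [
    { "source": "/api/mcp/:path*", "destination": "/api/index.py" }
  ]
}
```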

Challenges we ran into

Asynchronous Authorization: Implementing the "Step-up" flow was challenging. I had to ensure the MCP server could "pause" a tool execution and signal the frontend to trigger an Auth0 MFA challenge before resuming the high-stakes action.
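The pause-and-resume mechanics can be sketched as parking the tool call under a challenge id until the frontend reports the MFA result. Function names here are illustrative, and in the real system the MFA verdict comes from Auth0 rather than a boolean argument:

```python
import secrets

_pending = {}  # parked high-stakes calls, keyed by challenge id

def pause_for_step_up(action, args):
    """Park a high-stakes tool call and hand the frontend a challenge id."""
    challenge_id = secrets.token_urlsafe(16)
    _pending[challenge_id] = (action, args)
    # The frontend uses this signal to launch an Auth0 MFA challenge.
    return {"status": "step_up_required", "challenge_id": challenge_id}

def resume_after_mfa(challenge_id, mfa_ok):
    """Resume the parked call only if the MFA challenge succeeded."""
    action, args = _pending.pop(challenge_id)
    if not mfa_ok:
        return {"status": "denied", "action": action}
    return {"status": "executed", "action": action, "args": args}
```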

Context Isolation: Ensuring the LLM stayed "blind" to the credentials required a strict Narrator Pattern. I had to design the MCP tool outputs to provide only the "Facts" (e.g., stock prices) while keeping the transport tokens strictly server-side.
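As a defense-in-depth backstop to that pattern, tool output can be rendered through a redacting narration step before it reaches the LLM. A minimal sketch (the regex is illustrative, not the production filter):

```python
import re

# Matches token-shaped key/value fragments; illustrative, not exhaustive.
TOKEN_PATTERN = re.compile(r"(api_token|access_token|authorization)=?[\w.\-]*", re.I)

def narrate(facts: dict) -> str:
    """Render tool output as plain 'facts' text, redacting token-shaped strings."""
    text = ", ".join(f"{k}: {v}" for k, v in facts.items())
    return TOKEN_PATTERN.sub("[REDACTED]", text)
```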

Rapid Deployment: Coordinating a Python-based MCP server within a Next.js Vercel environment required fine-tuning the vercel.json and API routes to handle the MCP HTTP transport protocol in under 10 hours.

Accomplishments that we're proud of

Solving the "Secret Zero" Problem: I successfully demonstrated an AI agent performing financial analysis using a live API without the LLM ever "seeing" a long-lived credential.

High-Stakes Guardrails: I built a UI that detects "High-Stakes" intent (like sending a report) and automatically renders a security warning with a verification trigger.

Seamless Integration: Merging the Model Context Protocol (MCP) with established identity standards like Google OAuth2 through Auth0.

What I learned

Agentic Security is Identity: I learned that the future of AI isn't just about "smarter" models, but about more secure identity delegation. An agent is only as safe as the tokens it holds.

MCP Power: I discovered how powerful the Model Context Protocol is for standardizing how AI interacts with local and remote data sources securely.

What's next for ASEFIN MCP

Automated Trading Guardrails: Implementing a "Spending Limit" vault where the agent can execute small trades autonomously but requires biometric MFA for any transaction over a specific dollar amount.

Multi-Agent Auditing: Allowing a second "Auditor Agent" to review the logs generated by the "Narrator Agent" to ensure total transparency.

Built With

Python · FastMCP · FastAPI · Auth0 · Next.js · Tailwind CSS · Vercel · EODHD API
