Inspiration

Rewards credit cards promise free money—cashback, flights, perks. But in reality, most people miss out. Why? Because banks made the system intentionally confusing. Points expire, bonuses hide behind fine print, and tracking what to use where is a full-time job. We saw Reddit threads, friends' frustrations, and our own lost perks—and decided to fix it. No more spreadsheets. No more guesswork.

What it does

Credence is your AI rewards co‑pilot. Powered by Perplexity’s Sonar API, it tells you the best credit card to use for every purchase. It helps you track rewards, uncover hidden perks, avoid expiration traps, and maximize your benefits. No bank connection needed. Just input your cards and spending habits, and the app does the rest—giving you real-time strategies, savings simulations, and clear, AI-powered recommendations.
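At its core, the recommendation boils down to: for each purchase, pick the card in your wallet with the best effective reward for that spending category. A minimal sketch (the data shapes and card names here are illustrative, not our production model):

```typescript
// Hypothetical shape of a wallet card: reward rate per spending category,
// with "default" as the fallback rate for uncategorized spend.
interface Card {
  name: string;
  rewardRates: Record<string, number>;
}

// Pick the card with the highest reward rate for a given purchase category.
function bestCardFor(cards: Card[], category: string): Card {
  const rate = (c: Card) => c.rewardRates[category] ?? c.rewardRates["default"] ?? 0;
  return cards.reduce((best, c) => (rate(c) > rate(best) ? c : best));
}

const wallet: Card[] = [
  { name: "Everyday Cash", rewardRates: { default: 0.015 } },
  { name: "Dining Plus", rewardRates: { dining: 0.04, default: 0.01 } },
];

console.log(bestCardFor(wallet, "dining").name); // prints "Dining Plus"
console.log(bestCardFor(wallet, "groceries").name); // prints "Everyday Cash"
```

The real app layers expiration tracking, signup bonuses, and Sonar-sourced offers on top of this base comparison, but the "which card for this swipe" question always reduces to a lookup like the one above.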

How we built it

  • Frontend – Next.js + TypeScript = snappy UI.
  • Authentication – Auth0 email login
  • Database – MongoDB
  • Data Sources – Rewards Credit Card API for credit card data + Gmail receipts/Plaid for transaction data
  • AI – LangGraph ReAct agent (powered by OpenAI, Gemini, and Perplexity)
    • Tool 1 – MongoDB's official MCP server for DB queries (find/sort/regex run in‑DB, zero hallucination).
    • Tool 2 – Sonar Pro for fresh offers, people's opinions, and expert advice.
    • Tool 3 – Widget factory that streams existing dashboard components into the chat. Crazy, right?
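The widget factory idea in Tool 3 can be sketched as follows (the widget kinds and registry here are stand-ins for our real React components, not the actual code): the agent emits a typed widget spec instead of markup, and the chat UI resolves the spec to an existing dashboard component.

```typescript
// A discriminated union of widget specs the agent is allowed to emit.
type WidgetSpec =
  | { kind: "cardComparison"; cardIds: string[] }
  | { kind: "strategyCard"; title: string; steps: string[] };

// Registry mapping each widget kind to a render function (stand-ins for
// real React components; here they just return a markup-like string).
const registry: {
  [K in WidgetSpec["kind"]]: (spec: Extract<WidgetSpec, { kind: K }>) => string;
} = {
  cardComparison: (s) => `<CardComparison cards=[${s.cardIds.join(", ")}] />`,
  strategyCard: (s) => `<StrategyCard title="${s.title}" steps=${s.steps.length} />`,
};

// The chat stream carries specs, not HTML; the UI dispatches on the kind.
function renderWidget(spec: WidgetSpec): string {
  return (registry[spec.kind] as (s: WidgetSpec) => string)(spec);
}

console.log(renderWidget({ kind: "strategyCard", title: "Dining", steps: ["Use Dining Plus"] }));
// prints <StrategyCard title="Dining" steps=1 />
```

Because the spec is data, not markup, a rendered suggestion can be saved straight to the dashboard with no copy/paste.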

Challenges we ran into

  • A lot of wrappers – It was a pain dealing with the Vercel AI SDK, LangGraph framework, and LangChain. Too many functions to memorize, each with opinionated patterns for inputs/outputs. Connecting everything together led to a lot of friction and frustration.

  • Truth over tokens – When we first built tools to fetch from the database, usage felt limited—we had to write tons of static query templates. So we pivoted and decided the LLM should generate the queries dynamically. Just around then, MongoDB released the official MCP server, which was perfect timing. At first, it would load all databases and inspect every collection on each call. We tried including those details in the system prompt to save time, but the LLM still pulled big chunks of data and filtered on its own, which caused slowdowns and hallucinations. We added a few-shot prompt showing how filtering should happen directly in the Mongo query—and that was a game changer.
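The fix can be sketched like this (an illustrative reconstruction; the field names, questions, and tool-call shapes are ours, not the actual prompt): each few-shot example pairs a user question with a `find` call whose filter, projection, sort, and limit all live inside the query.

```typescript
// Few-shot examples showing the agent that filtering belongs in the query,
// not in post-hoc reasoning over a full collection dump.
const fewShotExamples = [
  {
    user: "Which no-annual-fee cards give bonus rewards on groceries?",
    toolCall: {
      tool: "find",
      args: {
        collection: "cards",
        filter: { annualFee: 0, "rewards.category": "groceries" },
        projection: { name: 1, "rewards.rate": 1 },
        sort: { "rewards.rate": -1 },
        limit: 5,
      },
    },
  },
  {
    user: "Any travel cards from Chase?",
    toolCall: {
      tool: "find",
      args: {
        collection: "cards",
        filter: { issuer: { $regex: "chase", $options: "i" }, category: "travel" },
        limit: 5,
      },
    },
  },
];

const systemPrompt = [
  "Filter, project, sort, and limit INSIDE the query.",
  "Never fetch a whole collection and filter it yourself.",
  "Examples:",
  JSON.stringify(fewShotExamples, null, 2),
].join("\n");

console.log(systemPrompt.includes("$regex")); // prints true
```

Once the model saw concrete filters like these, it stopped dumping collections into context, which cut both latency and hallucinations.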

Accomplishments that we're proud of

  • Widget factory – The agent dynamically renders live components (e.g., card comparisons, strategy cards) directly in the chat. Users can save suggestions or recommendations instantly—no need to copy/paste or bookmark anything. And let’s be honest, we all know what happens when something gets bookmarked (it’s never opened again).

  • Zero-bias stack – No affiliate links, no SEO tricks. All credit card data is sourced from the Rewards API, and all external reasoning is backed by Sonar citations. Recommendations are based on real value, not commissions.

  • One-screen clarity – Onboarding takes under 2 minutes on average. After that, everything—cards, goals, insights—is auto-saved and accessible from a single dashboard. No toggling between pages or tabs.

  • DB-level reasoning – Our MongoDB MCP server filters over 2,000 card records instantly, using live queries (find, sort, regex). The LLM only sees the relevant data—not the entire table—so it can focus on reasoning, not filtering.
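A toy in-memory stand-in for those MCP `find` calls shows why this matters (the real queries run in MongoDB; the doc shape and sample data below are invented for illustration): with regex, sort, and limit applied at the query level, only a handful of the ~2,000 records ever reaches the model's context.

```typescript
interface CardDoc { name: string; issuer: string; cashbackRate: number }

// Mimics a Mongo find() with a regex filter, descending sort, and limit.
function find(
  docs: CardDoc[],
  opts: { nameRegex?: RegExp; sortByRateDesc?: boolean; limit?: number }
): CardDoc[] {
  let out = docs.filter((d) => !opts.nameRegex || opts.nameRegex.test(d.name));
  if (opts.sortByRateDesc) out = [...out].sort((a, b) => b.cashbackRate - a.cashbackRate);
  return opts.limit !== undefined ? out.slice(0, opts.limit) : out;
}

const docs: CardDoc[] = [
  { name: "Sapphire Preferred", issuer: "Chase", cashbackRate: 0.02 },
  { name: "Freedom Flex", issuer: "Chase", cashbackRate: 0.05 },
  { name: "Blue Cash", issuer: "Amex", cashbackRate: 0.03 },
];

// Only the single best match crosses into the LLM's context, not all docs.
console.log(find(docs, { nameRegex: /flex|cash/i, sortByRateDesc: true, limit: 1 })[0].name);
// prints "Freedom Flex"
```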

What we learned

People don’t want another finance app; they want less work. If the AI already knows their situation and shows the answer in a friendly widget, trust goes up and friction disappears. And when you push the heavy data lifting to the database, the LLM can focus on thinking instead of sifting—and everyone wins.

What's next for Credence

  • Ship an iOS widget: swipe‑time card suggestions in Apple Pay.
  • Open API for fintech partners to embed “Best Card” in checkout flows.

Built With
