Inspiration

We believe that traditional budgeting is broken. It's a fundamentally negative experience that often leads to guilt and, eventually, abandonment. We started with a simple observation: personal finance is deeply emotional, yet most tools on the market treat it like a cold math problem. Our inspiration was to reframe budgeting not as a system of restriction, but as a system of balance. We wanted a tool that acknowledges human behavior, like the occasional splurge, and provides a constructive, guilt-free way back to a healthy financial state.

What it does

Karma is a smart financial wellness application that gamifies budgeting through a dynamic Karma Score. Instead of just tracking expenses, it actively helps users balance their spending habits. When a user makes an "indulgent" purchase (a non-essential splurge), our system flags it and their Karma Score takes a temporary hit. To restore balance, the app's core Spend-Swap Engine immediately proposes a personalized, achievable challenge. This is powered by our custom classification and generation model, built on the Cohere platform. Because the model analyzes transaction data with full behavioral context, it can, for instance, suggest forgoing a few coffee purchases to offset an expensive dinner, creating a proactive, less judgmental feedback loop.
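The feedback loop described above can be sketched as a pair of pure functions. All names and point values here are hypothetical illustrations, not Karma's actual scoring constants:

```typescript
// Hypothetical sketch of the Karma Score feedback loop: an indulgent
// purchase temporarily lowers the score, and a smaller "swap" challenge
// restores it. Values are illustrative only.

interface Transaction {
  merchant: string;
  amount: number;     // in dollars
  indulgent: boolean; // set by the classification model
}

interface SwapChallenge {
  description: string;
  karmaReward: number; // points restored on completion
}

const INDULGENCE_PENALTY_RATE = 0.5; // points lost per dollar splurged

// Apply the temporary hit for an indulgent purchase.
function applyIndulgence(score: number, tx: Transaction): number {
  if (!tx.indulgent) return score;
  return Math.max(0, score - Math.round(tx.amount * INDULGENCE_PENALTY_RATE));
}

// Propose a smaller, achievable swap that offsets the splurge,
// e.g. skipping a few coffees to balance an expensive dinner.
function proposeSwap(tx: Transaction, coffeePrice = 5): SwapChallenge {
  const skips = Math.ceil(tx.amount / coffeePrice);
  return {
    description: `Skip ${skips} coffee purchases this week`,
    karmaReward: Math.round(tx.amount * INDULGENCE_PENALTY_RATE),
  };
}

const dinner: Transaction = { merchant: "Steakhouse", amount: 80, indulgent: true };
console.log(applyIndulgence(100, dinner));    // 60
console.log(proposeSwap(dinner).description); // "Skip 16 coffee purchases this week"
```

The key design point is that the penalty and the reward are symmetric, so completing the proposed swap fully restores the score.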

How we built it

We engineered Karma as a modern, full-stack TypeScript monorepo using Turborepo to ensure type safety and consistency across our codebase. The backend runs on Elysia.js, a high-performance, Bun-native framework, which lets us subscribe to webhooks from Clerk (for user lifecycle events) and Plaid (for real-time transaction syncing), with all data models validated against Zod schemas. The frontend is a Next.js 15 app.
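As a rough illustration of the webhook flow, here is a dependency-free sketch that validates a Plaid-style transactions webhook before handing it to the sync pipeline. The hand-rolled type guard stands in for the Zod schemas mentioned above, and the handler body is hypothetical:

```typescript
// Minimal sketch of webhook payload validation, assuming a Plaid-style
// SYNC_UPDATES_AVAILABLE body. The type guard stands in for a Zod schema;
// the handler logic is illustrative only.

interface PlaidSyncWebhook {
  webhook_type: "TRANSACTIONS";
  webhook_code: "SYNC_UPDATES_AVAILABLE";
  item_id: string;
}

// Narrow an unknown request body to the expected webhook shape.
function isPlaidSyncWebhook(body: unknown): body is PlaidSyncWebhook {
  if (typeof body !== "object" || body === null) return false;
  const b = body as Record<string, unknown>;
  return (
    b.webhook_type === "TRANSACTIONS" &&
    b.webhook_code === "SYNC_UPDATES_AVAILABLE" &&
    typeof b.item_id === "string"
  );
}

// Reject anything malformed before it reaches the sync pipeline.
function handleWebhook(body: unknown): string {
  if (!isPlaidSyncWebhook(body)) return "ignored";
  // ...queue a transactions sync for body.item_id here...
  return `sync queued for ${body.item_id}`;
}
```

Validating at the boundary like this means everything downstream can trust the payload's shape, which is the point of the schema-first approach.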

The intelligence layer is where most of our innovation lies. Instead of relying on a general-purpose LLM, we developed a proprietary model on the Cohere platform, trained on a Kaggle dataset of financial transactions and spending patterns and optimized for high-accuracy classification of financial data. This gives us better performance and reliability on our core tasks, transaction categorization, indulgence detection, and context-aware challenge generation, at lower latency and greater cost efficiency than a generic API call to a foundation model.
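To show how a classifier's output might feed the app, here is a hypothetical post-processing step. The label set and confidence threshold are illustrative, not the model's actual outputs:

```typescript
// Hypothetical post-processing of the classification model's output.
// Label names and the 0.8 threshold are illustrative assumptions.

interface Prediction {
  label: "essential" | "indulgent" | "recurring";
  confidence: number; // 0..1
}

const CONFIDENCE_THRESHOLD = 0.8;

// Only flag a transaction as an indulgence when the model is confident;
// low-confidence predictions are ignored so users are never penalized
// on a guess.
function toIndulgenceFlag(p: Prediction): boolean {
  return p.label === "indulgent" && p.confidence >= CONFIDENCE_THRESHOLD;
}
```

Gating on confidence is one way to keep a classification-driven penalty system conservative by default.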

Challenges we ran into

Building a system this dynamic surfaced several technical challenges. Ensuring immediate consistency between Plaid webhooks, our database, and the frontend was complex; we solved it with a robust webhook processing pipeline. The bigger challenge was moving beyond generic AI. Our initial prototypes used OpenAI models (namely GPT-5-mini) from Azure AI Foundry, but we found their outputs too broad for our needs and far too slow. Our solution was to build our own model. That meant significant time spent on data curation and multiple training iterations on the Cohere platform to produce a model that could reliably and quickly parse unstructured transaction data and output structured, accurate classifications. The shift from prompting a general model to querying a specialized one was critical to achieving the precision Karma's core functionality requires.
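Getting "structured, accurate classifications" in practice also means parsing model output defensively. A minimal sketch, assuming a hypothetical JSON response shape and an `uncategorized` fallback:

```typescript
// Sketch of defensively parsing structured model output: a classification
// is accepted only if it matches the expected shape; anything else falls
// back to "uncategorized". Field names are hypothetical.

interface Classification {
  category: string;
  indulgent: boolean;
}

function parseClassification(raw: string): Classification {
  const fallback: Classification = { category: "uncategorized", indulgent: false };
  try {
    const parsed = JSON.parse(raw) as Record<string, unknown>;
    if (typeof parsed.category === "string" && typeof parsed.indulgent === "boolean") {
      return { category: parsed.category, indulgent: parsed.indulgent };
    }
    return fallback;
  } catch {
    return fallback;
  }
}
```

A safe fallback path matters because a malformed response should degrade to "no penalty", never to a crash or a wrong Karma hit.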

Accomplishments that we're proud of

We are proud that we built an application that goes beyond simple data aggregation. By investing time in a custom-trained model, Karma provides genuinely personalized, context-aware financial guidance that adapts to the user's real-life habits with an accuracy a general-purpose model couldn't match. Going performance-first with Bun and Elysia pays off in an app that feels instantly responsive. And our end-to-end TypeScript monorepo is a testament to modern development practices, letting our team build, test, and deploy features quickly and with confidence.

What we learned

This project was a deep dive into building production-grade, AI-native applications. The key takeaway was the clear advantage of domain-specific models over generic ones for product-critical features. General LLMs are excellent for broad tasks, but custom-training a model is what gave us the control, accuracy, and performance we needed for a reliable user experience. We also validated the power of event-driven architecture for handling third-party integrations, and the importance of schema-first development for maintaining order and sanity in a complex, async system.

What's next for Karma

Our current platform is just a foundation. Our roadmap is focused on leveraging our model's capabilities even further:

- Financial coaching -- beyond basic swaps, the AI can provide long-term financial advice based on a deep understanding of a user's spending habits
- More gamification -- challenges, rewards, and social leaderboards to further motivate positive financial behavior
- Predictive budgeting -- originally planned for our MVP but cut due to time constraints; we believe our model can forecast upcoming expenses and potential budget shortfalls, allowing users to make adjustments before they overspend
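As a rough sketch of where predictive budgeting could start, the snippet below flags merchants that charge a stable amount every month and projects their combined total forward. Everything here, from the 10% stability tolerance to the function names, is a hypothetical starting point, not a built feature:

```typescript
// Rough sketch of a predictive-budgeting starting point: treat a merchant
// as "recurring" if it appears every month at a stable amount, then sum
// those averages as next month's baseline forecast. All names and the
// 10% tolerance are hypothetical.

interface Tx {
  merchant: string;
  amount: number;
  month: number; // 0-based month index
}

function forecastRecurring(history: Tx[], months: number): number {
  const byMerchant = new Map<string, number[]>();
  for (const tx of history) {
    const list = byMerchant.get(tx.merchant) ?? [];
    list.push(tx.amount);
    byMerchant.set(tx.merchant, list);
  }
  let forecast = 0;
  for (const amounts of byMerchant.values()) {
    // Must appear in every month of the window to count as recurring.
    if (amounts.length < months) continue;
    const avg = amounts.reduce((a, b) => a + b, 0) / amounts.length;
    const stable = amounts.every((a) => Math.abs(a - avg) <= avg * 0.1);
    if (stable) forecast += avg;
  }
  return forecast;
}
```

A real version would lean on the trained model rather than a fixed tolerance, but even this baseline gives a budget floor to warn users about before discretionary spending begins.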
