Inspiration

I’ve always wanted a J.A.R.V.I.S.-like assistant, but my context today is fragmented across Gemini, ChatGPT, and Claude. This “context amnesia” forces constant repetition. I built Cortex Protocol to solve this—a universal memory layer that acts as a persistent, portable “SSD” for my digital life, while giving me full control over my data.

What It Does

Cortex Protocol is an open-source Model Context Protocol (MCP) Server that works as a shared brain for AI agents. It uses a “Biological Funnel” with three layers: Hot memory (24-hour recall), Warm memory (monthly summaries), and Cold memory (permanent Knowledge Graph). A memory-decay system ensures irrelevant data fades, and a 3D Glass Brain dashboard visualizes the AI’s memory.
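
The tiering and decay described above can be sketched roughly like this. The half-lives, tier names, and function names are my illustrative assumptions, not Cortex's actual internals:

```python
import math
from datetime import timedelta

# Hypothetical half-lives per tier; the real Cortex values are internal.
HALF_LIFE_HOURS = {"hot": 24, "warm": 24 * 30, "cold": float("inf")}

def decay_score(base_relevance: float, age_hours: float, tier: str) -> float:
    """Exponentially decay a memory's relevance by its tier's half-life."""
    half_life = HALF_LIFE_HOURS[tier]
    if math.isinf(half_life):
        return base_relevance  # cold tier: permanent Knowledge Graph, no decay
    return base_relevance * 0.5 ** (age_hours / half_life)

def assign_tier(age: timedelta) -> str:
    """Route a memory into the Biological Funnel based on its age."""
    if age <= timedelta(hours=24):
        return "hot"   # 24-hour recall
    if age <= timedelta(days=30):
        return "warm"  # monthly summaries
    return "cold"      # permanent Knowledge Graph
```

With this shape, a memory's effective relevance halves every 24 hours while hot, so stale chatter fades while the cold tier keeps curated facts indefinitely.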

How I Built It

The project was 98% vibecoded using Antigravity. Gemini 3 Pro (1M context) powers “The Dreamer” for nightly memory consolidation, while Gemini 3 Flash handles fast, cost-efficient real-time memorize/recall. The stack includes Python FastAPI on Google Cloud Run, Firebase (Auth and Firestore Vector Search), and Next.js on Vercel.

Challenges & Accomplishments

Balancing a full-time job with the hackathon was intense. Improving retrieval for vague queries like “What else?” required adding a query-rewriting step before retrieval. Major wins include the “Magic Handoff” to the Cursor IDE, independent Dreamer services, and the Glass Brain.
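
The query-rewriting step works roughly like this stub. In the real pipeline the rewrite is done by Gemini with conversation context; the vague-phrase set and function below are hypothetical and only show the shape of the step:

```python
# Hypothetical sketch: expand vague follow-ups into standalone queries
# before they hit the vector index. The real step delegates this to Gemini.
VAGUE_FOLLOWUPS = {"what else?", "and?", "more?", "continue"}

def rewrite_query(query: str, history: list[str]) -> str:
    """Expand a vague follow-up using the most recent turns as context."""
    if query.strip().lower() in VAGUE_FOLLOWUPS and history:
        context = " ".join(history[-2:])
        return f"{context} (continue listing related memories)"
    return query  # specific queries pass through unchanged
```

A bare “What else?” carries no retrievable signal on its own, so folding the previous turns into the query is what lets vector search find the right neighborhood.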

Learnings & What’s Next

I learned memory needs curation, privacy needs granular control, and MCP state management is critical. Next, I aim to scale Cortex to millions through production polish, deeper integrations (including ChatGPT Apps), local-first on-device PII redaction, encrypted user-owned storage, and a smarter Biological Funnel.

Updates

posted an update

One major challenge I faced was memory quality at scale.

Two hard problems:

  • Duplication → similar memories crowd the system
  • Conflict resolution → when memories disagree, what’s truth?

To handle this, Cortex uses Google DeepMind's Gemini 3 Pro in the Dreamer pipeline to:

  • merge duplicate memories
  • resolve conflicts
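
The deduplication half of this can be approximated with a greedy embedding-similarity merge. This is only a sketch of the idea; the threshold is arbitrary and the actual Dreamer delegates the merge judgment to Gemini:

```python
import numpy as np

def merge_duplicates(embeddings: np.ndarray, texts: list[str],
                     threshold: float = 0.92) -> list[str]:
    """Greedy near-duplicate merge: keep only the first memory in each
    cluster of pairwise-similar embeddings (cosine >= threshold)."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    kept: list[int] = []
    for i in range(len(texts)):
        # Keep memory i only if it is not too similar to anything kept so far.
        if all(normed[i] @ normed[j] < threshold for j in kept):
            kept.append(i)
    return [texts[i] for i in kept]
```

Conflict resolution is the harder half, since it needs semantics ("moved to Berlin" vs. "lives in Paris") rather than geometry, which is why an LLM sits in that loop.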

It works... but it’s expensive and not scalable. Right now, I’m intentionally trading cost for simplicity to prove the system.

The good news: I’ve already designed a new memory architecture (Cortex v2) that:

  • eliminates duplication properly
  • improves memory quality while significantly reducing compute overhead
  • supports years of memory without degradation in retrieval quality
  • costs < $1/month per user at moderate usage

I’ll also benchmark it against LongMemEval and share results. Stay tuned.


posted an update

Post-submission update – Cortex Protocol V1.5 is now live. Key upgrades since the hackathon submission:

  • New landing page with complete docs, setup guide & integration directory → https://cortexmcp.vercel.app
  • New Memory Transfer feature (import full context history from any AI via one copy-paste)
  • Improved MCP server (faster retrieval + GraphRAG + encryption)
  • Full PAT system for user-controlled sharing

Also launched today on Product Hunt: https://www.producthunt.com/products/cortex-memory-mcp
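
The PAT system boils down to minting tokens scoped to specific permissions and verifying them on each request. The scope names, payload format, and HMAC scheme below are my own illustrative assumptions, not Cortex's actual token design:

```python
import hashlib
import hmac
import secrets

SECRET = secrets.token_bytes(32)  # server-side signing key (illustrative)

def mint_pat(user_id: str, scopes: list[str]) -> str:
    """Issue a personal access token bound to a user and a scope set."""
    payload = f"{user_id}:{','.join(sorted(scopes))}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def check_pat(token: str, required_scope: str) -> bool:
    """Verify the signature, then check the requested scope is granted."""
    payload, _, sig = token.rpartition(":")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or foreign token
    _, _, scopes = payload.partition(":")
    return required_scope in scopes.split(",")
```

The point of scoping is that a user can hand one AI a recall-only token while keeping write access for another, which is the user-controlled-sharing model described above.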

Still powered by Gemini 3 in the Dreamer, Ingest & Recall pipeline and built around the same user-owned MCP philosophy.

Looking forward to the final results!


posted an update

As we approach the Gemini 3 Hackathon results on April 8th (which also happens to be my birthday), I’ve been reflecting on a question I’m asked often:

“What’s the difference between Cortex Protocol and OpenMemory or other platforms?”

Most memory systems today are built for developers. They help AI agents remember better, but the control still stays with the application.

Cortex flips that model.

Cortex is built for the user.

  • You decide which AI can access your memory
  • You control what gets stored, shared, or hidden
  • Your context isn’t locked inside a single tool

My vision goes beyond IDEs or individual apps.

Cortex is designed to become the default memory layer for all AI agents.

Think OAuth... but for AI memory.

A future where:

  • You don’t repeat yourself across tools
  • Every AI understands you from day one
  • Your memory is portable, persistent, and truly yours

No matter the outcome, this is just the beginning for Cortex.


posted an update

Tech Stack Used for Cortex Protocol:

  • Frontend: Next.js 16+, Tailwind v4, Three.js
  • Backend: FastAPI (Python), MCP Server
  • Database: Firebase Firestore (Vector + Graph)
  • Search: Hybrid (HNSW Vector + BM25 Keyword)
  • Hosting: Vercel (Frontend) / GCP Cloud Run (Backend)
  • AI Models: Gemini 3 Flash/Pro
