Inspiration

Knowledge work runs on fragmented context.

Critical information lives across notes, emails, documents, meeting writeups, chats, and drafts. Teams constantly lose decisions, repeat questions, and waste time reconstructing what they already knew. The problem is not that organizations lack information. It is that their knowledge is scattered, unstructured, and hard to verify.

We were inspired by a simple idea: organizational knowledge should work more like memory than storage.

Instead of forcing people to search through disconnected files and threads, we want to turn messy internal information into structured, verifiable, and reusable memory that people and AI systems can actually rely on.

What it does

Our project is an organizational memory system that transforms scattered notes, documents, and other internal information into typed, structured memories.

Instead of treating everything as raw text, the system extracts meaningful units such as:

  • tasks
  • people and profiles
  • projects
  • facts
  • decisions
  • ideas

These memories are not just searchable. They are also verifiable, so users can inspect where they came from and check the source context behind them.
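As a rough illustration, a typed memory with an explicit source reference might look like the following TypeScript sketch. The type and field names here are our own illustrative guesses, not the project's actual schema:

```typescript
// Hypothetical memory schema: names and fields are illustrative only.
type MemoryType = "task" | "person" | "project" | "fact" | "decision" | "idea";

interface SourceRef {
  documentId: string; // note or upload the memory was extracted from
  excerpt: string;    // the span of source text that supports the memory
}

interface Memory {
  id: string;
  type: MemoryType;
  content: string;                    // the structured claim itself
  attributes: Record<string, string>; // type-specific metadata
  source: SourceRef;                  // every memory is traceable to its origin
  createdAt: string;
}

// Verifiability check: a memory passes only if its source excerpt
// actually appears in the referenced document.
function isVerifiable(memory: Memory, documents: Map<string, string>): boolean {
  const doc = documents.get(memory.source.documentId);
  return doc !== undefined && doc.includes(memory.source.excerpt);
}
```

Keeping the excerpt alongside the document ID is what lets a user (or an agent) jump from any answer back to the exact source context behind it.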

The result is a system that helps teams:

  • recover important context faster
  • reduce repeated searching across tools
  • keep knowledge organized as it evolves
  • trust AI-assisted answers because they are grounded in source material

In the long term, this can serve both as a human-facing product and as a memory layer for AI agents.

How we built it

We built the system around a simple principle: raw notes and documents should become structured memory objects.

The core approach combines:

  • LLM-based extraction to turn unstructured text into typed memories
  • A memory schema with explicit types, attributes, and metadata
  • Source-grounded memory creation, so every memory can be traced back to the note or document it came from
  • A lightweight workflow where users can write notes, upload content, and inspect the resulting memories

The early product flow is intentionally simple:

  1. A user writes or uploads information (through an integrated Markdown editor and PDF parsing)
  2. The system processes that content and extracts candidate memories
  3. Memories are stored in structured form with source references
  4. The user can review, retrieve, and build on top of them later
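The four steps above can be sketched end to end. In this minimal TypeScript sketch, a trivial rule-based function stands in for the LLM extraction step, and all names are our own illustrative choices rather than the project's actual API:

```typescript
// Minimal sketch of the ingest -> extract -> store -> retrieve flow.
// A trivial heuristic stands in for the real LLM extraction step.

interface CandidateMemory {
  type: "task" | "fact";
  content: string;
  sourceId: string; // reference back to the originating note
}

const store: CandidateMemory[] = [];

// Step 2: extract candidate memories (the real system would call an LLM here).
function extractCandidates(noteId: string, text: string): CandidateMemory[] {
  return text
    .split(/[.\n]/)
    .map(s => s.trim())
    .filter(s => s.length > 0)
    .map(s => ({
      type: /\bTODO\b/i.test(s) ? ("task" as const) : ("fact" as const),
      content: s,
      sourceId: noteId,
    }));
}

// Step 3: persist memories in structured form with source references.
function ingestNote(noteId: string, text: string): number {
  const candidates = extractCandidates(noteId, text);
  store.push(...candidates);
  return candidates.length;
}

// Step 4: retrieve later, e.g. all open tasks.
function retrieve(type: CandidateMemory["type"]): CandidateMemory[] {
  return store.filter(m => m.type === type);
}
```

The important property is that every stored memory carries its `sourceId`, so review and verification stay possible after extraction.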

The backend can also be connected directly to AI agents such as Claude and serve as a dedicated memory layer.

We are designing it to work first as a practical memory tool for people, while also laying the groundwork for agent-facing APIs.

On the technical side, we use Cloudflare Workers and Pages, Supabase, and Groq as our LLM provider.

Challenges we ran into

One of the biggest challenges was defining the right unit of memory: deciding what should become a task, a profile, a fact, or a project memory is much harder than simply storing notes. We also had to work out how memories should be merged and updated, since new notes often refine, extend, or partially change existing knowledge rather than create something entirely separate.

Search speed was another important challenge. A memory system quickly loses value if retrieval and updates slow down as the knowledge base grows, so we had to think early about avoiding naive linear search and designing for efficient lookup and memory maintenance at scale.

Trust was a core issue as well: memory is only useful if users can verify it, which means provenance and traceability matter just as much as extraction quality. On top of that, the inputs themselves are highly heterogeneous; notes, emails, documents, and drafts all contain useful context, but in very different formats and levels of clarity.

Finally, we had to balance simplicity and ambition throughout, because it is very easy to overbuild a full knowledge platform too early instead of focusing on a sharp, practical wedge.

Accomplishments that we're proud of

  • Designed a system that goes beyond note storage and focuses on structured organizational memory
  • Built early memory creation pipelines that can turn raw notes into typed, verifiable memory objects
  • Developed a scalable model for memory updates and retrieval that avoids naive linear search, targeting O(log n) lookup instead of O(n) as the memory base grows
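To make the O(log n) goal concrete, here is a small TypeScript sketch of binary search over a sorted key index. This is only an illustration of the complexity target; a production system would more likely use a real index structure (a B-tree or a vector index), and all names here are hypothetical:

```typescript
// Sketch of avoiding linear scans: keep memory keys sorted and binary-search
// them for O(log n) lookup instead of scanning every entry.

interface Entry {
  key: string;      // e.g. "task:review" or "project:alpha"
  memoryId: string; // points at the stored memory object
}

// Insert while preserving sort order (splice itself is O(n);
// a balanced tree would avoid that cost too).
function insertSorted(index: Entry[], entry: Entry): void {
  let lo = 0, hi = index.length;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (index[mid].key < entry.key) lo = mid + 1; else hi = mid;
  }
  index.splice(lo, 0, entry);
}

// O(log n) lookup by key via binary search.
function lookup(index: Entry[], key: string): string | undefined {
  let lo = 0, hi = index.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (index[mid].key === key) return index[mid].memoryId;
    if (index[mid].key < key) lo = mid + 1; else hi = mid - 1;
  }
  return undefined;
}
```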

What we learned

  • Search alone is not enough. People need organized memory, not just better retrieval
  • Verifiability is essential. Users will not trust a system that cannot show where an answer came from
  • The best early wedge is narrow: start with high-value memory creation and retrieval before expanding into a full platform
  • Organizational knowledge is dynamic, so memory systems must support updates, enrichment, and reorganization over time
  • The real opportunity is not just helping humans remember more, but helping teams and agents operate on a shared layer of trusted context

What's next for Mendelgate

We plan to build a polished workflow for creating memories from notes, documents, and uploads, while continuously improving the quality of memory typing, updating, and retrieval. At the same time, we want to make the interface lightweight and practical enough for everyday use, so the product feels useful in real workflows rather than like a heavy knowledge management system. Another priority is adding integrations for common knowledge sources such as email and internal documents, allowing the memory layer to capture context where work already happens. From there, the focus is on validating the product with real users to identify the strongest initial wedge and refine the use case that delivers the most immediate value. Over time, the goal is to expand from a human-facing memory product into an API and infrastructure layer for AI agents, so structured organizational memory can support both people and automated systems.

Long term, the goal is to become the organizational memory layer that turns fragmented internal information into structured, verifiable, and reusable context, so both people and AI can work with far less loss, duplication, and uncertainty.
