I came to software through art. As a dancer, filmmaker, and musician, I watched machine learning arrive and saw immediately what it would become. I started teaching myself development, offensive security, hardware networking, and local ML hosting, not to become an engineer, but because I needed to understand the tools I was going to have to live with. That security background changed everything. The more I learned about how agentic AI systems actually work, the more I realized most of them are architecturally unsafe by default: ambient shell access, opaque execution, no human gate before code runs. That insight compelled me to build something I could trust with my family's safety.
MINs and the Question of Inference
That line of thinking led me to MINs, Modular Inference Networks, a research direction centered on hierarchical abstraction using narrow specialist models. The core question was whether you could decompose intelligence the way evolution did: not one general model doing everything, but small, interpretable models doing specific things well, with structured handoffs between them. The goal was token efficiency, interpretability, and trust: knowing exactly what fired, why, and what it touched. MINs shaped how I think about every tool I build now. Inference is expensive, opaque, and a potential data exposure point. It should be scoped, not ambient.
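To make the handoff idea concrete, here is a toy sketch, not MIN research code: the specialist registry, task names, and trace format are all hypothetical stand-ins for narrow models.

```python
# Toy illustration: narrow "specialists" behind an explicit, logged handoff,
# so you always know exactly what fired and what it touched.
from typing import Callable

SPECIALISTS: dict[str, Callable[[str], str]] = {
    "classify": lambda text: "code" if "def " in text else "prose",
    "summarize": lambda text: text[:60],  # stand-in for a small narrow model
}

def route(task: str, payload: str) -> str:
    # The capability must be explicitly registered; no ambient fallback
    # to a general model.
    if task not in SPECIALISTS:
        raise KeyError(f"no specialist registered for task: {task}")
    print(f"[trace] specialist={task} input_chars={len(payload)}")  # interpretable handoff
    return SPECIALISTS[task](payload)

print(route("classify", "def f(): pass"))  # -> code
```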
Building Claritty
Claritty came out of that research as a practical agentic security framework: three files, zero dependencies, SHA-256 verified action execution, and a human approval gate before anything runs. The design principle is that restriction is the foundation, not an afterthought. Every capability has to be explicitly granted. The model proposes; the human executes. Claritty is currently welcoming adversarial review: the security model is designed to be stress-tested, not trusted on reputation alone. I released it under a dual license with a hard revenue threshold, so individuals and researchers get full access while commercial use funds continued development. Intelligence sovereignty as a business model.
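As a sketch of the shape of that gate (not Claritty's actual implementation; the function names and flow here are assumptions for illustration):

```python
import hashlib
import shlex
import subprocess

def digest(action: str) -> str:
    return hashlib.sha256(action.encode("utf-8")).hexdigest()

def approve(action: str) -> str | None:
    """Human gate: show the exact bytes, return their digest only on explicit consent."""
    print(f"Proposed action:\n  {action}")
    if input("Run this? [y/N] ").strip().lower() == "y":
        return digest(action)
    return None

def execute(action: str, approved_digest: str) -> None:
    """Refuse to run anything whose hash no longer matches what the human approved."""
    if digest(action) != approved_digest:
        raise PermissionError("action drifted after approval; refusing to run")
    subprocess.run(shlex.split(action), check=True)

proposal = "echo hello"        # what the model proposes
token = approve(proposal)      # the human sees and approves these exact bytes
if token:
    execute(proposal, token)   # executes only while the approval hash still matches
```

The point is that the only bytes that can run are the exact bytes the human approved.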
Codemap and This Hackathon
Codemap is the proof. It takes any source file, regardless of language, and generates a hash-indexed section map: every logical section of your code gets a unique 6-character hash, a one-sentence description, and precise boundary markers. The result is a structured, navigable artifact your AI tools can reference precisely instead of guessing. One model call does the semantic labeling. Everything after that (boundary validation, file instrumentation, section lookup) runs deterministically on your machine. Zero data leakage beyond that single call.

The real challenge was scrutinizing the idea itself against the GitLab Duo ecosystem: can regulated agentic augmentation actually work in practice, not just in theory? Can you put AI tooling in the hands of someone operating in a hostile environment and have them trust it completely? The demo answers that. Code lands, Claritty starts, and the model proposes orchestration but sees only what the user explicitly sends via Codemap as an agentic action. No data leakage, no silent execution; the consent mechanism sits entirely outside the model's sight and reach. No ambient access, no hidden execution, no "trust me." GitLab was the test case. The idea passed. Productivity and security don't have to be at odds. Codemap is the argument, Claritty is the infrastructure, and this hackathon was the test.
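For concreteness, here is one possible version of that indexing scheme; the field names and the six-character truncation are illustrative assumptions, not Codemap's actual format.

```python
import hashlib

def section_id(text: str) -> str:
    """Deterministic 6-character hash indexing one logical section, computed locally."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()[:6]

chunk = "def load_config(path):\n    return json.load(open(path))\n"
entry = {
    "id": section_id(chunk),                               # local, reproducible
    "description": "Loads JSON configuration from disk.",  # the single model call supplies this
    "start_line": 12,                                      # boundary markers into the source file
    "end_line": 14,
}
print(entry)
```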
Built With
- claritty
- python