Inspiration
Human brains don't send every sensory input to conscious reasoning. The thalamus and sensory cortex filter ~99% of incoming signals through habituation, circadian rhythms, and salience detection. Only novel, important stimuli reach the prefrontal cortex for conscious thought.
Most AI agents lack this filter. They send every sensor event directly to the LLM, wasting API calls on routine noise. We built Cortex to add this missing perception layer — the cognitive mechanisms that decide what is worth reasoning about.
What it does
Cortex is a cognitive-science-based perception framework that sits between sensors and Gemini 3. It applies three neuroscience mechanisms before any API call:
- Habituation Filter — repeated stimuli raise the threshold (you stop noticing the clock ticking)
- Circadian Rhythm — adjusts vigilance by time of day (night mode heightens alertness)
- Decision Engine — routes events by priority using a salience network
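To make the habituation idea concrete, here is a minimal stdlib-only sketch of a filter in which each repeat of the same stimulus raises its salience threshold. The class and field names are illustrative assumptions, not Cortex's actual API:

```python
from collections import defaultdict

class HabituationFilter:
    """Sketch: repeated identical stimuli raise the salience threshold,
    so routine events stop passing through (hypothetical names)."""

    def __init__(self, base_threshold=0.3, step=0.15, max_threshold=0.9):
        self.base_threshold = base_threshold
        self.step = step                  # how much each repeat raises the bar
        self.max_threshold = max_threshold
        self.seen = defaultdict(int)      # stimulus key -> repeat count

    def passes(self, key, salience):
        """Return True if the event is still novel/salient enough to forward."""
        threshold = min(self.base_threshold + self.step * self.seen[key],
                        self.max_threshold)
        self.seen[key] += 1
        return salience > threshold

f = HabituationFilter()
# A clock tick with constant salience 0.5: noticed at first, then habituated.
results = [f.passes("clock_tick", 0.5) for _ in range(5)]
# early events pass, later repeats are filtered
```

A circadian mechanism would then scale these thresholds by time of day, and the decision engine would route whatever survives.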
The Gemini 3 bridge provides a complete perceive-reason-act pipeline. Events pass through Cortex first, and only novel, significant events reach Gemini 3 for deep reasoning. In our tests, this filters 60-80% of noise, saving API calls while improving response relevance.
How we built it
- 6,053 lines of Python across 7 cognitive modules, 3 sensor sources, and 3 integration bridges — 42 commits
- Pure Python, zero dependencies — stdlib-only, works anywhere Python 3.10+ runs
- Config injection pattern — one CortexConfig object wires everything
- Gemini 3 Flash Preview integration via REST API (urllib, no SDK dependency)
- Mock mode for testing without API key, production mode for real deployment
- 115 tests running in under 0.5 seconds
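The config-injection pattern can be sketched as a single dataclass handed to every module; the field names below are hypothetical stand-ins for whatever `CortexConfig` actually exposes:

```python
from dataclasses import dataclass

@dataclass
class CortexConfig:
    """Illustrative config object; field names are assumptions."""
    habituation_base_threshold: float = 0.3
    night_start_hour: int = 22
    night_end_hour: int = 6
    night_vigilance_boost: float = 0.2
    mock_mode: bool = True                       # no API key needed when True
    gemini_model: str = "gemini-3-flash-preview"

class CircadianRhythm:
    """Every module receives the same config object at construction."""
    def __init__(self, config: CortexConfig):
        self.config = config

    def vigilance(self, hour: int) -> float:
        """Higher vigilance at night, per the 'night mode' described above."""
        cfg = self.config
        at_night = hour >= cfg.night_start_hour or hour < cfg.night_end_hour
        return 1.0 + (cfg.night_vigilance_boost if at_night else 0.0)

config = CortexConfig(mock_mode=True)
rhythm = CircadianRhythm(config)
```

Wiring everything through one object keeps the modules dependency-free and makes mock mode a single flag flip.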
Architecture:
Sensors → Cortex (perception) → Gemini 3 (reasoning) → Actions
          habituation filter     contextual reasoning    alert
          circadian rhythm       planning & decisions    investigate
          priority assessment    natural language        log
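The perceive-reason-act funnel in the diagram can be sketched as a short loop; the mock reasoner and event shapes below are illustrative assumptions, not Cortex's real interfaces:

```python
def reason_mock(event):
    """Stand-in for the Gemini 3 call (mock mode): returns an action."""
    return {"action": "investigate", "event": event}

def pipeline(events, perception_filter):
    """Only events that survive the perception layer reach reasoning."""
    actions = []
    for event in events:
        if perception_filter(event):
            actions.append(reason_mock(event))
    return actions

events = [{"id": i, "salience": s}
          for i, s in enumerate([0.9, 0.1, 0.2, 0.95, 0.05])]
actions = pipeline(events, lambda e: e["salience"] > 0.5)
# 5 sensor events reduced to 2 reasoning calls
```

In production the filter would be the full habituation/circadian/priority stack and the reasoner a real Gemini 3 request, but the control flow is the same.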
Challenges we ran into
- JSON response parsing: Gemini 3 wraps JSON responses in markdown code fences (```` ```json ... ``` ````), which required custom stripping logic before parsing
- Model naming: the actual API model ID is `gemini-3-flash-preview`, not `gemini-3-flash`; we discovered this by querying the models list endpoint
- Balancing filter sensitivity: too-aggressive filtering misses important events, while too-permissive filtering defeats the purpose. We tuned thresholds based on the cognitive science literature (Thompson & Spencer, 1966)
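A minimal version of the fence-stripping fix might look like this (a sketch of the general technique, not the project's exact code):

```python
import json
import re

def strip_code_fence(text: str) -> str:
    """Remove a ```json ... ``` wrapper if the model added one.
    Falls back to the stripped text when no fence is present."""
    match = re.match(r"^```(?:json)?\s*\n(.*?)\n?```\s*$",
                     text.strip(), re.DOTALL)
    return match.group(1) if match else text.strip()

raw = '```json\n{"priority": "high", "confidence": 0.97}\n```'
data = json.loads(strip_code_fence(raw))
```

Plain (unfenced) JSON passes through unchanged, so the helper is safe to apply to every response.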
Accomplishments that we're proud of
- 6,053 lines of Python, 42 commits — built in a single day, fully functional framework
- Zero external dependencies — pure stdlib Python, no pip install nightmares
- 80% noise filter rate in real-world tests — 5 sensor events reduced to 1 API call
- 95-98% confidence scores from Gemini 3 on filtered events (vs ~50% on unfiltered noise)
- 115 tests passing in 0.47 seconds — well-tested, production-ready
- Real-world validated — 91% cognitive load reduction on 22h / 944 events of live camera data
- Multiple integration bridges — Gemini 3, Elasticsearch, MCP Server, ReachyMini robot
What we learned
- Cognitive science principles translate directly to practical engineering — habituation, circadian rhythms, and orienting responses are not just metaphors, they are implementable algorithms
- Pre-filtering dramatically improves LLM reasoning quality — Gemini 3 gives better answers when it only sees novel, important events
- Mock modes are essential for rapid development — we could iterate on the perception pipeline without burning API credits
What's next for Cortex
- ReachyMini robot integration — physical body with camera, microphone, and IMU sensors feeding directly into Cortex perception pipeline
- PyPI publication — `pip install cortex-agent` for easy adoption
- Additional LLM bridges — Claude, GPT, local models
- Streaming perception — real-time continuous filtering for production deployments
- Community sensor sources — plug-and-play sources for common IoT sensors
Built With
- borbely-1982
- cognitive-science-(thompson-&-spencer-1966)
- cognitivescience
- gemini-3-flash-preview-api
- python