Inspiration
Research is the engine of human progress — but the process of conducting research is painfully slow. Researchers spend months on literature reviews, struggle to identify novel angles in crowded fields, and burn countless hours drafting papers that follow rigid academic conventions.
We asked ourselves: What if AI could be the research assistant that every scientist deserves?
We were inspired by watching brilliant researchers get bogged down in the tedious parts of their work — not the exciting experiments or breakthrough insights, but the repetitive grind of finding papers, synthesizing trends, and formatting manuscripts. With the explosion of LLM capabilities and agentic AI systems, we saw an opportunity to build something transformative: an AI that doesn't just answer questions, but actively accelerates the entire research lifecycle from idea to publication.
What It Does
CortexLab is an AI-powered research assistant that guides you from a rough research idea to a publication-ready paper.
Here's how it works:
1. Describe Your Interest
Tell CortexLab what you want to research (e.g., "transformer architectures for medical imaging").
2. Gap Discovery
Our AI agents scan the research landscape, analyze trends, and identify untapped research opportunities that others have missed. You get a curated list of promising directions with supporting evidence.
3. Deep Dive & Experiment Design
Pick a direction, and CortexLab conducts a comprehensive literature review. It then generates a detailed experiment plan: baselines to compare against, datasets to use, evaluation metrics, and implementation guidance.
4. Paper Drafting
Upload your experiment results (logs, metrics, notes), and CortexLab drafts a complete academic paper in IMRaD format (Introduction, Methods, Results, and Discussion). Export to Word and submit to your target venue.
How We Built It
Architecture
- Frontend: React 19, TypeScript, Vite, Tailwind CSS v4, Framer Motion
- Backend: FastAPI (Python) with async SQLAlchemy
- AI Engine: LangGraph for multi-agent orchestration, LangChain for LLM integrations
- LLM Providers: Google Gemini and Groq for fast inference
- Database: PostgreSQL (async) for production, SQLite for development
- Auth: Google OAuth 2.0 with secure session management
The Agent System
We built a sophisticated multi-agent pipeline using LangGraph with specialized nodes:
- LiteratureScout — Searches and retrieves relevant papers
- TrendSynthesizer — Identifies emerging themes and patterns
- GapMiner — Discovers underexplored research opportunities
- DirectionGenerator — Proposes concrete research directions
- DeepDiveScout — Conducts thorough literature analysis
- ExperimentDesigner — Creates actionable experiment plans
- PaperWriter & PaperEditor — Drafts and refines academic manuscripts
Each agent is a specialized node in our LangGraph workflow, with shared state passed between them to build comprehensive research artifacts.
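The actual agent code is project-specific, but the shared-state pattern described above can be sketched in dependency-free Python. This is an illustrative stub, not the real implementation: the node names echo the list above, and the outputs are placeholders standing in for real LLM calls.

```python
from typing import Callable

# Shared state that every node reads from and writes to, mirroring the
# single state object a LangGraph workflow threads through its nodes.
State = dict

def literature_scout(state: State) -> State:
    # Hypothetical stub: a real node would query a paper-search API.
    state["papers"] = ["Paper A", "Paper B"]
    return state

def trend_synthesizer(state: State) -> State:
    # Builds on the previous node's output via the shared state.
    state["trends"] = [f"theme observed in {p}" for p in state["papers"]]
    return state

def gap_miner(state: State) -> State:
    state["gaps"] = [f"underexplored angle behind {t}" for t in state["trends"]]
    return state

# A linear pipeline: each node is a function over the shared state.
PIPELINE: list[Callable[[State], State]] = [
    literature_scout,
    trend_synthesizer,
    gap_miner,
]

def run(topic: str) -> State:
    state: State = {"topic": topic}
    for node in PIPELINE:
        state = node(state)
    return state
```

In a real LangGraph graph the same idea is expressed with `StateGraph`, typed state, and explicit edges, which also enables branching and parallel execution rather than this strictly linear pass.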
Challenges We Ran Into
Agent Orchestration Complexity
Coordinating multiple AI agents that share context and build on each other required careful state design and strict information boundaries.
LLM Output Reliability
Producing consistent, structured JSON outputs required extensive prompt engineering and a JSON repair layer for edge cases.
Real-Time Streaming
We implemented Server-Sent Events (SSE) to stream agent progress so users see live updates instead of long loading screens.
Tailwind CSS v4 Migration
Major changes from v3 required multiple layout and responsiveness iterations.
Balancing Depth vs. Speed
We optimized prompts and parallelized agent execution to keep outputs thorough without excessive wait times.
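CortexLab's actual repair layer isn't shown here, but a minimal sketch of the idea, assuming the two most common LLM failure modes (answers wrapped in a Markdown code fence, and trailing commas before a closing bracket), might look like this; `parse_llm_json` is a hypothetical name:

```python
import json
import re

def parse_llm_json(raw: str) -> dict:
    """Best-effort parse of JSON emitted by an LLM.

    Repairs two common failure modes: the model wrapping its answer in a
    ```json fenced block, and trailing commas before } or ].
    """
    text = raw.strip()
    # Strip a surrounding Markdown code fence, if present.
    fence = re.match(r"^```(?:json)?\s*(.*?)\s*```$", text, re.DOTALL)
    if fence:
        text = fence.group(1)
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        # Remove trailing commas before a closing brace/bracket and retry.
        repaired = re.sub(r",\s*([}\]])", r"\1", text)
        return json.loads(repaired)
```

Production repair layers typically go further (bracket balancing, retrying the LLM with the error message), but the structure is the same: attempt a strict parse first, then apply targeted fixes.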
Accomplishments We're Proud Of
End-to-End Research Pipeline
From “I’m curious about X” to “Here’s a draft paper about X.”
Beautiful, Polished UI
Clean white theme, smooth animations, and intuitive workflows.
Multi-Agent Architecture
A production-grade LangGraph system with 9 specialized agents working seamlessly.
Export to Word
Generate .docx files ready for academic submission.
Google OAuth Integration
Secure, production-ready authentication and session management.
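In practice a library such as python-docx usually handles Word export; purely to illustrate what "export to Word" means at the file level, here is a dependency-free sketch showing that a .docx is just a zip archive of OOXML parts. The helper name `export_docx` and the minimal part set are illustrative, not CortexLab's actual exporter.

```python
import zipfile
from xml.sax.saxutils import escape

# The minimum parts a .docx package needs to be recognized.
CONTENT_TYPES = (
    '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>'
    '<Types xmlns="http://schemas.openxmlformats.org/package/2006/content-types">'
    '<Default Extension="rels" ContentType="application/vnd.openxmlformats-package.relationships+xml"/>'
    '<Override PartName="/word/document.xml" ContentType="application/vnd.openxmlformats-officedocument.wordprocessingml.document.main+xml"/>'
    "</Types>"
)
RELS = (
    '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>'
    '<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">'
    '<Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/officeDocument" Target="word/document.xml"/>'
    "</Relationships>"
)

def export_docx(paragraphs: list[str], path: str) -> None:
    """Write a minimal .docx: one WordprocessingML paragraph per string."""
    body = "".join(
        f"<w:p><w:r><w:t>{escape(p)}</w:t></w:r></w:p>" for p in paragraphs
    )
    document = (
        '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>'
        '<w:document xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main">'
        f"<w:body>{body}</w:body></w:document>"
    )
    with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as z:
        z.writestr("[Content_Types].xml", CONTENT_TYPES)
        z.writestr("_rels/.rels", RELS)
        z.writestr("word/document.xml", document)
```

A real exporter would add styles, headings, and citations, but the packaging step is the same: render XML parts, then zip them with the expected names.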
What We Learned
LangGraph Is Powerful
Graph-based orchestration is ideal for complex, multi-step AI workflows.
Prompt Engineering Is Everything
Small prompt changes had a massive impact on research quality.
UI/UX Matters for AI Products
A great AI needs an equally great interface.
Async Python Is Essential
Required for responsive, scalable agent execution.
Iterative Development Wins
Building an MVP first and refining later proved far more effective than over-engineering.
What’s Next for CortexLab
- Citation integration (Semantic Scholar, arXiv, Google Scholar)
- Collaborative research workspaces
- Fine-tuned academic writing models
- Domain-specific modes (CS, Biology, Physics)
- Reference management (Zotero, Mendeley)
- Peer review simulation agents
- Conference and journal deadline tracking
CortexLab: From idea to publication, accelerated by AI.