Inspiration
The project was born from the need for sovereign intelligence. Traditional AI research tools are often centralized, heavily filtered, and prone to data logging. The goal was to create a "Perplexity-style" experience that is completely uncensored, private, and runs on decentralized infrastructure, giving power back to investigative researchers and analysts.
What it does
Deep Agent is a deep investigation platform that goes beyond simple chat. It features three primary modes:
- Search: Quick, real-time web results with citations.
- Research: A multi-step pipeline that plans and executes parallel searches to synthesize comprehensive reports.
- Insight (Graph-RAG): The core innovation. It ingests documents and web data into a Neo4j Knowledge Graph, mapping relationships between entities like people, organizations, and dates to find hidden patterns.
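The Insight mode's first step, turning the entities found in one source into graph-ready relationships, can be sketched like this. This is a minimal illustration, not the project's actual ingestion code; the function name and the co-occurrence heuristic are assumptions.

```python
# Hypothetical sketch of the Insight ingestion step: entities extracted from
# one document (e.g. people, organizations, dates) become
# (subject, RELATED_TO, object) triples ready to load into Neo4j.
# Entity extraction itself (done by the LLM in the real pipeline) is assumed
# to have already happened.

def to_triples(entities):
    """Pair up every two entities that co-occur in the same document."""
    names = [name for name, _label in entities]
    return [
        (names[i], "RELATED_TO", names[j])
        for i in range(len(names))
        for j in range(i + 1, len(names))
    ]

entities = [("Acme Corp", "Organization"), ("Jane Doe", "Person"), ("2024-01-15", "Date")]
print(to_triples(entities))
```

Co-occurrence within a single source is the simplest linking rule; the real pipeline would also need typed relationships and deduplication before writing to the graph.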
How I built it
The app uses a "Sovereign Stack":
- AI Engine: Powered by Venice AI for LLM reasoning (deepseek-v3.2), web search (llama-3.3-70b), and vision (venice-v3-vision).
- Brain: Neo4j serves as the persistent memory, handling vector search and relationship mapping.
- Backend: Built with FastAPI and LangGraph to manage complex agentic workflows.
- Deployment: Configured for the Akash Network, a decentralized cloud, ensuring the platform remains permissionless.
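The model split in the stack above can be pictured as a simple mode-to-model routing table. This is a sketch only: the endpoint URL is an assumption (Venice AI exposes an OpenAI-compatible chat API), and no request is actually sent here.

```python
# Sketch of routing each mode to its Venice AI model, per the stack above.
# VENICE_URL is an assumed endpoint path; nothing is sent over the network.
VENICE_URL = "https://api.venice.ai/api/v1/chat/completions"  # assumption

MODE_MODELS = {
    "search": "llama-3.3-70b",     # quick, web-grounded answers
    "research": "deepseek-v3.2",   # multi-step reasoning and synthesis
    "vision": "venice-v3-vision",  # image understanding
}

def build_payload(mode, prompt):
    """Assemble an OpenAI-style chat payload for the given mode."""
    return {
        "model": MODE_MODELS[mode],
        "messages": [{"role": "user", "content": prompt}],
    }

print(build_payload("research", "Map the entities in this case file."))
```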
Challenges I ran into
- Parallelization: Orchestrating multiple concurrent web searches and scrapers while maintaining a clean state in the agent's memory.
- Graph Mapping: Automatically extracting entities and creating meaningful RELATED_TO links in Neo4j from unstructured web text without generating "hallucinated" connections.
- Decentralized Deployment: Tuning the Akash SDL configuration to handle persistent storage and co-located services like Redis.
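One practical guard against "hallucinated" connections is to require a minimum level of evidence before writing an edge. The sketch below only emits a Cypher `MERGE` for an entity pair that co-occurs in at least `min_support` sources; the threshold, labels, and statement shape are illustrative assumptions, not the project's actual rules.

```python
from collections import Counter

# Sketch: filter candidate RELATED_TO links by co-occurrence support before
# building Cypher MERGE statements, so one-off coincidences never reach the
# graph. Threshold and node label are illustrative.

def cooccurrence_edges(docs, min_support=2):
    """docs: list of entity-name lists, one per source document."""
    pairs = Counter()
    for entities in docs:
        uniq = sorted(set(entities))
        for i, a in enumerate(uniq):
            for b in uniq[i + 1:]:
                pairs[(a, b)] += 1
    return [
        f"MERGE (a:Entity {{name: '{a}'}}) "
        f"MERGE (b:Entity {{name: '{b}'}}) "
        f"MERGE (a)-[:RELATED_TO {{support: {n}}}]->(b)"
        for (a, b), n in pairs.items()
        if n >= min_support
    ]

docs = [["Acme Corp", "Jane Doe"], ["Jane Doe", "Acme Corp"], ["Acme Corp", "Bob"]]
for stmt in cooccurrence_edges(docs):
    print(stmt)
```

In real code the names should go through driver query parameters rather than string formatting, both for safety and so Neo4j can cache the query plan.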
Accomplishments that I'm proud of
- Graph-RAG Integration: Successfully moving from flat text retrieval to a relational model where the AI can "see" the network of a case.
- Zero-Big-Tech Stack: Building a high-performance research tool without relying on OpenAI, Google, or AWS.
- Speed: Generating in-depth research reports quickly by using ThreadPoolExecutors for parallel data gathering.
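The parallel data-gathering step mentioned above looks roughly like this fan-out/fan-in pattern. `fetch` is a stub standing in for the real web-search or scrape call, and the worker count is an arbitrary example.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch(query):
    """Stub for a real web-search/scrape call (network I/O in production)."""
    return {"query": query, "snippet": f"results for {query}"}

def gather(queries, max_workers=8):
    """Run one fetch per query concurrently, collecting results as they finish."""
    results = []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(fetch, q) for q in queries]
        for fut in as_completed(futures):
            results.append(fut.result())
    return results

hits = gather(["akash sdl", "neo4j vector index", "langgraph"])
print(len(hits))  # 3
```

Threads suit this workload because the time is spent waiting on network responses, not on CPU-bound work.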
What I learned
- Agentic Workflows: The power of LangGraph for managing long-running research tasks that require decision-making at each step.
- Vector + Graph: Why vector search alone isn't enough for investigations; you need the explicit relationships that only a knowledge graph provides.
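The limitation above is easy to demonstrate: vector search retrieves documents similar to the query, but it cannot answer "how is A connected to C?" when the link runs through an intermediate entity. A graph makes the path explicit, as in this toy breadth-first search (the entity names are invented for illustration).

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search over an undirected entity graph."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Toy case graph: no document mentions Jane Doe and John Roe together,
# so similarity search alone would never surface the link.
graph = {
    "Jane Doe": ["Acme Corp"],
    "Acme Corp": ["Jane Doe", "Shell Co"],
    "Shell Co": ["Acme Corp", "John Roe"],
    "John Roe": ["Shell Co"],
}
print(shortest_path(graph, "Jane Doe", "John Roe"))
```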
- Sovereign Infrastructure: The nuances of deploying to decentralized clouds like Akash compared to traditional VPS providers.
What's next for UNCENSORED AI
- Enhanced Memory: Wiring in the Redis and Supabase scaffolds for live case notifications and long-term user session storage.
- Collaborative Investigation: Allowing multiple researchers to interact with the same Knowledge Graph in real-time.
- Local-First Option: Providing an even easier path for users to run the entire stack on their own local hardware.
Built With
- akash
- fastapi
- python
- supabase
- venice