# 🔍 AI Document Search (RAG Chatbot)
Chat with your PDF documents using an AI-powered chatbot.
This project uses Retrieval-Augmented Generation (RAG), so answers are grounded in your uploaded files rather than only the model's memory.
## ✨ Features
- 📄 Upload PDF documents for ingestion
- 🔎 Semantic search with embeddings (finds meaning, not just keywords)
- 💬 Ask natural questions and get accurate, context-aware answers
- 📑 Source citations from your original documents
- ⚡ Lightweight frontend (HTML, CSS, JS) + FastAPI backend
- 🧠 Powered by Ollama LLM + LangChain
- 📦 Vector database with FAISS (local)
## 🛠️ Tech Stack
- **Frontend:** HTML, CSS, JavaScript
- **Backend:** FastAPI (Python)
- **AI Model:** Ollama (LLM) + LangChain (retrieval & QA chain)
- **Vector DB:** FAISS (default)
- **Deployment:** Docker (backend), Vercel/static hosting (frontend)
## ⚙️ How It Works
1. Upload PDF → extract text with LangChain loaders
2. Chunk text → split into smaller sections for better retrieval
3. Embed chunks → convert into vectors using Ollama embeddings
4. Store vectors → save in the FAISS index
5. Ask a question → the query is embedded and compared to stored vectors
6. Retrieve top matches → the most relevant document chunks are selected
7. Generate answer → the Ollama LLM forms a response using the retrieved chunks
8. Return results → the answer is displayed in the chatbot UI
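The steps above can be sketched end to end. To keep the sketch self-contained and runnable anywhere, it stands in a bag-of-words vector for the Ollama embeddings and a brute-force cosine-similarity store for FAISS; the real pipeline would use LangChain's PDF loaders, Ollama embeddings, and the FAISS vector store instead:

```python
import math
import re
from collections import Counter

def chunk(text: str, size: int) -> list[str]:
    """Step 2: split text into fixed-size word chunks (stand-in for a LangChain splitter)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Steps 3 and 5: toy 'embedding' as a lowercase bag-of-words (stand-in for Ollama embeddings)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    denom = math.sqrt(sum(x * x for x in a.values())) * math.sqrt(sum(x * x for x in b.values()))
    return dot / denom if denom else 0.0

class ToyVectorStore:
    """Steps 4 and 6: brute-force nearest-neighbour store (stand-in for FAISS)."""
    def __init__(self) -> None:
        self.items: list[tuple[Counter, str]] = []

    def add(self, chunks: list[str]) -> None:
        self.items += [(embed(c), c) for c in chunks]

    def search(self, query: str, k: int = 2) -> list[str]:
        qv = embed(query)
        ranked = sorted(self.items, key=lambda item: cosine(qv, item[0]), reverse=True)
        return [text for _, text in ranked[:k]]

# Step 1 is elided: this plain string stands in for extracted PDF text.
doc = ("FAISS stores dense vectors locally. FastAPI serves the chatbot API. "
       "Ollama runs the language model on your own machine.")
store = ToyVectorStore()
store.add(chunk(doc, size=5))

# Steps 5-6: embed the question and retrieve the best-matching chunk.
hits = store.search("where does the language model run?", k=1)
# Steps 7-8 would pass `hits` to the Ollama LLM as context and show its answer in the UI.
```

Swapping `ToyVectorStore` for FAISS and `embed` for real Ollama embeddings keeps the same ingest/retrieve flow; only the vector quality changes.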
