Inspiration

We love AI, but we don’t love it when it confidently gives wrong answers. Codexa was born from a simple idea: AI should be trustworthy. Instead of guessing, it should lean on real information that users provide.

What it does

Codexa lets you upload PDFs or text files and then ask questions about them. It uses Elasticsearch to find the right content and Google Gemini to summarize it into a clear answer with proof. Every result is backed by the actual source text. No guesswork.
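The retrieve-then-summarize flow can be sketched roughly like this. The index name, field names, and prompt wording are illustrative assumptions, not the actual Codexa code; only format_evidence is a concrete, self-contained piece:

```python
# Sketch of a retrieve-then-answer flow: search hits become numbered,
# citable evidence that the model must ground its answer in.

def format_evidence(hits):
    """Turn search hits into a numbered, citable context block."""
    return "\n".join(
        f"[{i}] ({hit['source']}) {hit['text']}"
        for i, hit in enumerate(hits, start=1)
    )

def answer(question, es_client, llm):
    # Hypothetical retrieval: full-text match over the uploaded documents.
    # Index and field names ("documents", "filename", "text") are assumptions.
    resp = es_client.search(
        index="documents",
        query={"match": {"text": question}},
        size=5,
    )
    hits = [
        {"source": h["_source"]["filename"], "text": h["_source"]["text"]}
        for h in resp["hits"]["hits"]
    ]
    prompt = (
        "Answer using ONLY the sources below; cite them as [n].\n\n"
        f"{format_evidence(hits)}\n\nQuestion: {question}"
    )
    return llm.generate_content(prompt).text  # Gemini-style call (assumed)
```

Keeping the evidence numbered makes it cheap to map each sentence of the answer back to a source passage, which is what lets the UI show proof for every claim.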

How we built it

  • Flask for backend and routing
  • Elastic Cloud for indexing and search
  • Google Gemini 2.5 API for reasoning
  • SQLite for storing API keys
  • HTML + CSS for the dark, minimal UI
  • PyPDF2 for PDF text extraction
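The ingestion side of that stack can be sketched as follows. The chunk size, overlap, and document field names are illustrative assumptions; PyPDF2 is the one piece named in the stack above:

```python
# Sketch of ingestion: extract PDF text, split it into overlapping
# chunks, and shape each chunk as a document ready for indexing.

def chunk_text(text, size=1000, overlap=100):
    """Split text into overlapping chunks so passages aren't cut mid-context."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

def extract_pdf_text(path):
    # Third-party dependency from the stack above.
    from PyPDF2 import PdfReader
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

def to_documents(filename, text):
    # Field names ("filename", "chunk", "text") are assumptions for illustration.
    return [
        {"filename": filename, "chunk": i, "text": c}
        for i, c in enumerate(chunk_text(text))
    ]
```

Overlapping chunks trade a little index size for better recall: a sentence that straddles a chunk boundary still appears whole in at least one chunk.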

Local Deployment

Run it easily on your computer:

  1. Clone the repo
  2. Install dependencies
  3. Add your Elastic and Gemini API keys
  4. Run python app.py and open http://127.0.0.1:5000
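In shell form, the steps above look roughly like this (the repo URL, requirements filename, and env-var names are placeholders; supply the keys however your setup expects them):

```shell
git clone https://github.com/<your-user>/codexa.git   # repo URL is a placeholder
cd codexa
pip install -r requirements.txt                        # assumes a requirements.txt
export ELASTIC_API_KEY=...    # your Elastic Cloud key
export GEMINI_API_KEY=...     # your Gemini key
python app.py                 # then open http://127.0.0.1:5000
```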

Challenges we ran into

We had to balance Elastic’s powerful search against Gemini’s context limits while keeping everything fast. Handling large files and keeping the UI intuitive were also important design challenges.

Accomplishments we’re proud of

We built a fully working retrieval and reasoning pipeline that feels smooth to use. The UI is clean. The answers are backed by real evidence. And it already feels like a tool that could ship to real customers.

What we learned

We learned how to properly coordinate search and generative AI. We improved our skills in Elastic optimization, API orchestration, and prompt design focused on factual accuracy.
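A factuality-focused prompt in the spirit described above might look like this. The wording is an illustrative assumption, not Codexa's exact prompt:

```python
# Sketch of a grounded prompt: the model is told to answer only from the
# numbered sources and to say "I don't know" when they don't cover the question.

def grounded_prompt(question, sources):
    src_block = "\n".join(f"[{i}] {s}" for i, s in enumerate(sources, start=1))
    return (
        "You are a careful assistant. Use ONLY the numbered sources below.\n"
        "Cite every claim as [n]. If the sources do not answer the question, "
        "say you don't know.\n\n"
        f"Sources:\n{src_block}\n\nQuestion: {question}"
    )
```

An explicit refusal instruction is what keeps the model from guessing when retrieval comes back empty, which is the failure mode the project set out to avoid.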

What’s next for Codexa

  • Add user accounts and usage analytics
  • Support multi-document search
  • Highlight citations inside AI summaries
  • Deploy Codexa publicly on Render or Vercel
