I built SupplyGuard to make supplier-risk answers immediate and explainable. Too often the data is scattered across systems, so I combined Elastic's hybrid search (BM25 + kNN) with a conversational layer that lets me ask natural questions and get grounded answers with citations.
I designed a FastAPI backend exposing hybrid search and a RAG-style chat endpoint, backed by Elasticsearch with a dense_vector field for kNN. I built a React UI with a Chat page, risk dashboards, CSV export, and quick "Add Supplier/Certification" flows to demonstrate real interactions. I also wired an optional Vertex AI path that I can toggle via environment variables: when enabled, the chat uses a Gemini provider; otherwise it falls back to local synthesis so the demo works without cloud credentials.
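To make the hybrid retrieval concrete, here is a minimal sketch of the kind of request body the search endpoint might send to Elasticsearch 8.x, combining a BM25 `match` clause with a top-level `knn` clause over a dense_vector field. The index and field names (`suppliers`, `summary`, `embedding`) are illustrative assumptions, not the actual SupplyGuard schema.

```python
# Build a hybrid (BM25 + kNN) Elasticsearch 8.x search body.
# Field names here are assumptions for illustration only.
from typing import Any

def build_hybrid_query(text: str, query_vector: list[float], k: int = 10) -> dict[str, Any]:
    """Combine a lexical match with a kNN clause; ES sums both scores."""
    return {
        "query": {"match": {"summary": {"query": text}}},  # lexical (BM25) signal
        "knn": {
            "field": "embedding",          # dense_vector field in the mapping
            "query_vector": query_vector,  # embedding of the user question
            "k": k,
            "num_candidates": 5 * k,       # wider candidate pool per shard
        },
        "size": k,
    }

# Usage with an elasticsearch-py 8.x client (assumed):
# es.search(index="suppliers", **build_hybrid_query("late shipments", vec))
```

In ES 8.x, a top-level `knn` section alongside `query` runs both and combines the scores, which is what makes the blend work for short business queries.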
What I learned:
- Blending lexical and vector signals improves relevance for short business queries.
- Grounding conversational responses in retrieved sources increases trust and clarity.
- On Windows, pinning compatible httpx/httpcore versions and running Uvicorn from the project root prevents import/runtime issues.
- Hardening Elasticsearch index creation with timeouts makes local demos more reliable.
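The grounded-answer fallback can be sketched in a few lines: when the Gemini provider is disabled, format the retrieved hits into an answer with numbered citations. The hit field names (`name`, `summary`) are hypothetical, not the real document schema.

```python
# Sketch of the local synthesis fallback: turn retrieved hits into a
# citation-annotated answer without any cloud provider.
# Hit fields ("name", "summary") are assumptions for illustration.

def synthesize_locally(question: str, hits: list[dict]) -> str:
    if not hits:
        return f"No indexed sources matched: {question}"
    lines = [f"Q: {question}", "Based on the top sources:"]
    for i, hit in enumerate(hits, start=1):
        lines.append(f"- {hit['summary']} [{i}]")   # inline citation marker
    lines.append("Sources: " + "; ".join(
        f"[{i}] {h['name']}" for i, h in enumerate(hits, start=1)))
    return "\n".join(lines)
```

Keeping every sentence tied to a `[n]` marker is what makes the answer auditable even without an LLM in the loop.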
Challenges I solved:
- Python-on-Windows dependency quirks (httpcore typing issues on Python 3.14).
- Index creation timeouts: I added longer client/index timeouts and idempotent creation.
- Frontend/backend alignment (base URL, trailing slashes) to avoid 307/308 POST redirects.
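The idempotent, timeout-hardened index creation above can be sketched as follows. The `es` argument is assumed to be an elasticsearch-py 8.x client constructed with a long client timeout (e.g. `Elasticsearch("http://localhost:9200", request_timeout=60)`); the index name and mapping are illustrative, not the real schema.

```python
# Idempotent, timeout-hardened index creation sketch.
# `es` is assumed to be an elasticsearch-py 8.x client; the mapping below
# is an illustrative assumption, not the actual SupplyGuard schema.

SUPPLIER_MAPPINGS = {"properties": {
    "summary":   {"type": "text"},                        # BM25 field
    "embedding": {"type": "dense_vector", "dims": 384},   # kNN field
}}

def ensure_supplier_index(es, index: str = "suppliers") -> bool:
    """Create the index once; safe to call on every startup."""
    if es.indices.exists(index=index):      # makes re-runs a no-op
        return False
    es.indices.create(
        index=index,
        mappings=SUPPLIER_MAPPINGS,
        timeout="60s",                      # server-side create timeout
    )
    return True
```

Checking `indices.exists` before `indices.create` means restarting the demo never races against a half-built index, and the explicit timeouts absorb a slow first start of the Docker container.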
Team:
Built solo: I designed, implemented, and integrated the entire stack and UX myself.
Built With
- Languages: Python 3.11, TypeScript
- Backend: FastAPI, Uvicorn, Pydantic
- Search: Elasticsearch 8.14 (Docker), hybrid search (BM25 + kNN over dense_vector)
- Frontend: React 18, Vite, Tailwind CSS, axios, lucide-react (icons), Markdown
- AI (optional): Google Cloud Vertex AI (Gemini) via an env-toggled provider
- Tooling/infra: Docker