Inspiration
Comparing cloud GPUs is painful: fragmented providers, inconsistent pricing and specs, and constantly changing availability. GPU Findr brings it all into one place and automates insights so developers can move faster and spend less.
What it does
- Scans and monitors GPU availability/pricing across multiple providers (Lambda Labs, RunPod, TensorDock, Vast.ai).
- Exposes a web API and frontend for GPU search and monitoring.
- Automatically generates and publishes blog content on market trends.
- Integrates with MCP (Model Context Protocol) for AI-powered interactions.
How we built it
Language & structure: Multi-component Go application with clearly separated commands:
- `cmd/api/` – HTTP server, routes, GPU + blog endpoints, MCP integration.
- `cmd/scan/` – provider scanners (`lambdaGetter.go`, `runpodGetter.go`, `tensordockGetter.go`, `vastGetter.go`) orchestrated by `scan.go`.
- `cmd/blog/` – automated content generation/publishing.
Data & platform: Supabase backend for storage and publishing.
APIs & docs: OpenAPI spec (`cmd/api/openapi.yaml`) and Swagger (`/docs/swagger.json`).
Automation: GitHub Actions nightly workflow to generate and publish blog posts.
Config: Environment variables for Supabase, OpenAI, and provider credentials (see README).
Challenges we ran into
- Normalizing heterogeneous provider schemas, price units, and spec fields.
- Handling rate limits, flaky endpoints, and deduping overlapping offers.
- Designing a stable API while the underlying provider data changes continuously.
- Automating high-quality blog content without human editing.
- Securely managing keys and operationalizing the nightly pipeline.
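The first two challenges above can be made concrete. A sketch of price-unit normalization and cheapest-offer deduplication, under the assumption that providers quote per minute, per hour, or per month (the unit strings and dedup key are illustrative):

```go
package main

import "fmt"

// hourlyUSD converts a provider's price quote to USD per hour.
func hourlyUSD(price float64, unit string) (float64, error) {
	switch unit {
	case "hour":
		return price, nil
	case "minute":
		return price * 60, nil
	case "month":
		return price / 730, nil // ~730 hours per month
	default:
		return 0, fmt.Errorf("unknown price unit %q", unit)
	}
}

type offer struct {
	Provider, GPU string
	PerHour       float64
}

// dedupe keeps the cheapest offer per (provider, GPU model) pair,
// collapsing overlapping listings from the same scan.
func dedupe(offers []offer) []offer {
	best := map[string]offer{}
	for _, o := range offers {
		key := o.Provider + "/" + o.GPU
		if cur, ok := best[key]; !ok || o.PerHour < cur.PerHour {
			best[key] = o
		}
	}
	out := make([]offer, 0, len(best))
	for _, o := range best {
		out = append(out, o)
	}
	return out
}

func main() {
	h, _ := hourlyUSD(0.02, "minute")
	fmt.Println(h) // 1.2
	offers := dedupe([]offer{
		{"vast", "RTX 4090", 0.40},
		{"vast", "RTX 4090", 0.35},
	})
	fmt.Println(len(offers), offers[0].PerHour) // 1 0.35
}
```

Returning an error for unknown units, rather than guessing, is what keeps a new provider's schema from silently corrupting comparisons.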
Accomplishments that we're proud of
- A single Go codebase that continuously scans multiple providers and surfaces actionable results.
- A clean REST API + frontend that makes search fast and transparent.
- Hands-off blog automation that turns raw market data into readable updates.
- MCP integration that lets agents query GPU data directly.
- Production hygiene: OpenAPI/Swagger docs and a working GitHub Actions pipeline.
What we learned
- Provider data is messy; robust normalization and validation pay dividends.
- Clear API contracts (OpenAPI) accelerate iteration across the stack.
- Content automation benefits from guardrails (templates, thresholds, retries).
- MCP is a powerful bridge for agent workflows when the API surface is simple.
What's next for GPU Findr
- More providers & regions to deepen coverage (expand adapters beyond the current set).
- Richer filters & ranking (e.g., VRAM thresholds, spot vs on-demand flags, sustained-use effects).
- UI/UX improvements for faster triage and side-by-side comparisons.
- Caching & performance tuning for bursty traffic and big scans.
- Observability (metrics, alerts) for data freshness and pipeline health.
- Docs & examples: end-to-end recipes for common queries and MCP agent flows.
Project site: https://gpufindr.com
Built With
- agents
- agentx
- automation
- aws-lambda
- blog-generation
- cloud
- command-line
- cost-optimization
- data-normalization
- github-actions
- go
- golang
- gpu
- html-frontend
- http-server
- json
- mcp
- model-context-protocol
- openapi
- postgresql
- rest-api
- runpod-api
- supabase
- swagger
- tensordock-api
- tidb
- vast.ai-api