Inspiration
Scrolling YouTube one night, I landed on “Kidney Disease: My Story of Kidney Failure” (https://www.youtube.com/watch?v=aOMk2r5y_0E) — a raw vlog where the creator describes the daily stress of guessing which foods are “safe” under protein, sodium, potassium, and phosphorus limits. I paused the video, opened the App Store, and realized that the existing apps for CKD patients are mostly information pamphlets dressed up as apps. There were hundreds of AI calorie counters, but zero reliable, user-friendly tools for renal patients to track their critical mineral intake. Renal Guard fills this gap: a reliable, user-friendly tool that helps patients with chronic kidney disease monitor and manage their diet and lifestyle. We hope it delivers a significant quality-of-life improvement.
What it does
Renal Guard has one simple purpose: to tell you whether the meal in front of you is safe to eat. It lets users photograph any meal and instantly see a traffic-light readout for sodium, potassium, phosphorus, water, protein, and calories. It logs each scan, graphs daily totals, and serves a bite-sized renal care tip every morning.
How we built it
Frontend – React + Redux + Vite PWA, using the browser’s getUserMedia API for instant camera access.
Backend – We route requests through Amazon API Gateway (REST API) to two lightweight AWS Lambda functions, each with a clear single responsibility. The first Lambda handles time-critical work—image upload preprocessing and synchronous external API calls—while the second asynchronously takes care of follow-up tasks such as persisting data to our PostgreSQL database. Splitting the original monolithic Lambda (≈19 s total runtime) into this pair cut user-facing response time to about 10 s – nearly a 50 % improvement. Images reside in Amazon S3, Supabase gives us quick, secure JWT-based auth and manages the Postgres instance, and Vercel’s serverless platform streamlines CI/CD with zero-downtime deploys.
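The split described above can be sketched as a fast-path handler that returns the analysis immediately and fires off the follow-up Lambda without waiting for it. In the real system the handoff is an AWS Lambda Invoke call with `InvocationType: "Event"`; here it is injected as a plain callback, and all names (`handleScan`, `persist-meal-record`) are illustrative, not our actual identifiers.

```typescript
// Sketch of the time-critical first Lambda handing follow-up work to a
// second, asynchronous Lambda. `invokeAsync` stands in for a fire-and-forget
// AWS Lambda Invoke (InvocationType "Event").

type ScanResult = { mealId: string; nutrients: Record<string, number> };

type AsyncInvoker = (functionName: string, payload: unknown) => void;

async function handleScan(
  imageKey: string,
  analyze: (key: string) => Promise<ScanResult>,
  invokeAsync: AsyncInvoker,
): Promise<ScanResult> {
  // Time-critical path: analyze the image and return to the user right away.
  const result = await analyze(imageKey);

  // Non-blocking handoff: persistence happens in the second Lambda,
  // outside the user-facing request/response cycle.
  invokeAsync("persist-meal-record", { imageKey, ...result });

  return result;
}
```

Because the second invocation is fire-and-forget, the user-facing latency is just the analysis step, which is where the roughly 50 % improvement came from.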
AI pipeline – A single multimodal call to GPT-4.1 Vision handles the entire workflow: it ingests the photo, detects and normalizes ingredients, classifies the dish, and returns a full nutrient breakdown (macros plus renal-critical minerals) as structured JSON. Consolidating what used to be two separate model invocations into one request, together with the Lambda split above, cut end-to-end analysis time from ~19 s to ~10 s without sacrificing accuracy. Careful prompt engineering (a system-role spec plus a JSON schema) keeps responses reliably parseable, and the large vision context window shows how much heavy lifting a compact multimodal LLM can do in one shot.
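The single-call pattern looks roughly like the request builder below. The schema fields, prompt text, and model string are simplified stand-ins for our real prompt; the shape (one user message carrying both text and an `image_url` part, plus a `json_schema` response format) is what makes one invocation do the work of two.

```typescript
// Illustrative request body for one multimodal, schema-constrained call.
// The schema and prompt here are simplified; the production versions are
// more elaborate.

const nutrientSchema = {
  name: "meal_analysis",
  schema: {
    type: "object",
    properties: {
      dish: { type: "string" },
      ingredients: { type: "array", items: { type: "string" } },
      sodium_mg: { type: "number" },
      potassium_mg: { type: "number" },
      phosphorus_mg: { type: "number" },
      protein_g: { type: "number" },
      calories_kcal: { type: "number" },
    },
    required: ["dish", "ingredients", "sodium_mg"],
  },
} as const;

function buildAnalysisRequest(imageUrl: string) {
  return {
    model: "gpt-4.1",
    messages: [
      {
        role: "system",
        content: "You are a renal dietitian. Reply ONLY with JSON matching the schema.",
      },
      {
        role: "user",
        content: [
          { type: "text", text: "Enumerate all ingredients and estimate nutrients for this meal." },
          { type: "image_url", image_url: { url: imageUrl } },
        ],
      },
    ],
    // Structured outputs: responses that don't match the schema are rejected
    // by the API, so the Lambda can parse the reply without defensive code.
    response_format: { type: "json_schema", json_schema: nutrientSchema },
  };
}
```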
Data – Supabase-hosted PostgreSQL is our single source of truth, holding per-meal records. Amazon S3 houses every image—raw uploads in a user/yyyy/mm/dd folder structure, plus lightweight thumbnails generated by the first Lambda. All Lambda functions stream structured JSON logs and custom latency/error metrics to CloudWatch, where dashboards and alarms give us real-time visibility and paging.
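The user/yyyy/mm/dd layout mentioned above amounts to a small key-building helper like this one (function and parameter names are illustrative):

```typescript
// Build an S3 object key following the user/yyyy/mm/dd folder convention.
// Using UTC keeps keys stable regardless of the Lambda's local timezone.
function mealImageKey(userId: string, uploadedAt: Date, fileName: string): string {
  const yyyy = uploadedAt.getUTCFullYear();
  const mm = String(uploadedAt.getUTCMonth() + 1).padStart(2, "0");
  const dd = String(uploadedAt.getUTCDate()).padStart(2, "0");
  return `${userId}/${yyyy}/${mm}/${dd}/${fileName}`;
}
```

Date-partitioned keys like these make it cheap to list one user's uploads for a given day, which is exactly the access pattern the daily-totals graphs need.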
UX/UI – Tailwind CSS, Lucide-React icons, and custom 3-D assets for a clean, mobile-first interface.
Challenges we ran into
Mobile camera quirks – iOS Safari, Android Chrome, and Samsung Browser all handle camera capture differently, so we had to write fallbacks for rotation, EXIF stripping, and auto-compression artifacts.
Because we chose a fully serverless stack and no traditional backend framework, we suddenly had to recreate conveniences that frameworks usually give you “for free,” such as routing helpers, request parsing, and above all CORS hygiene. Early on we spent hours chasing cryptic “No Access-Control-Allow-Origin” errors in the browser. Putting Amazon API Gateway in front of our Lambdas let us declare CORS headers in one place and inspect them right in the console, wiping out an entire class of 4xx headaches almost overnight.
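With API Gateway handling preflight OPTIONS requests, each Lambda proxy response still needs to carry the same Access-Control-* headers, so we centralized them in one helper. This is a minimal sketch; the origin shown is a placeholder, not our real domain.

```typescript
// Centralized CORS wrapper for Lambda proxy responses. The allowed origin
// is an illustrative placeholder.
const ALLOWED_ORIGIN = "https://renal-guard.example.app"; // assumption, not the real domain

function corsResponse(statusCode: number, body: unknown) {
  return {
    statusCode,
    headers: {
      "Access-Control-Allow-Origin": ALLOWED_ORIGIN,
      "Access-Control-Allow-Methods": "GET,POST,OPTIONS",
      "Access-Control-Allow-Headers": "Content-Type,Authorization",
    },
    body: JSON.stringify(body),
  };
}
```

Every handler returning through one function like this means a missing header is a one-line fix instead of a cross-service bug hunt.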
Accomplishments that we're proud of
13-second scan-to-score on average mobile data—measured end-to-end in the real world, not the lab.
Seamless AWS ↔ Supabase bridge with no public buckets and zero hard-coded keys.
What we learned
Event-driven beats monolith: keeping Lambda functions single-purpose made debugging and cold-start tuning straightforward.
Prompt engineering is half the battle: a single “enumerate all ingredients in JSON” instruction reduced hallucinations far better than extra training data.
Users prefer clarity over detail: most testers ignored milligram numbers until we added a simple green/amber/red badge.
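The badge logic itself is tiny. The thresholds below are hypothetical round numbers for illustration only; real renal limits are per-nutrient, set by a clinician, and patient-specific (this sketch is not medical guidance).

```typescript
// Map a per-meal sodium amount to the traffic-light badge.
// Thresholds here are HYPOTHETICAL examples, not clinical values.
type Badge = "green" | "amber" | "red";

function sodiumBadge(mgPerMeal: number): Badge {
  if (mgPerMeal <= 500) return "green"; // comfortably within a typical meal budget
  if (mgPerMeal <= 800) return "amber"; // approaching the limit
  return "red";                         // over budget
}
```

The milligram numbers stay available on tap, but the badge is what users actually act on.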
What's next for Renal Guard
- Fine-tune a lightweight on-device model so basic nutrient warnings work even without internet, boosting privacy and speed.
- FHIR / EHR integration to let nephrologists see meal logs inside their clinical dashboards.
- Barcode + voice input for packaged foods and visually-impaired users.
- Gamified streaks & caregiver sharing to increase daily-scan adherence.
- Multilingual rollout starting with Spanish and Korean, using the same AI pipeline but localized nutrient guidelines.
Built With
- amazon-web-services
- apigateway
- browserapi
- github
- javascript
- lambda
- react
- redux
- supabase
- tailwindcss
- typescript
- vercel