Inspiration
My cousin called me panicking after getting her blood test results. Her hemoglobin was flagged, her ferritin was low, and the report was covered in abbreviations she'd never seen. She didn't know if she needed to go to the ER or just eat more spinach.
That moment stuck with me. Millions of people receive medical documents every day — lab reports, prescriptions, imaging results — and have no idea what they mean. They either spiral into anxiety googling symptoms, or they ignore the results entirely until their next appointment.
I wanted to build something that could sit between the patient and the jargon — something that actually respects their intelligence while translating the complexity.
What it does
LabLens is a multimodal medical document intelligence platform powered by Amazon Nova 2 Lite. You upload a photo of any medical document and have a natural, multi-turn conversation about it.
Two modes make it genuinely useful for different audiences:
- Patient mode — skips the jargon, flags what's abnormal, explains results in plain English, and always ends with specific questions to ask your doctor
- Doctor mode — responds at a clinical peer level with ICD-10 codes, differential considerations, drug interaction flags, and evidence-based next steps
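A mode toggle like this boils down to selecting a different system prompt per audience. A minimal sketch (the prompt text and function name here are illustrative, not the app's actual prompts):

```javascript
// Illustrative mode-to-prompt mapping; the real prompts are much longer.
const SYSTEM_PROMPTS = {
  patient: 'Explain results in plain English, flag abnormal values, ' +
           'and end with specific questions to ask the doctor.',
  doctor: 'Respond at a clinical peer level: ICD-10 codes, ' +
          'differential considerations, and drug interaction flags.',
};

// Unknown or missing modes fall back to the safer patient framing.
function getSystemPrompt(mode) {
  return SYSTEM_PROMPTS[mode] ?? SYSTEM_PROMPTS.patient;
}
```

The selected prompt then goes into the `system` block of the Bedrock request shown below.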
How I built it
The entire stack is JavaScript — no Python, no pip, no virtual environments.
Backend: Node.js 22, Express 5, AWS SDK v3, Multer for image handling
Frontend: React 19, Vite, Tailwind CSS with a custom dark dashboard theme (Syne + DM Sans fonts, #0C0C0C background, #AAFF00 lime green accents)
AI: Amazon Nova 2 Lite (us.amazon.nova-2-lite-v1:0) via AWS Bedrock inference profiles
The core of the app is a streaming multimodal API call to Nova. Images are base64-encoded and passed as content blocks alongside the user's text:
```js
const requestBody = {
  schemaVersion: 'messages-v1',
  messages: [{
    role: 'user',
    content: [
      { image: { format: 'png', source: { bytes: base64Image } } },
      { text: userQuestion }
    ]
  }],
  system: [{ text: systemPrompt }],
  inferenceConfig: { max_new_tokens: 1024, temperature: 0.7 }
};
```
Responses stream in real time via Server-Sent Events — chunks arrive at decoded.contentBlockDelta.delta.text and are pushed to the client as they're generated.
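The per-chunk extraction can be isolated into a small helper. The shape below matches the decoded.contentBlockDelta.delta.text path described above; the SSE forwarding around it is an illustrative sketch, not the app's exact handler:

```javascript
// Pull the text delta out of one decoded Bedrock stream event.
// Returns null for non-text events (e.g. messageStart, metadata).
function extractDelta(decoded) {
  return decoded?.contentBlockDelta?.delta?.text ?? null;
}

// Illustrative Express handler body: forward each delta as an SSE event.
function pipeToSse(res, decodedEvents) {
  res.setHeader('Content-Type', 'text/event-stream');
  for (const decoded of decodedEvents) {
    const text = extractDelta(decoded);
    if (text !== null) res.write(`data: ${JSON.stringify({ text })}\n\n`);
  }
  res.end();
}
```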
Infrastructure: EC2 t3.micro on AWS, nginx as reverse proxy, Let's Encrypt SSL via Certbot, PM2 for process management, Route 53 for DNS (https://lablens.har5ha.in), Elastic IP for a stable address.
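One nginx detail worth noting for this setup: SSE needs response buffering disabled on the proxy, or streamed chunks get held back until the response completes. An illustrative location block (the upstream port and path are assumptions):

```nginx
location /api/ {
    proxy_pass http://127.0.0.1:3000;   # assumed Express port
    proxy_http_version 1.1;
    proxy_set_header Connection '';
    proxy_buffering off;                # flush SSE chunks immediately
    proxy_cache off;
    proxy_read_timeout 3600s;           # keep long-lived streams open
}
```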
Challenges I ran into
1. Nova's API schema is unique
Nova uses schemaVersion: 'messages-v1' — not Anthropic's format. The token parameter is max_new_tokens, not max_tokens. You must use inference profile IDs (prefixed with us.) rather than foundation model IDs or every call fails silently. None of this is obvious from the docs at first glance.
2. Scope enforcement via system prompts doesn't work
I initially tried to make Nova stay on-topic using system prompt instructions ("if the user asks anything non-medical, reply with one sentence and stop"). Nova 2 Lite ignored this completely and wrote poetry on request.
The fix was a server-side keyword guard that intercepts requests before they reach Bedrock. If the message contains no medical keywords and no image is attached, it returns the deflection instantly — zero API cost, 100% reliable.
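A minimal sketch of such a guard (the keyword list and function name here are illustrative, not the production ones):

```javascript
// Illustrative medical keyword list; the real one is presumably larger.
const MEDICAL_KEYWORDS = ['hemoglobin', 'ferritin', 'lab', 'blood',
  'prescription', 'dose', 'mg', 'diagnosis', 'test', 'result'];

// Let the request through if an image is attached or any medical
// keyword appears in the message (case-insensitive substring match).
function isInScope(message, hasImage) {
  if (hasImage) return true;
  const lower = (message || '').toLowerCase();
  return MEDICAL_KEYWORDS.some((kw) => lower.includes(kw));
}
```

On the server, a check like this runs before the Bedrock call; out-of-scope requests get the canned deflection without ever touching the API.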
3. EC2 deployment git permissions
AWS SSM runs commands as root, but the app directory was owned by ec2-user. Every git pull silently failed due to ownership mismatch. The permanent fix: chown -R root:root on the app directory so git operations work reliably from SSM across all future deployments.
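The fix amounts to two one-time commands on the instance; git's safe.directory setting is an extra step often needed when root operates on a repo it didn't clone (the path below is an example):

```shell
# Run once on the instance (path is illustrative).
sudo chown -R root:root /home/ec2-user/lablens
# Avoid git's "dubious ownership" refusal when root touches the repo.
sudo git config --global --add safe.directory /home/ec2-user/lablens
```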
4. Tailwind CSS v4 breaking changes
Tailwind v4 dropped the standalone PostCSS plugin — you now need @tailwindcss/postcss and @import "tailwindcss" instead of @tailwind base/components/utilities. The @apply directive also behaves differently inside @layer blocks without a @reference. Took some debugging to sort out.
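The v4 setup reduces to two small changes; the file contents below are the standard v4 shape, assuming a Vite + PostCSS pipeline:

```javascript
// postcss.config.js — Tailwind v4 moved its PostCSS plugin into the
// separate @tailwindcss/postcss package.
export default {
  plugins: {
    '@tailwindcss/postcss': {},
  },
};

// In the main CSS file, the old @tailwind base/components/utilities
// directives are replaced by a single line:
//   @import "tailwindcss";
```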
Accomplishments I'm proud of
- The Doctor/Patient mode toggle with genuinely different system prompts — not just a tone change, but a fundamentally different communication style tailored to two distinct audiences
- The real-time streaming UX — responses feel instant and alive, not like waiting for a loading spinner
- Building and deploying a full-stack production app with HTTPS, custom domain, and process management in under 2 hours
- The server-side guardrail approach — a pragmatic engineering decision that's more reliable than prompt engineering
What I learned
- Amazon Nova 2 Lite is genuinely capable at multimodal document understanding — it reads lab values, identifies units, and flags abnormalities with impressive accuracy
- Inference profile IDs are mandatory for Nova on Bedrock — this is not optional
- For LLM-powered apps, application-layer guardrails always beat system prompt restrictions for reliability
- Streaming SSE from Node.js to React is clean and straightforward once you understand the readable stream API
What's next for LabLens
- PDF support — most medical documents arrive as PDFs, not images
- Trend analysis — upload multiple lab reports over time and let Nova identify trends in your values
- Export — generate a plain-English summary PDF patients can bring to appointments
- Voice mode — Amazon Nova Sonic for hands-free document review
Built With
- amazon-nova-2-lite
- aws-bedrock
- aws-ec2
- aws-sdk-v3
- express.js
- nginx
- node.js
- pm2
- react
- tailwind-css
- vite
