Inspiration

CareAI was inspired by a simple observation: caregivers often photograph wounds but lack the expertise to interpret the risks those photos reveal. I aim to leverage Gemini 3's multimodal power to transform "image labeling" into "clinical reasoning," letting the AI guide users through diagnosis the way a senior specialist would.

What it does

CareAI is a wound reasoning agent. It accepts wound photo uploads alongside structured clinical assessments (e.g., exudate level, pain score). By comparing historical images, the AI generates visual evolution reports, 72-hour risk projections, and step-by-step caregiver action plans.
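
To make "structured clinical assessment" concrete, here is a minimal sketch of what one check-in record might look like; the field names and scales are illustrative assumptions, not CareAI's actual schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class WoundAssessment:
    """One structured check-in submitted with a wound photo.

    Field names and scales are illustrative assumptions; CareAI's real
    schema may differ.
    """
    photo_path: str      # path to the uploaded wound image
    taken_on: date       # date the photo was captured
    exudate_level: str   # e.g., "none", "light", "moderate", "heavy"
    pain_score: int      # self-reported pain on a 0-10 scale
    odor_present: bool   # caregiver-observed odor
    notes: str = ""      # free-text context from the caregiver

# Example: a caregiver logs a day-3 follow-up observation.
entry = WoundAssessment(
    photo_path="uploads/wound_day3.jpg",
    taken_on=date(2025, 6, 4),
    exudate_level="moderate",
    pain_score=4,
    odor_present=False,
    notes="Redness slightly reduced compared to day 1.",
)
```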

How we built it

Built with Gemini 3 Pro in Google AI Studio. By designing sophisticated System Instructions, we injected wound care specialist logic into the model, enabling it to automatically generate structured Markdown medical reports from multimodal inputs.
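
The core loop is a single multimodal generate_content call carrying both the image and the structured inputs, with the specialist logic supplied as a system instruction. The sketch below uses the google-genai Python SDK; the model ID, instruction text, and file paths are assumptions, not CareAI's actual code.

```python
# Minimal sketch: one multimodal call with specialist System Instructions.
# Model ID, prompt wording, and paths are assumptions, not CareAI's code.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

# Abridged stand-in for the full wound-care System Instructions.
SYSTEM_INSTRUCTION = (
    "You are a wound care specialist. Given wound photos and structured "
    "clinical inputs, produce a structured Markdown report covering visual "
    "evolution, a 72-hour risk projection, and a stepped caregiver action plan."
)

with open("uploads/wound_day3.jpg", "rb") as f:
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumed model ID; use the model you have access to
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
        "Structured assessment: exudate moderate, pain 4/10. Assess this wound.",
    ],
    config=types.GenerateContentConfig(system_instruction=SYSTEM_INSTRUCTION),
)
print(response.text)  # structured Markdown report
```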

Challenges we ran into

The biggest challenge was moving the AI from "image description" to "clinical reasoning." By designing a "Minimalist Indexed Consultation Mode," we enabled the AI to reason accurately even when historical images are limited.
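
The exact prompt design behind this mode is our own, but the indexing idea can be sketched as follows: each historical photo is tagged with an explicit index and date so the model can cite specific images and flag missing ones rather than speculate. The labels and wording below are illustrative assumptions.

```python
# Hedged sketch of indexed multi-image prompting (google-genai SDK).
# Index labels, dates, and prompt wording are illustrative assumptions.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

# Tag each historical photo with an index and date so the model can
# reference them precisely ("Image 2 shows...") instead of guessing.
history = [
    ("uploads/wound_day0.jpg", "Image 1, Day 0 (initial presentation)"),
    ("uploads/wound_day3.jpg", "Image 2, Day 3 (current)"),
]

contents = []
for path, label in history:
    with open(path, "rb") as f:
        contents.append(types.Part.from_bytes(data=f.read(), mime_type="image/jpeg"))
    contents.append(label)

contents.append(
    "Compare the indexed images in order. If an index is missing or an "
    "image is unclear, say so and ask for it instead of speculating."
)

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumed model ID
    contents=contents,
)
print(response.text)
```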

Accomplishments that we're proud of

As a nursing student crossing over into AI, I'm proud to have built an interactive clinical reasoning system that actively requests care context instead of jumping to conclusions.

What we learned

We learned the critical value of Gemini 3’s long-context window for tracking disease progression. The core of medical AI isn't just algorithmic complexity, but its ability to perform causal reasoning like a human.

What's next for CareAI – Gemini 3 Multimodal Wound Reasoning Agent

Next, we will introduce voice interaction and expand specialized logic branches (e.g., diabetic foot, surgical incisions), making CareAI an affordable "pocket wound specialist" for every household.

Built With

Gemini 3 Pro, Google AI Studio