
## Inspiration

Understory is designed to transform the initial clinical encounter by shifting the burden of data synthesis from the physician to an AI-augmented workflow. By evolving generic medical analysis into a "Care Plan Recommendation Engine," the platform aims to reclaim clinical time for direct patient interaction, ultimately strengthening the doctor-patient relationship through increased presence and empathy. A major driving force was promoting community well-being: reducing the cognitive load on healthcare providers helps mitigate physician burnout.

## What it does

Understory is a next-generation "Live Agent" orchestrator that streamlines patient intake. Acting as an interruptible voice-first clinical co-pilot, it combines real-time human-in-the-loop web speech interaction with a diagnostic 3D surface model and Gemini's deep reasoning. 

Practitioners can use a Three.js-powered interactive 3D body map for localized clinical notation and the Web Speech API to dictate notes. The platform uses `gemini-2.5-flash` natively and via `@google/adk` to process this multimodal symptom data, instantly producing synthesized, actionable clinical strategies. These strategies are organized by distinct diagnostic lenses (Overview, Interventions, Monitoring, Education). Crucially, it uses an "Interactive Task Bracketing" double-click state machine (Normal, Added, Removed) that empowers doctors to rapidly vet and customize these AI recommendations before they are acted upon.
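As an illustrative sketch only (the names below are hypothetical, not the project's actual API), the "Interactive Task Bracketing" cycle can be modeled as a tiny three-state machine in which each double-click advances a recommendation to the next vetting state:

```typescript
// The three bracketing states a recommendation can be in.
type BracketState = "normal" | "added" | "removed";

// Transition table: each double-click moves to the next state in the cycle.
const NEXT: Record<BracketState, BracketState> = {
  normal: "added",    // accept the AI suggestion into the plan
  added: "removed",   // strike it from the plan
  removed: "normal",  // reset it for reconsideration
};

// Handler invoked on double-click of a recommendation card.
function onDoubleClick(state: BracketState): BracketState {
  return NEXT[state];
}
```

Keeping the transitions in a plain lookup table makes the vetting loop trivially auditable, which matters when the goal is a transparent co-pilot rather than an autonomous decision-maker.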

## How we built it

We built the application using a modern, reactive architecture:
*   **Framework:** Angular v21.1 (Signals-based, Zoneless) with Server-Side Rendering (SSR) & Client-Side Hydration.
*   **Intelligence:** Powered by the Google GenAI SDK (`gemini-2.5-flash`) and orchestrated via the Google Agent Development Kit (`@google/adk`) using specialized `LlmAgent` experts.
*   **Visualization & Speech:** Three.js for 3D anatomical modeling and the Web Speech API for bi-directional voice interaction.
*   **Integrations:** Google Programmable Search Engine (CSE) and NIH PubMed E-utilities for auxiliary clinical context.
*   **Styling:** Tailwind CSS following a premium, minimalist "Dieter Rams" design system.
*   **Deployment:** An Express.js backend fully deployed on Google Cloud Run.
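To illustrate how the multimodal intake data flows into the model, here is a hedged sketch (function and field names are our own, assumed for illustration) of assembling dictated speech and body-map annotations into a single prompt for `gemini-2.5-flash`:

```typescript
// A localized annotation captured from the 3D body map (illustrative shape).
interface BodyMapNote {
  region: string; // anatomical region clicked on the Three.js model
  note: string;   // practitioner's dictated or typed observation
}

// Compose the dictated transcript and body-map notes into one prompt
// covering the four diagnostic lenses used by the platform.
function buildIntakePrompt(transcript: string, notes: BodyMapNote[]): string {
  const localized = notes.map((n) => `- ${n.region}: ${n.note}`).join("\n");
  return [
    "You are a clinical co-pilot. Synthesize a care plan with four lenses:",
    "Overview, Interventions, Monitoring, Education.",
    "Dictated intake:",
    transcript,
    "Localized body-map notes:",
    localized,
  ].join("\n");
}
```

The resulting string would then be passed to the Google GenAI SDK as the request contents; structuring the prompt around the four lenses keeps the model's output aligned with the UI's tabbed layout.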

## Challenges we ran into

One of the most significant challenges was balancing bleeding-edge AI orchestration with the strict UX demands of a modern progressive web application. In particular, stabilizing the AI's clinical generations required deep integration with `@google/adk`'s `InMemoryRunner`.

On the frontend, dealing with complex mobile constraints required us to master CSS viewport units (like `100dvh`) to restore native scrolling. We also had to implement robust `@media print` rules to ensure our structured offline clinical stationery printed correctly.

## Accomplishments that we're proud of

*   **Performance:** Achieving a top-tier mobile performance score (100/100 Lighthouse) through diligent layout unblocking and dynamic asset loading.
*   **Extreme Responsiveness:** Creating a responsive UI that scales down to extremely constrained viewports, such as a Pixel Watch 2 at 286px width, for ultra-portable clinical referencing.
*   **Human-in-the-Loop Architecture:** Successfully implementing the "Interactive Task Bracketing" system, ensuring the AI strictly acts as a transparent co-pilot rather than an autonomous decision-maker.
*   **Data Portability & Offline Access:** Engineering the ability to export patient states as Unicode-safe Base64 encoded FHIR Bundles. We are also proud of our "Printable Clinical Stationery," which generates CSS Grid-optimized, multi-page physical printouts featuring Halftone body maps for secure, offline record-keeping.
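The Unicode-safe Base64 export mentioned above can be sketched with the standard `TextEncoder` + `btoa` pattern (a common approach; the function names here are illustrative, not the project's actual code). A naive `btoa(JSON.stringify(bundle))` throws on non-Latin-1 characters, so the JSON is first encoded to UTF-8 bytes:

```typescript
// Encode a FHIR Bundle as Unicode-safe Base64: JSON → UTF-8 bytes → Base64.
function encodeBundle(bundle: object): string {
  const json = JSON.stringify(bundle);
  const bytes = new TextEncoder().encode(json); // UTF-8 bytes, handles any Unicode
  let binary = "";
  for (const b of bytes) binary += String.fromCharCode(b); // byte-per-char string
  return btoa(binary);
}

// Reverse the pipeline: Base64 → UTF-8 bytes → JSON.
function decodeBundle(encoded: string): object {
  const binary = atob(encoded);
  const bytes = Uint8Array.from(binary, (c) => c.charCodeAt(0));
  return JSON.parse(new TextDecoder().decode(bytes));
}
```

Because clinical notes routinely contain accented names and medical symbols, round-tripping through UTF-8 rather than relying on Latin-1 is what makes the export safe to paste into a URL fragment or QR code.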

## What we learned

Building this platform completely changed how we approach state management. We learned the profound importance of prioritizing granular, reactive UI signals from day one. The project also deepened our respect for advanced CSS techniques—not only for handling tricky mobile viewports but also for translating a digital interface into a physical, printable medium. 

## What's next for Pocket Gull (Understory)

Our roadmap is guided by our **Kaizen Philosophy**—the belief that clinical tools, like the practitioners who use them, should improve through continuous, incremental refinement.
*   **Incremental Intelligence:** We will use the interactive bracketing data to allow doctors to continuously improve the AI's baseline output.
*   **Iterative Design:** We will constantly polish the UI to reduce cognitive load, ensuring every pixel serves a clinical purpose.
*   **Evolving Integration:** While we currently prioritize high-integrity manual data handling, our next major step is building the bridges for automated, high-privacy biometric telemetry.
