PRISM - Personality-Rooted Informatics System for Meaningful Health Insights

Inspiration

We wanted a single place where community, everyday health signals, and what is happening around you feel connected, not another siloed tracker. LA Hacks and campus life made it obvious: people already share progress and look for events and resources nearby, but those stories rarely sit next to real location context or private, fast AI that respects the moment they are in.

What it does

PRISM is a connected wellness demo built as an Expo React Native app (iOS-first for the full pipeline). Users can:

  • Publish image-based progress posts to a community board, with optimized images for feed display.
  • See upcoming and past community events and map pins sourced from Ticketmaster and Eventbrite, merged in one place.
  • Get on-device suggestions and summaries tied to health and trend data on supported builds.
  • Optionally sync post metadata through Firebase when configured, while still working in a local-first mode when it is not.

The product story is simple: connect how you feel, what you share, and what is around you without forcing everything through a generic cloud chatbot.

How we built it

  • Client: InsightsScreenExpo - React Native (Expo), with a custom dev client where native modules (Zetic, HealthKit) are required.
  • Backend: Firebase Cloud Functions (HTTPS) for event aggregation, geocoding enrichment, and the “nearby context” API used when posting.
  • Data: Firestore for community posts when Firebase env vars are set; graceful degradation when they are not.
  • Media: Cloudinary for uploads and on-the-fly image transformations (see Sponsors).
  • On-device AI: Zetic native bridge running a Gemma 4-class model for text generation over captions, metrics, and insights payloads (see Sponsors).
  • Location-aware layer: “Arista”-named HTTP integration to our functions for context + events (see Sponsors).
  • Health: Apple HealthKit on real iPhones for Insights-oriented flows (capabilities and Info.plist usage strings documented in the repo).
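The "graceful degradation" point above can be sketched as a small config gate. This is an illustrative sketch only: the env var names (`EXPO_PUBLIC_FIREBASE_PROJECT_ID`, `EXPO_PUBLIC_FIREBASE_API_KEY`) and the `SyncMode` type are assumptions, not the repo's actual identifiers.

```typescript
// Hypothetical sketch: enable optional Firebase sync only when every
// required env var is present; otherwise run in local-first mode.
type SyncMode = { kind: "firebase"; projectId: string } | { kind: "local" };

function resolveSyncMode(env: Record<string, string | undefined>): SyncMode {
  const projectId = env.EXPO_PUBLIC_FIREBASE_PROJECT_ID;
  const apiKey = env.EXPO_PUBLIC_FIREBASE_API_KEY;
  // Missing config is not an error: the app simply stays local-first.
  if (projectId && apiKey) return { kind: "firebase", projectId };
  return { kind: "local" };
}
```

The point of centralizing this check is that every caller sees one explicit mode instead of sprinkling `if (firebase)` guards through the UI.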

Sponsors

Figma Make

We started exploration in ChatGPT with a tight visual prompt: it excelled at mood, layout, and polish for a hero screen and supporting UI, but the output was intentionally illustrative: beautiful reference art rather than something you can ship as production React Native.

We exported that as a PNG, brought it into Google Stitch to refine composition, typography rhythm, and small interaction cues, and tried Stitch’s export-to-code path (including its design handoff and MCP-style workflow). Stitch helped us iterate visually, but the generated structure and tokens did not map cleanly to our Expo stack, so we treated that step as design truth, not implementation truth.

We then moved the Stitch work into Figma Make, which bridged the gap: it respected the refined layout and component intent well enough to produce a credible UI scaffold we could align with our design system and navigation. From there, Figma Make’s export gave us a strong starting point in code, real components, spacing, and hierarchy we could refactor, wire to data, and harden for HealthKit, Zetic, and our backend calls. In practice, Figma Make turned “pretty mock” into shippable UI velocity without locking us into a dead-end codegen path.

Cloudinary

We use unsigned uploads with an upload preset: the app POSTs the post image to Cloudinary and stores secure_url and public_id. From public_id we build delivery URLs with transformations—f_auto / q_auto for format and quality, c_fill for feed (e.g. 1080×1080) and thumbnail (e.g. 360×360 with g_auto for smart crop). The UI prefers the feed variant when present so lists stay fast and sharp without storing multiple files ourselves.
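The delivery-URL pattern above can be shown as a tiny helper. The cloud name and exact transformation strings here are illustrative, assuming Cloudinary's standard `res.cloudinary.com/<cloud>/image/upload/<transforms>/<public_id>` URL shape:

```typescript
// Sketch: build Cloudinary delivery URLs from a stored public_id.
// CLOUD_NAME is a placeholder, not the project's real cloud.
const CLOUD_NAME = "demo";

function deliveryUrl(publicId: string, transform: string): string {
  return `https://res.cloudinary.com/${CLOUD_NAME}/image/upload/${transform}/${publicId}`;
}

// One uploaded asset, two shapes: feed and smart-cropped thumbnail.
const feedUrl = deliveryUrl("posts/abc123", "f_auto,q_auto,c_fill,w_1080,h_1080");
const thumbUrl = deliveryUrl("posts/abc123", "f_auto,q_auto,c_fill,g_auto,w_360,h_360");
```

Because the variants are just URL strings, nothing extra is stored server-side; the feed and thumbnail renditions are derived on first request and cached by the CDN.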

Google Gemma (on-device via Zetic)

Core language features run on iPhone through Zetic’s native module, with the model identity configured as gemma-4-E2B-it (app.config.js). The same native entry points power: auto descriptions and tags for progress posts from caption + context + image hints; short wellness copy from stress / glucose / heart-rate style inputs; and Insights-style narrative from structured trend fields passed in a dedicated analysis shape. If the native stack is missing or fails, we use deterministic fallbacks so the app never dead-ends.
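The "never dead-ends" behavior can be sketched as a wrapper around the native call. The `nativeGenerate` parameter and the tag heuristic below are hypothetical stand-ins for the real Zetic bridge and fallback logic:

```typescript
// Sketch: wrap the (hypothetical) native generation call so a missing or
// failing native stack degrades to a deterministic fallback.
type ProgressMetadata = { description: string; tags: string[] };

async function describeProgress(
  caption: string,
  nativeGenerate?: (prompt: string) => Promise<ProgressMetadata>,
): Promise<ProgressMetadata> {
  if (nativeGenerate) {
    try {
      return await nativeGenerate(caption);
    } catch {
      // Native stack unavailable or crashed: fall through deterministically.
    }
  }
  // Deterministic fallback: reuse the caption and derive tags from its words.
  const tags = caption
    .toLowerCase()
    .split(/\W+/)
    .filter((w) => w.length > 3)
    .slice(0, 3);
  return { description: caption, tags };
}
```

The caller never needs to know which path produced the metadata; both return the same shape.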

Zetic / Melange (on-device inference)

Because health data is personal, we treat on-device inference as primary for privacy and data-security reasons: preloadModel() at startup for latency, then generateProgressMetadata / generateAdvisorSuggestion for the flows above. Cloud is used for media (Cloudinary) and optional sync (Firebase), not as the main reasoning engine for those strings, matching an edge-first, cloud-secondary story. Outputs are trimmed and sanitized for stable UI.
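The trim/sanitize step mentioned above is simple but worth pinning down. This is a sketch of one plausible implementation; the 280-character cap is an assumed UI limit, not a documented one:

```typescript
// Sketch: normalize model output before it reaches the UI, assuming an
// illustrative 280-character cap for card-sized copy.
function sanitizeModelOutput(raw: string, maxLen = 280): string {
  // Collapse runs of whitespace/newlines so layout stays predictable.
  const cleaned = raw.replace(/\s+/g, " ").trim();
  if (cleaned.length <= maxLen) return cleaned;
  return cleaned.slice(0, maxLen - 1).trimEnd() + "…";
}
```

Capping and whitespace-collapsing at one choke point means no individual screen has to defend against a model that rambles or emits stray newlines.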

Arista Networks (“Connect the Dots”)

We implemented an “Arista” integration layer: the app calls our deployed nearbyEventContext (POST: lat, lon, time, community) and communityEvents (GET: lat, lon, tab, community). Those functions route and unify data: curated/scored nearby-style resource and event context for posts and on-device prompts, plus a merged Ticketmaster + Eventbrite event list with sorting and geocoding when coordinates are missing. That gives one JSON contract to the client while multiple providers sit behind it.
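The client side of that contract can be sketched as one typed call. The base URL, response shape, and injectable `fetchImpl` here are illustrative assumptions, not the repo's actual client:

```typescript
// Sketch of the client side of the nearbyEventContext contract.
type NearbyContext = { context: unknown; events: unknown[] };

type FetchLike = (
  url: string,
  init?: { method?: string; headers?: Record<string, string>; body?: string },
) => Promise<{ ok: boolean; status: number; json(): Promise<unknown> }>;

async function fetchNearbyContext(
  baseUrl: string,
  params: { lat: number; lon: number; time: string; community: string },
  // Injectable for testing; defaults to the platform fetch.
  fetchImpl: FetchLike = fetch as unknown as FetchLike,
): Promise<NearbyContext> {
  const res = await fetchImpl(`${baseUrl}/nearbyEventContext`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(params),
  });
  if (!res.ok) throw new Error(`nearbyEventContext failed: ${res.status}`);
  return (await res.json()) as NearbyContext;
}
```

Keeping providers behind one POST endpoint means the app only ever parses this one shape, no matter how many sources sit behind it.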

Challenges we ran into

  • Native + Expo: HealthKit and Zetic require a dev client and correct prebuild / pods; Expo Go alone cannot carry the full story.
  • Reliability across providers: Ticketmaster and Eventbrite can fail or rate-limit independently, so we use parallel fetches with tolerant merging and surface errors in metadata where useful.
  • Grounding AI without over-trusting outputs: We pass structured context (scores, CSV trend points, etc.) and cap string length; we gate “use event in copy” on confidence so the model does not over-claim.
  • Config sprawl: Cloudinary, Firebase, Zetic, and Arista URLs each need env discipline; .env.example documents the shape, and real keys stay out of git.
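The tolerant-merging approach in the reliability bullet maps naturally onto `Promise.allSettled`. A minimal sketch, with provider names and the result shape as assumptions:

```typescript
// Sketch: fetch all providers in parallel; keep whatever succeeded and
// record per-provider errors as metadata instead of failing the whole list.
type ProviderResult<T> = { events: T[]; errors: string[] };

async function fetchAllTolerant<T>(
  providers: Record<string, () => Promise<T[]>>,
): Promise<ProviderResult<T>> {
  const names = Object.keys(providers);
  const settled = await Promise.allSettled(names.map((n) => providers[n]()));
  const events: T[] = [];
  const errors: string[] = [];
  settled.forEach((r, i) => {
    if (r.status === "fulfilled") events.push(...r.value);
    else errors.push(`${names[i]}: ${String(r.reason)}`);
  });
  return { events, errors };
}
```

Unlike `Promise.all`, `allSettled` never rejects, so one rate-limited provider can't take down the merged event list.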

Accomplishments that we're proud of

  • A coherent pipeline from photo → Cloudinary → optional Firestore → feed-optimized URLs.
  • Real multi-provider events on a map and in lists, with backend normalization and geocoding.
  • On-device Gemma via Zetic for post intelligence and advisor-style copy, with explicit fallbacks and preload for UX.
  • A clear separation of concerns: edge for sensitive / latency-sensitive language generation; cloud for media, sync, and data aggregation.

What we learned

  • On-device LLMs change the privacy story for captions and advice, but integration and rebuild cycles are the hidden cost: bridges and capabilities have to be right before the demo shines.
  • URL-level image transforms are underrated: one upload, many shapes, with auto format/quality, huge for mobile performance.
  • Aggregation APIs are product features: users do not care which ticket API an event came from; they care that it is one list, one map, one post story.

What's next for PRISM

  • Deeper HealthKit surfaces (trends, goals, notifications) tied to the same on-device advisor loop.
  • Stronger offline behavior for reading the feed and drafts when the network drops.
  • Richer Arista context (more resource types, real POI or campus feeds) while keeping scoring explainable.
  • Android parity for Zetic where the stack allows, or a documented cloud fallback path only for non-supported devices.
  • Hardening: tests around aristaClient / zeticClient contracts, and tighter typing of function JSON responses end-to-end.