GLP-1 Face Track: A Journey into Digital Health Monitoring
Inspiration
The idea for GLP-1 Face Track emerged from a growing concern in the medical community and in social media discussions: "Ozempic face", the noticeable facial volume loss that many patients experience while taking GLP-1 agonists such as Ozempic, Wegovy, and Mounjaro.
In 2025, a pivotal research paper by Sharma RK et al. titled "Radiographic Midfacial Volume Changes in Patients on GLP-1 Agonists" provided quantitative evidence: patients lose approximately 7% midfacial volume per 10kg of weight loss. This finding sparked a question:
Could we create an accessible tool that helps patients track these changes themselves, without expensive medical imaging equipment?
The goal was ambitious: transform a smartphone camera into a tool for longitudinal facial volume monitoring, democratizing access to health insights that were previously only available through clinical CT scans.
What I Learned
1. The Mathematics of Facial Analysis
The project deepened my understanding of computer vision and geometric analysis. Key concepts included:
Shoelace Formula for Area Calculation:
For a polygon with vertices $(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)$, the area is:
$$A = \frac{1}{2} \left| \sum_{i=1}^{n-1} (x_i y_{i+1} - x_{i+1} y_i) + (x_n y_1 - x_1 y_n) \right|$$
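As a quick sanity check, the shoelace formula translates into a few lines of JavaScript (a standalone sketch, not the app's actual module):

```javascript
// Shoelace formula: absolute area of a simple polygon given as [x, y] pairs.
function polygonArea(points) {
  let sum = 0;
  for (let i = 0; i < points.length; i++) {
    const [x1, y1] = points[i];
    const [x2, y2] = points[(i + 1) % points.length]; // wraps x_n back to x_1
    sum += x1 * y2 - x2 * y1;
  }
  return Math.abs(sum) / 2;
}
```

A unit square yields an area of 1, and a right triangle with legs 4 and 3 yields 6, matching the closed-form result.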
Z-Depth Volume Estimation:
MediaPipe's Face Mesh provides 468 landmarks, each with an estimated Z-coordinate representing depth. The volume proxy uses:
$$V_{region} = \sum_{i \in region} A_i \cdot |z_i - z_{reference}|$$
Where $A_i$ is the area weight and $z_{reference}$ is a reference depth (typically nose bridge).
Inter-Pupillary Distance Normalization:
To account for varying camera distances, all metrics are normalized by the inter-pupillary distance (IPD):
$$\text{NormalizedIndex} = \frac{\text{RawIndex}}{\text{IPD}}$$
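Putting the two formulas together, a minimal sketch might look like this (function and parameter names are illustrative, not the app's actual API):

```javascript
// Volume proxy: sum of per-landmark area weights times depth offset
// from a reference landmark (e.g. the nose bridge).
function regionVolumeProxy(landmarks, regionIndices, areaWeights, zReference) {
  return regionIndices.reduce(
    (sum, idx, k) => sum + areaWeights[k] * Math.abs(landmarks[idx].z - zReference),
    0
  );
}

// Normalize any raw index by the inter-pupillary distance (2D here for brevity).
function normalizeByIPD(rawIndex, leftPupil, rightPupil) {
  const ipd = Math.hypot(rightPupil.x - leftPupil.x, rightPupil.y - leftPupil.y);
  return rawIndex / ipd;
}
```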
2. Pose Estimation Challenges
Head pose significantly affects measurement accuracy. I implemented Euler angle estimation:
- Yaw ($\alpha$): Left-right rotation
- Pitch ($\beta$): Up-down tilt
- Roll ($\gamma$): Side tilt
The acceptable tolerance was set to $\pm 10°$ for all angles, ensuring consistent measurements across sessions.
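The tolerance gate itself is a one-liner per angle; something along these lines (a sketch with illustrative names):

```javascript
const POSE_TOLERANCE_DEG = 10;

// Accept a frame only if all three Euler angles are within ±10° of neutral.
function isPoseAcceptable({ yaw, pitch, roll }) {
  return [yaw, pitch, roll].every(
    (angle) => Math.abs(angle) <= POSE_TOLERANCE_DEG
  );
}
```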
3. Privacy-First Architecture
Building a health app taught me the importance of data sovereignty:
- Local-First Design: IndexedDB stores all data on the user's device
- Optional Cloud Sync: Supabase integration is opt-in, not required
- No Analytics Tracking: User behavior is never monitored
4. Multi-Language Support
Implementing i18n (internationalization) revealed the complexity of localization:
```javascript
const translations = {
  'zh-TW': { capture: '拍攝', dashboard: '儀表板' },
  'ja':    { capture: '撮影', dashboard: 'ダッシュボード' },
  'en':    { capture: 'Capture', dashboard: 'Dashboard' }
};
```
Voice guidance required not just translation, but culturally appropriate phrasing.
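A recurring detail in i18n work is falling back to a default language when a key is missing for the active one. A minimal lookup helper might look like this (the `t` function is illustrative, not the app's exact code):

```javascript
const translations = {
  'zh-TW': { capture: '拍攝', dashboard: '儀表板' },
  'ja':    { capture: '撮影', dashboard: 'ダッシュボード' },
  'en':    { capture: 'Capture', dashboard: 'Dashboard' }
};

// Look up a key in the active language, falling back to English.
function t(lang, key) {
  return translations[lang]?.[key] ?? translations['en'][key];
}
```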
How I Built It
Architecture Overview
┌─────────────────────────────────────────────────────────┐
│ Browser (Frontend) │
│ ┌─────────┐ ┌──────────┐ ┌──────────┐ ┌─────────┐ │
│ │ Camera │ │ FaceMesh │ │ Metrics │ │ UI │ │
│ │ Module │─▶│ Module │─▶│ Engine │─▶│ Module │ │
│ └─────────┘ └──────────┘ └──────────┘ └─────────┘ │
│ │ │ │
│ └──────────────────────────────────────────┘ │
│ │ │
│ ┌───────────▼───────────┐ │
│ │ IndexedDB (Local) │ │
│ └───────────┬───────────┘ │
└──────────────────────────┼───────────────────────────────┘
│ (optional)
▼
┌─────────────────────────┐
│ Supabase (Cloud) │
│ • Authentication │
│ • PostgreSQL + RLS │
│ • Storage Bucket │
└─────────────────────────┘
Technology Stack
| Component | Technology | Why |
|---|---|---|
| Face Detection | MediaPipe Face Mesh | 468 landmarks, runs in-browser, no API calls |
| Styling | TailwindCSS (CDN) | Rapid prototyping, small bundle |
| Charts | Chart.js | Lightweight, responsive |
| Local Storage | IndexedDB | Structured data, large capacity |
| Cloud Backend | Supabase | PostgreSQL + Auth + Storage in one |
| Deployment | Vercel | Automatic builds, environment variables |
| Language | Vanilla JavaScript | No build step needed, fast loading |
Key Implementation Decisions
1. No Framework (Vanilla JS)
Given that this is a single-page app of moderate complexity, a framework like React would add unnecessary overhead. The entire app, including the MediaPipe models, downloads in under 2 MB.
2. Modular Architecture
Each concern is a separate module:
- `camera.js` - WebRTC camera access
- `faceMesh.js` - MediaPipe integration
- `poseEstimator.js` - Head pose calculation
- `metrics.js` - Volume calculations
- `analytics.js` - Health scores, predictions
- `storage.js` - IndexedDB wrapper
- `visualization.js` - Chart.js integration
3. Region-Based Volume Analysis
Following the research paper's methodology, I defined anatomical regions:
```javascript
const MIDFACE_REGIONS = {
  upperCheek: {
    left:  [116, 117, 118, 119, 120, 121, 187, 205, 36, 142],
    right: [345, 346, 347, 348, 349, 350, 411, 425, 266, 371]
  },
  // ... more regions
};
```
4. Merz Aesthetics Scale Mapping
To make results clinically interpretable, I mapped volume changes to the Merz Scale (0-4):
| Score | Severity | Volume Change |
|---|---|---|
| 0 | None | > -5% |
| 1 | Mild | -5% to -15% |
| 2 | Moderate | -15% to -25% |
| 3 | Noticeable | -25% to -35% |
| 4 | Severe | < -35% |
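The table translates directly into a threshold function (a sketch of the mapping, taking the percentage change as input, where negative values mean loss):

```javascript
// Map a percentage midface volume change to a Merz score (0-4).
function merzScore(volumeChangePct) {
  if (volumeChangePct > -5)  return 0; // None
  if (volumeChangePct > -15) return 1; // Mild
  if (volumeChangePct > -25) return 2; // Moderate
  if (volumeChangePct > -35) return 3; // Noticeable
  return 4;                            // Severe
}
```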
Prediction Model
Based on the research correlation ($\rho = 0.590$ between weight loss and superficial volume loss), I implemented a prediction formula:
$$\Delta V_{predicted} = \frac{\Delta W}{10} \times 7\%$$
Where $\Delta W$ is weight change in kg.
For a patient planning to lose 15kg: $$\Delta V = \frac{15}{10} \times 7\% = 10.5\% \text{ predicted volume loss}$$
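In code, the prediction is essentially a one-liner (a sketch; the function name is illustrative):

```javascript
// ΔV_predicted = (ΔW / 10) × 7, returned as a percentage of midface volume.
function predictedVolumeLossPct(weightLossKg) {
  return (weightLossKg / 10) * 7;
}
```

For the 15 kg example above, this returns 10.5.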
Challenges Faced
Challenge 1: MediaPipe Initialization Timing
Problem: MediaPipe models load asynchronously, causing race conditions.
Solution: Implemented a promise-based initialization with retry logic:

```javascript
async init(retries = 3) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      this.faceMesh = new FaceMesh({
        onResults: (results) => this.handleResults(results)
      });
      await this.faceMesh.initialize();
      return; // success
    } catch (err) {
      if (attempt === retries) throw err; // give up after the last attempt
    }
  }
}
```
Challenge 2: Z-Depth Accuracy
Problem: MediaPipe's Z-coordinates are estimates, not true depth measurements.
Solution: Used relative Z-depths within a single frame and normalized by IPD. This doesn't give absolute volume but provides consistent trend tracking.
Challenge 3: Storage Path Configuration
Problem: When deploying to Vercel, the static file server couldn't access files outside the src/ directory.
Solution: Created a build script that generates config.js into src/js/ based on environment variables:
```javascript
// build-config.js
const fs = require('fs');

fs.writeFileSync(
  'src/js/config.js',
  `window.SUPABASE_URL = '${process.env.SUPABASE_URL}';`
);
```
Challenge 4: Row Level Security (RLS) Complexity
Problem: Supabase's RLS policies are powerful but tricky to get right.
Solution: Carefully crafted policies ensuring users can only access their own data:
```sql
CREATE POLICY "Users can view own measurements"
  ON measurements FOR SELECT
  USING (auth.uid() = user_id);
```
Challenge 5: Mobile Browser Camera Constraints
Problem: iOS Safari imposes strict requirements on `getUserMedia`: it only works in a secure (HTTPS) context, video elements need the `playsinline` attribute to avoid fullscreen takeover, and camera access is best requested from an explicit user gesture.
Solution:
- Only request the camera on an explicit button tap
- Use the `playsinline` attribute on the video element
- Provide clear error messages when permission is denied
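Put together, camera startup might look like the following sketch (the error name comes from the MediaStream spec; the message copy and function name are illustrative):

```javascript
// Must be invoked from a user-gesture handler (e.g. a button's click event).
async function startCamera(videoEl) {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({
      video: { facingMode: 'user' },
      audio: false
    });
    videoEl.setAttribute('playsinline', ''); // keep playback inline on iOS Safari
    videoEl.srcObject = stream;
    await videoEl.play();
  } catch (err) {
    if (err.name === 'NotAllowedError') {
      // Surface a clear message instead of failing silently
      console.error('Camera permission denied. Enable it in browser settings.');
    } else {
      throw err;
    }
  }
}
```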
Challenge 6: Multi-Language Voice Guidance
Problem: Web Speech API support varies across browsers and languages.
Solution: Implemented graceful degradation with feature detection:
```javascript
async speak(text, lang) {
  if (!('speechSynthesis' in window)) {
    console.warn('Speech synthesis not supported');
    return;
  }
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.lang = lang;
  window.speechSynthesis.speak(utterance);
}
```
Results
After several iterations, the app now provides:
- Real-time face detection with pose guidance
- Accurate trend tracking for volume changes
- Weight-volume correlation visualization
- PDF report generation for medical consultations
- Cross-device sync via Supabase
Live at: https://midface-volume-measure.vercel.app
Future Improvements
- 3D Reconstruction: Integrate depth-sensing cameras for true volumetric analysis
- Clinical Validation: Partner with dermatologists to calibrate thresholds
- AI Insights: Use machine learning to predict individual trajectories
- Intervention Suggestions: Recommend skincare routines based on detected changes
Conclusion
GLP-1 Face Track demonstrates that accessible technology can bridge the gap between clinical research and everyday health monitoring. By combining computer vision, thoughtful UX, and privacy-first design, we've created a tool that empowers patients to understand their bodies better.
The project taught me that the most impactful applications often come from addressing real-world problems with simple, accessible solutions. Sometimes, you don't need complex infrastructure - just the right combination of existing technologies applied thoughtfully.
"The best measure of a person's health is not what they weigh, but how they feel in their own skin."
Built With
- canvas-api
- html5
- mediapipe
- web-speech