Inspiration

1 in 4 women with endometrial damage will never conceive — not because a treatment doesn't exist, but because it isn't precise enough.

Ultrasound-Guided Intrauterine Hydrogel Injection (UG-IHI) is an emerging regenerative therapy that delivers a biodegradable hydrogel loaded with platelet-rich plasma (PRP), mesenchymal stem cells (MSCs), or exosomes directly to the damaged uterine lining. But two problems remain unsolved: how do you guide the injection safely in real time, and how do you know which formulation is right for this specific patient?

We built EndoRegen AI to answer both.

What it does

EndoRegen AI is a two-module clinical decision support system for endometrial regeneration:

Module 1 — UG-IHI AR Assistant (Real-Time Surgical Guidance)

  • Overlays optimal hydrogel injection sites on live ultrasound feed using computer vision
  • Tracks transcervical catheter position frame-by-frame
  • Fires instant perforation risk alerts (SAFE → CAUTION → WARNING → CRITICAL) when the catheter approaches danger zones
  • Verifies post-injection gel coverage across the uterine cavity
  • Generates an AI-written clinical report at the end of the procedure
  • Runs entirely on-device (edge AI) — no cloud dependency mid-surgery
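
The four-level alert flow described above can be sketched as a simple threshold classifier plus an overlay controller; the distance thresholds, class names, and acknowledgment flow here are illustrative assumptions, not the shipped values:

```python
from enum import Enum

class AlertLevel(Enum):
    SAFE = 0
    CAUTION = 1
    WARNING = 2
    CRITICAL = 3

# Hypothetical distance thresholds (mm) from catheter tip to the nearest
# danger zone; real cutoffs would come from clinical validation.
THRESHOLDS_MM = [
    (10.0, AlertLevel.SAFE),
    (5.0, AlertLevel.CAUTION),
    (2.0, AlertLevel.WARNING),
]

def classify_risk(distance_mm: float) -> AlertLevel:
    """Map tip-to-danger-zone distance to an alert level."""
    for cutoff, level in THRESHOLDS_MM:
        if distance_mm >= cutoff:
            return level
    return AlertLevel.CRITICAL

class OverlayController:
    """Freezes the AR overlay on CRITICAL until the physician acknowledges."""
    def __init__(self):
        self.halted = False

    def on_frame(self, distance_mm: float) -> AlertLevel:
        level = classify_risk(distance_mm)
        if level is AlertLevel.CRITICAL:
            self.halted = True  # halt overlay, await acknowledgment
        return level

    def acknowledge(self):
        self.halted = False
```

The key design point is that the controller fails closed: a CRITICAL frame latches the halted state, and only an explicit physician action clears it.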

Module 2 — Personalized Hydrogel Formulation Recommender

  • Analyzes patient-specific factors: age, endometrial thickness, damage severity, hormone levels, and treatment history
  • Recommends the optimal hydrogel payload — PRP vs MSCs vs exosomes — and dosage
  • Uses SHAP and LIME to explain every recommendation in plain clinical language
  • Continuously improves via offline Reinforcement Learning trained on treatment outcomes

Together, the two modules tell a physician where to inject and what to inject, before and during the procedure.

How we built it

AR Guidance System

  • YOLOv8 for real-time catheter tip detection at 30 fps on-device
  • Faster R-CNN as a verification layer for edge-case frames
  • Unity + ARKit/ARCore for the AR overlay interface on medical-grade tablets
  • ONNX Runtime + TensorRT for edge AI deployment
  • Modular LLM prompt architecture (antigravity design) — 6 isolated AI prompt modules, each handling one task, preventing context overflow and hallucination chains
  • IMU + ultrasound timestamp sync for sensor fusion
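
The IMU + ultrasound timestamp sync can be sketched as nearest-timestamp matching within a tolerance; the function name, tolerance, and dropping policy below are our illustrative assumptions, not the production implementation:

```python
from bisect import bisect_left

def sync_streams(us_timestamps, imu_timestamps, tol_ms=10.0):
    """Pair each ultrasound frame with the nearest IMU sample in time.

    Both inputs are sorted timestamps in milliseconds. Frames with no
    IMU sample within `tol_ms` are dropped rather than fused with stale
    orientation data.
    """
    pairs = []
    for t in us_timestamps:
        i = bisect_left(imu_timestamps, t)
        # Candidates: the IMU samples just before and just after t.
        best = None
        for j in (i - 1, i):
            if 0 <= j < len(imu_timestamps):
                if best is None or abs(imu_timestamps[j] - t) < abs(imu_timestamps[best] - t):
                    best = j
        if best is not None and abs(imu_timestamps[best] - t) <= tol_ms:
            pairs.append((t, imu_timestamps[best]))
    return pairs
```

Dropping unmatched frames is a deliberate choice here: for surgical guidance, a missing overlay frame is safer than an overlay positioned with out-of-date orientation.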

Hydrogel Recommender

  • XGBoost + LightGBM for tabular patient data classification
  • Stable Baselines3 for offline Reinforcement Learning on synthetic outcome data
  • SHAP + LIME for explainable AI — so clinicians understand why a formulation was recommended
  • Patient data pipeline with anonymization built in
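
A minimal sketch of the explainability step, turning SHAP-style signed feature contributions into a clinician-readable sentence; the feature names, templates, and wording are hypothetical stand-ins for the real system's vocabulary:

```python
# Hypothetical plain-language labels for model features.
TEMPLATES = {
    "endometrial_thickness_mm": "endometrial thickness",
    "age_years": "patient age",
    "prior_failed_cycles": "number of prior failed cycles",
}

def explain(recommendation: str, contributions: dict, top_k: int = 2) -> str:
    """Render SHAP-like contributions as a plain-language rationale.

    `contributions` maps feature name -> signed contribution toward the
    recommended formulation (positive = supports it).
    """
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    parts = []
    for name, value in ranked[:top_k]:
        label = TEMPLATES.get(name, name.replace("_", " "))
        direction = "supported" if value > 0 else "weighed against"
        parts.append(f"{label} {direction} this choice")
    return f"Recommended {recommendation}: " + "; ".join(parts) + "."
```

Ranking by absolute contribution and keeping only the top few factors keeps the explanation short enough to read mid-consultation.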

Challenges we ran into

  • Antigravity prompt design — a single large AI prompt mid-surgery causes latency spikes and context crashes. We broke the system into 6 atomic prompt modules, each with strict token budgets and isolated responsibilities, so no single call can bring the pipeline down.
  • No real patient data — we trained on synthetic ultrasound data and simulated patient records. Building realistic synthetic datasets that reflect clinical diversity was harder than expected.
  • Latency vs accuracy tradeoff — running YOLOv8 at 30 fps on a tablet while simultaneously computing risk scores required careful parallelization of the prompt modules.
  • Explainability for clinical trust — raw SHAP values mean nothing to a fertility specialist. Translating model outputs into actionable clinical language took significant prompt engineering.
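
The atomic prompt-module idea can be sketched as follows; the module names, budgets, and the crude whitespace token count are illustrative stand-ins for the real tokenizer and configuration:

```python
from dataclasses import dataclass

@dataclass
class PromptModule:
    """One isolated prompt module with a hard token budget."""
    name: str
    token_budget: int

    def build(self, template: str, **fields) -> str:
        prompt = template.format(**fields)
        # Whitespace split is a placeholder for a real tokenizer.
        if len(prompt.split()) > self.token_budget:
            # Refuse to emit an over-budget prompt rather than risk a
            # latency spike or context overflow mid-procedure.
            raise ValueError(f"{self.name}: prompt exceeds {self.token_budget} tokens")
        return prompt

# Six atomic modules, each owning exactly one responsibility
# (names and budgets are hypothetical).
MODULES = [
    PromptModule("injection_site_summary", 256),
    PromptModule("risk_alert_phrasing", 64),
    PromptModule("coverage_assessment", 256),
    PromptModule("procedure_report", 512),
    PromptModule("formulation_rationale", 256),
    PromptModule("patient_history_digest", 256),
]
```

Because each module validates its own budget and raises locally, an overflow in one module is contained instead of cascading through the pipeline.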

Accomplishments that we're proud of

  • Designed a full 6-module antigravity AI architecture where each module is independently testable, replaceable, and crash-safe
  • Built a real-time risk alert system with 4 severity levels that halts the AR overlay on CRITICAL events and awaits physician acknowledgment
  • Integrated offline Reinforcement Learning into a clinical recommendation pipeline, still rare at this stage of development
  • Created a system that addresses both the procedural and pharmacological sides of a single therapy — most tools solve one or the other
  • Grounded the entire project in a published research paper on UG-IHI endometrial regeneration

What we learned

  • Modular AI architecture isn't just good engineering — in medical settings, it's a safety requirement
  • Explainability (SHAP/LIME) is not optional in clinical AI; a black-box recommendation will never be trusted by a physician
  • Edge AI deployment forces you to think about every millisecond — cloud-first thinking breaks in the operating room
  • Women's health is one of the most underserved areas in medical AI, and the technical gaps are real and solvable

What's next for EndoRegen AI — AR Guidance & Hydrogel Formulation System

  • Synthetic dataset expansion — partnering with reproductive medicine researchers to generate clinically validated training data
  • Phantom validation — testing the AR guidance system on uterine phantoms before any human trial
  • IRB/ethics submission — preparing documentation for an approved pilot study in a fertility clinic setting
  • Regulatory pathway research — understanding FDA De Novo and CE MDR routes for AI-assisted surgical guidance tools
  • Real-world RL feedback loop — connecting the recommender to anonymized outcome data from actual UG-IHI procedures as they are published in literature
