In high-impact, high-risk areas like medicine and health, human experts can help keep users of AI-powered systems safe, and can also provide valuable validation and training labels for evaluating and improving foundation models, such as those available via the Perplexity Sonar API.

Onna Health is a women's-health-focused app that helps solve the "long tail" problem of search by suggesting a Perplexity AI-generated answer to user-submitted health questions, augmented by human expert verification on top of community upvoting and downvoting.
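As a rough illustration of how expert verification can sit on top of community voting, here is a minimal TypeScript sketch. The `Answer` shape, `trustScore` function, and the specific weights are hypothetical, chosen only to show the principle that one physician signal should outweigh many anonymous votes; this is not Onna's actual data model.

```typescript
// Hypothetical trust model: physician endorsements/corrections are
// weighted far above community votes. All names and weights here are
// illustrative assumptions, not the production implementation.

interface Answer {
  id: string;
  body: string;
  expertEndorsements: number; // verified physicians who endorsed
  expertCorrections: number;  // verified physicians who flagged issues
  upvotes: number;
  downvotes: number;
}

// One endorsement counts like many upvotes; a single expert
// correction is a strong negative signal.
function trustScore(a: Answer): number {
  const expert = 10 * a.expertEndorsements - 25 * a.expertCorrections;
  const community = a.upvotes - a.downvotes;
  return expert + community;
}

// Order answers so expert-verified ones surface first.
function rankAnswers(answers: Answer[]): Answer[] {
  return [...answers].sort((x, y) => trustScore(y) - trustScore(x));
}
```

Under this scheme an answer with a single physician endorsement outranks one with several anonymous upvotes, which is exactly the "reward experts for being experts" idea discussed below.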

Rarely in medicine are questions as simple as "find urgent care." Instead they are nuanced and require elaboration. It is this additional context that matters and changes answers: questions go beyond "How do you treat cancer?" to misinformation-prone, loaded queries like "Don't carrots cure cancer?"

A hyperscale search engine might implement a nice-sounding, alliterative policy: Remove, Raise, Reward, Reduce. That is, set and enforce policy to remove content that violates it, raise authoritative voices, reward trusted creators, and reduce the spread of borderline content. The problem is choosing who, exactly, is authoritative. Governments? Every government, all the time, regardless of political beliefs? Which creators? The ones that lead to the most ads and clicks? Instead, we can follow how real life works: rather than rewarding celebrity or popularity, we ought to reward experts for being experts, not for being famous or having strong marketing and branding operations.

ONNA Women's Health (女性の健康, "women's health") was originally built on a completely different framework (Flutter, using FlutterFlow). It still focuses on women's health Q&A vetted by trusted doctors. The original version was created by students from Barnard College / Columbia University and Johns Hopkins, along with physicians from the USA and Japan. For the Perplexity hackathon, two team members continued onward to refactor the application onto Vercel, take advantage of the v0.dev "vibe coding" platform, and upgrade the system to use the Perplexity Sonar API for better factuality, with Deep Research references for higher-complexity questions and medical scenarios. This in turn helps experts better understand the rationale behind AI-generated answers, so they can offer endorsements or corrections.
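The Sonar/Deep Research split described above can be sketched as a small TypeScript helper. The endpoint and payload shape follow Perplexity's chat-completions API; the routing rule (ordinary questions to `sonar`, higher-complexity scenarios to `sonar-deep-research`) and the system prompt are our assumptions for illustration, not documented behavior.

```typescript
// Sketch of routing a health question to the Perplexity Sonar API.
// Model routing and prompt wording are illustrative assumptions.

type SonarModel = "sonar" | "sonar-deep-research";

// Route higher-complexity medical scenarios to Deep Research.
function pickModel(complex: boolean): SonarModel {
  return complex ? "sonar-deep-research" : "sonar";
}

function buildSonarRequest(question: string, model: SonarModel) {
  return {
    model,
    messages: [
      {
        role: "system",
        content: "Answer the women's health question factually and cite sources.",
      },
      { role: "user", content: question },
    ],
  };
}

async function askSonar(question: string, apiKey: string, complex = false) {
  const res = await fetch("https://api.perplexity.ai/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildSonarRequest(question, pickModel(complex))),
  });
  return res.json(); // response includes the answer text and citation URLs
}
```

Keeping the request builder separate from the network call makes the routing logic easy to test, and the returned citations are what experts review when deciding whether to endorse or correct an answer.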

Onna Health for the Perplexity hackathon is built with Vercel, Supabase, and the Perplexity Sonar API. We also intend to use the Doximity API for medical-expert verification, but honestly did not have time to finish that step. Doximity is protective (in a good way) of its auth.doximity.com "Sign in with Doximity" API, which makes it all the better for verifying U.S. physician users via a standard OpenID Connect (OIDC) identity layer on top of OAuth 2.0 authorization.
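For context on what that unfinished step would involve, here is the first leg of a standard OIDC authorization-code flow sketched in TypeScript. The auth.doximity.com host comes from the writeup above; the `/oauth/authorize` path, scope value, and parameter choices are generic OIDC placeholders, not Doximity's documented endpoints.

```typescript
// Sketch of building an OIDC authorization-code request URL for
// "Sign in with Doximity". The path and scope are placeholder
// assumptions based on the generic OIDC flow, not Doximity's docs.

function buildAuthorizeUrl(
  clientId: string,
  redirectUri: string,
  state: string
): string {
  const params = new URLSearchParams({
    response_type: "code", // authorization-code grant
    client_id: clientId,
    redirect_uri: redirectUri,
    scope: "openid",       // OIDC identity scope
    state,                 // CSRF protection, verified on callback
  });
  return `https://auth.doximity.com/oauth/authorize?${params.toString()}`;
}
```

After the physician signs in, the callback would exchange the returned code for an ID token, giving the app a verified physician identity without ever handling medical credentials itself.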

Please also see our non-technical user-stories design video, which shaped how we think about a user moving through a platform like Onna across an entire series of user stories: https://youtu.be/B6muZqcOD2M

In future iterations, Onna Health may use Perplexity AI for more accurate translation. While some machine translation proceeds word-for-word, biomedical translation is difficult, particularly in Japanese and other high-context languages, where meaning depends on the preceding and following sentences and full paragraphs. Translating full answers in context with Perplexity AI is important for bridging the gap across cultures and languages, for women and users everywhere.
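One way to realize that context-aware translation is simply to send the whole answer together with its question in a single prompt, rather than translating sentence by sentence. The helper below is a hypothetical sketch; the prompt wording and function name are our assumptions.

```typescript
// Sketch of a context-aware translation prompt: the full answer plus
// its originating question go in one request, so a high-context
// language like Japanese is translated with surrounding meaning intact.
// Prompt wording is illustrative, not a tuned production prompt.

function buildTranslationPrompt(
  question: string,
  answer: string,
  targetLang: string
): string {
  return [
    `Translate the following medical answer into ${targetLang}.`,
    `Preserve clinical meaning; use the question as context rather than translating word-for-word.`,
    `Question: ${question}`,
    `Answer: ${answer}`,
  ].join("\n");
}
```

The resulting prompt could be sent through the same Sonar chat-completions endpoint used for answering, keeping the translation pipeline on one API.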

Built With

  • api
  • perplexity
  • sonar
  • supabase
  • v0
  • vercel