Inspiration

Phone calls are something most people take for granted — scheduling a doctor's appointment, disputing a bill, reporting a maintenance issue. But for many autistic adults, phone calls are a massive barrier. The unpredictability, implied social meaning, real-time processing demands, and sensory overload make calls so overwhelming that people end up with untreated medical conditions, accumulated debt, lost housing, and missed job opportunities — not because they can't handle life, but because they can't handle the phone call standing in the way.

We built PhoneAngel because no one should lose access to healthcare, housing, or employment because of a phone call.

What it does

PhoneAngel meets users where they are with three modes:

  • Call Prep — Before the call, it generates a visual conversation flowchart, word-for-word opening scripts, a list of likely questions with pre-filled answers from your profile, and anxiety notes explaining what's normal (hold music, transfers, silence).
  • Live Coach — During the call, real-time coaching appears on screen. It translates confusing phrases into plain English ("bear with me" = "please wait"), suggests responses, auto-fills info like your insurance or DOB, and provides reassurance during hold times.
  • AI Proxy — When a call feels impossible, the AI makes it for you via Twilio. You set the objective and decision boundaries, review the plan, and confirm. Afterward, you get a full transcript, summary, and list of anything that needs your approval.

How we built it

  • Backend: Python 3.12 with FastAPI, using WebSockets for real-time live coaching and REST endpoints for prep and proxy modes. SQLModel + SQLite for storing user profiles and call sessions.
  • AI: DigitalOcean Gradient AI — Serverless Inference for chat completions and Managed Agents for each of the three modes (prep, coach, proxy). We used a Knowledge Base with common phone call scripts and decoded phrases as RAG context.
  • Frontend: React 18 with TypeScript and TailwindCSS. Conversation flowcharts render as interactive trees. The live coach mode streams coaching messages over WebSocket in real time.
  • Speech-to-Text: Deepgram for real-time transcription during live coaching (the hackathon demo currently runs on simulated transcript input).
  • Telephony: Twilio for outbound calls in proxy mode.
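To give a feel for the real-time coaching channel, here is a hedged sketch of what one coaching message pushed over the WebSocket might look like. The field names and `CoachMessage` type are assumptions for illustration, not the actual wire protocol:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical shape for a single live-coach message sent over the
# WebSocket; field names are illustrative, not PhoneAngel's real schema.
@dataclass
class CoachMessage:
    kind: str         # e.g. "translation", "suggestion", "reassurance", "autofill"
    text: str         # what to show on screen
    source: str = ""  # the caller phrase that triggered it, if any

def to_wire(msg: CoachMessage) -> str:
    """Serialize a coaching message for a WebSocket send."""
    return json.dumps(asdict(msg))

# Example: the agent decodes "bear with me" and pushes a translation
wire = to_wire(CoachMessage(kind="translation",
                            text="please wait",
                            source="bear with me"))
```

Keeping each message small and typed like this lets the React frontend decide how (and whether) to surface it, which matters for the cognitive-load concerns discussed next.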

Challenges we ran into

  • Getting the AI to output structured JSON reliably for flowchart generation — we had to implement markdown fence stripping and JSON error recovery in the Gradient client.
  • Designing the live coaching experience to feel helpful without being overwhelming — the whole point is reducing cognitive load, so adding too many on-screen prompts would defeat the purpose.
  • Balancing the proxy mode's autonomy with user control — the AI needs enough freedom to handle a real conversation but must stay strictly within the decision boundaries the user sets.
  • Making the knowledge base content genuinely useful for autistic users — we had to decode dozens of common phone phrases and explain what's "normal" during calls in literal, concrete language.
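The JSON-recovery logic mentioned above can be sketched roughly like this. This is a simplified stand-in for the actual Gradient client code, assuming replies that may arrive wrapped in markdown fences or padded with prose:

```python
import json
import re

def parse_model_json(raw: str) -> dict:
    """Recover a JSON object from a model reply that may be wrapped in
    markdown fences or surrounded by prose (illustrative helper, not the
    actual Gradient client implementation)."""
    # Strip a leading ```json (or bare ```) fence and a trailing ``` fence
    text = re.sub(r"^```(?:json)?\s*|\s*```$", "", raw.strip())
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        # Fall back to the first {...} span found anywhere in the text
        match = re.search(r"\{.*\}", text, re.DOTALL)
        if match:
            return json.loads(match.group(0))
        raise
```

A two-step recovery like this (strip fences first, then scan for a brace-delimited span) handled most of the malformed flowchart responses we saw without needing to re-prompt the model.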

Accomplishments that we're proud of

  • The three-mode design that meets users at every comfort level — from "I can do this with some help" to "I need someone else to do this for me."
  • A knowledge base built with genuine understanding of autistic communication needs — literal language, decoded idioms, sensory considerations, and anxiety management baked into every response.
  • Real-time coaching that actually reduces cognitive load instead of adding to it.
  • Profile auto-fill that eliminates the panic of being asked "what's your date of birth?" mid-call when your mind goes blank.

What we learned

  • Accessibility isn't just about screen readers and color contrast — it's about removing invisible barriers that lock people out of everyday life.
  • The biggest impact often comes from the simplest features. Pre-filling your insurance ID number or explaining that hold music is normal can be life-changing for someone who's been avoiding a medical appointment for months.
  • DigitalOcean Gradient's managed agents made it possible to build three distinct AI-powered modes without managing separate model deployments.

What's next for PhoneAngel

  • Real Deepgram integration — fully streaming speech-to-text during live coaching instead of simulated transcript input.
  • Inbound call support — coaching for unexpected incoming calls, not just planned outbound ones.
  • Call history insights — patterns and progress tracking so users can see themselves getting more comfortable over time.
  • Community phone scripts — letting users share successful call scripts for specific scenarios (e.g., "calling Cigna to dispute a claim").
  • Mobile app — bringing PhoneAngel to where calls actually happen, with overlay coaching that works during a real phone call.
  • Expanded accessibility — supporting other communities who struggle with phone calls, including people with social anxiety, ADHD, or speech disorders.
