Inspiration

Rural areas in Canada often have very limited access to doctors. That lack of care inspired me to create a browser-based AI assistant that offers basic health guidance.

What it does

HealthVoice lets users speak their symptoms in the browser. It captures speech with the browser’s SpeechRecognition API, sends the transcribed text to Amazon Bedrock, and displays the AI-generated health advice as plain text in the UI.
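A minimal sketch of the speech-capture step, assuming Chrome’s prefixed `webkitSpeechRecognition` constructor; the `extractTranscript` helper and the `en-CA` language setting are illustrative, not necessarily the project’s actual code.

```javascript
// Pull the latest transcript out of a SpeechRecognition result event.
// Pure function, so it also runs outside the browser.
function extractTranscript(event) {
  const latest = event.results[event.results.length - 1];
  return latest[0].transcript.trim();
}

// Browser-only wiring (guarded so the file also loads under Node).
if (typeof window !== "undefined") {
  const Recognition =
    window.SpeechRecognition || window.webkitSpeechRecognition;
  const recognition = new Recognition();
  recognition.lang = "en-CA";
  recognition.interimResults = false;

  recognition.onresult = (event) => {
    const question = extractTranscript(event);
    // question is then sent to the backend that calls Amazon Bedrock.
    console.log("User asked:", question);
  };

  recognition.start();
}
```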

How we built it

  • Voice input: Used the browser’s SpeechRecognition API to record spoken questions.
  • AI processing: Sent the transcribed text to Amazon Bedrock to generate a simple health response.
  • Displaying results: Showed Bedrock’s reply to the user as text.
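The Bedrock step above can be sketched as follows, assuming a Claude model invoked through `@aws-sdk/client-bedrock-runtime`. The model ID, prompt wording, and `buildRequestBody` helper are assumptions for illustration, not the project’s actual values.

```javascript
// Build the JSON request body in the Anthropic Messages format that
// Claude models on Bedrock expect.
function buildRequestBody(question) {
  return JSON.stringify({
    anthropic_version: "bedrock-2023-05-31",
    max_tokens: 300,
    messages: [
      {
        role: "user",
        content:
          "Give brief, general health guidance (not a diagnosis) for: " +
          question,
      },
    ],
  });
}

// Server-side usage (requires AWS credentials; shown for shape only):
//
//   import { BedrockRuntimeClient, InvokeModelCommand }
//     from "@aws-sdk/client-bedrock-runtime";
//
//   const client = new BedrockRuntimeClient({ region: "us-east-1" });
//   const response = await client.send(new InvokeModelCommand({
//     modelId: "anthropic.claude-3-haiku-20240307-v1:0",
//     contentType: "application/json",
//     body: buildRequestBody("I have a sore throat"),
//   }));
//   const reply = JSON.parse(new TextDecoder().decode(response.body))
//     .content[0].text;
```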

Challenges we ran into

  • AI safety: Ensuring responses are clear, accurate, and appropriately cautious for a health context.

Accomplishments that we're proud of

  • Delivered a working helper for rural users that needs neither high‑bandwidth connections nor video calls.
  • Created a browser-based, voice-first health assistant powered only by speech recognition and Amazon Bedrock, minimising AWS service costs.

What we learned

  • How to capture and process voice input in the browser using the Web Speech API.
  • A better understanding of AWS and how to use Bedrock models.

What's next for Health Voice AI

  • Feedback button: Let users indicate if the advice helped.