1. The Problem: Communication as a Luxury

Millions of people globally struggle to speak due to ALS, cerebral palsy, or stroke-induced aphasia. They aren't voiceless, but current augmentative and alternative communication (AAC) technology often renders them so. Traditional AAC software requires hunting through deep, generic menus—the same for a child in school as for an adult at home. By the time a user finds "I want to share my perspective," the conversational moment has passed, making real-time connection nearly impossible.


2. The Solution: Personal Independence

ConnectAble is a full-stack system that learns a user’s unique voice and patterns. After a week of use, it recognizes morning routines and favorite Friday orders, surfacing likely phrases in a single tap. By retraining nightly on location and time-of-day signals, it evolves from a static tool into a personal extension. Beyond speaking, it acts as a digital bridge to independence, allowing users to control their environment and schedule without constant caregiver intervention.
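The nightly retraining described above can be sketched as a simple context-frequency model: log each spoken phrase against a (time-of-day, location) bucket, then rank suggestions for the current context by those counts. The class and method names below are illustrative assumptions; the actual stack layers LLM prediction (Ollama/phi3) and embeddings on top of signals like these.

```python
from collections import Counter, defaultdict

class ContextModel:
    """Ranks phrases by how often the user spoke them in a similar context."""

    def __init__(self):
        # (time_bucket, location) -> Counter of phrases
        self.counts = defaultdict(Counter)

    @staticmethod
    def bucket(hour):
        # Coarse time-of-day buckets: night / morning / afternoon / evening
        return ("night", "morning", "afternoon", "evening")[hour // 6]

    def log(self, phrase, hour, location):
        """Called whenever a phrase is spoken; feeds the nightly retrain."""
        self.counts[(self.bucket(hour), location)][phrase] += 1

    def suggest(self, hour, location, k=3):
        """Top-k phrases for the current context."""
        return [p for p, _ in self.counts[(self.bucket(hour), location)].most_common(k)]

model = ContextModel()
for _ in range(5):
    model.log("I want water", 8, "home")   # Sarah's morning routine
model.log("I want coffee", 8, "home")
model.log("Order pizza", 19, "home")

print(model.suggest(8, "home"))   # morning-at-home suggestions, most frequent first
```

Even this frequency-only baseline explains the "one tap" behavior in the scenarios: a routine phrase dominates its context bucket after a few repetitions.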


3. A Real Scenario

8 AM at home: Sarah, who has ALS, taps "I" then "want." ConnectAble instantly suggests "water" based on her routine. One tap speaks the phrase and logs the data.

2 PM at the hospital: Her daughter calls. Recognizing the medical setting and time, the system prioritizes "I am at the hospital, will call you later" in the suggestion bar for an instant response.

Evening: Sarah wants to order pizza. She taps the pizza and order icons and delegates the task to the built-in agent, which places the order for her.

Late night: Sarah regains autonomy via "Agent Chat," typing "Remind me to take meds at 8 AM." The AI executes the command directly, bypassing complex menus.
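The "Agent Chat" step above implies parsing a natural-language command into a structured action. Here is a minimal regex-based sketch for the reminder case only; this is an assumption about the pipeline, since the real agent presumably routes unmatched requests to the local LLM instead of a fixed pattern.

```python
import re

REMINDER = re.compile(
    r"remind me to (?P<task>.+?) at (?P<hour>\d{1,2})(?::(?P<minute>\d{2}))? ?(?P<ampm>am|pm)?",
    re.IGNORECASE,
)

def parse_command(text):
    """Parse a reminder command into a structured action.

    Returns None when the text is not a reminder; a real agent would
    fall back to the LLM for free-form requests.
    """
    m = REMINDER.search(text)
    if not m:
        return None
    hour = int(m.group("hour")) % 12
    if (m.group("ampm") or "").lower() == "pm":
        hour += 12
    return {
        "action": "set_reminder",
        "task": m.group("task").strip(),
        "hour": hour,
        "minute": int(m.group("minute") or 0),
    }

print(parse_command("Remind me to take meds at 8 AM"))
```

The structured dict is what "executes the command directly, bypassing complex menus" would consume downstream.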


4. Three Tabs, One System

Tab        | Functionality
AAC Board  | 66 emoji-labeled buttons with real-time AI predictive suggestions.
Agent Chat | Natural language interface to "Call mom" or "Order pizza" independently.
Profile    | Customization: voice modes, dyslexia fonts, and high-contrast visuals.

5. What Makes ConnectAble Different

Feature    | Legacy AAC software      | ConnectAble
Vocabulary | Static and generic       | Dynamic and personalized; learns individual patterns
Speed      | 30–45 seconds per phrase | 4–5 seconds via LLM prediction
Privacy    | No data stored           | Fully offline, encrypted on-device data
Autonomy   | Communication only       | Integrated AI Agent for tasks and reminders
Context    | None                     | Understands location, time, and flow
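The privacy row maps to a local database on the device. The sketch below uses the stdlib sqlite3 module so it runs anywhere; in the actual stack, the sqlcipher3 package listed under Built With would replace sqlite3 and make the PRAGMA key statement encrypt the file at rest. The table and column names are illustrative assumptions.

```python
import sqlite3  # with sqlcipher3 installed: import sqlcipher3.dbapi2 as sqlite3

def open_log(path=":memory:", key=None):
    """Open the on-device phrase log (schema is an illustrative assumption)."""
    conn = sqlite3.connect(path)
    if key:
        # Effective only under the sqlcipher3 driver; plain SQLite
        # silently ignores unknown PRAGMAs.
        conn.execute(f"PRAGMA key = '{key}'")
    conn.execute(
        """CREATE TABLE IF NOT EXISTS phrase_log (
               phrase    TEXT NOT NULL,
               spoken_at TEXT DEFAULT CURRENT_TIMESTAMP,
               location  TEXT
           )"""
    )
    return conn

conn = open_log()
conn.execute(
    "INSERT INTO phrase_log (phrase, location) VALUES (?, ?)",
    ("I want water", "home"),
)
rows = conn.execute("SELECT phrase, location FROM phrase_log").fetchall()
print(rows)
```

Keeping the log in a single encrypted file is also what allows the nightly retraining to happen fully on-device, without any cloud round trip.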

6. Real-World Impact

For those with speech disabilities, cutting response time from 30 seconds to 4 seconds rescues the ability to be spontaneous, humorous, and present. ConnectAble transforms "patients" back into active participants in their own lives.

By automating daily tasks and providing caregivers with data-driven health insights—such as shifts in thirst or energy levels—ConnectAble doesn't just provide a voice; it restores the independence and dignity that speech loss often takes away.

7. The Future: Scaling Autonomy

Adaptive Input Integration: Support for specialized hardware, including eye-tracking, sip-and-puff switches, and future BCI (Brain-Computer Interface) linkages to bypass physical limitations.

Agent Delegation: Expanding "Agent Chat" to communicate directly with external AI ecosystems, allowing the system to independently manage smart homes, medical refills, and scheduling.

Active Context Curation: Real-time "listening" to identify ongoing discussion topics, automatically surfacing relevant vocabulary to keep the user synced with the current conversation.

Hybrid Cloud Backend: A secure, encrypted infrastructure to synchronize preferences across devices while utilizing cloud-based LLM training to refine the user’s personal linguistic model.

Built With

  • chromadb
  • claude-code
  • css
  • elevenlabs
  • fastapi
  • llm
  • lovable
  • microsoft-azure-geolocation
  • node.js
  • ollama
  • phi3-msft-model
  • python
  • pyttsx3
  • react
  • sentence-transformers
  • sqlcipher3
  • typescript
  • vite