Inspiration

The MindWeaver project was born from a desire to help people with disabilities bridge the gap between thought and action. I’ve always been fascinated by the idea that technology could read human intent and transform it into tangible actions — whether that’s typing a message, controlling smart devices, or navigating the web.

The inspiration struck while reading about brain-computer interfaces (BCIs) and how large language models, like OpenAI's, can interpret natural language. I thought:

“What if we could skip the typing, the voice commands, and simply think what we want to do?”

This idea became MindWeaver — an assistant that listens to your brainwaves, interprets your thoughts, and acts accordingly.

What it does

MindWeaver turns noisy human intent (e.g., brain or comparable biosignals plus minimal gestures and voice) into reliable digital actions by orchestrating perception, intent decoding, planning, and tool use, built on GPT-OSS components.
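
A minimal sketch of that four-stage flow, with toy stand-ins for the learned components (all names, toy logic, and the 0.8 confidence threshold are assumptions for illustration, not the actual implementation):

```python
"""Illustrative MindWeaver-style pipeline: perceive -> decode -> plan/execute."""
from dataclasses import dataclass

@dataclass
class Intent:
    action: str        # e.g. "toggle_lights"
    confidence: float  # decoder confidence in [0, 1]

def perceive(signals: dict[str, float]) -> dict[str, float]:
    # Stand-in for multimodal fusion: clamp each channel into [0, 1].
    return {k: min(max(v, 0.0), 1.0) for k, v in signals.items()}

def decode_intent(features: dict[str, float]) -> Intent:
    # Stand-in for the learned decoder: pick the strongest channel.
    channel, strength = max(features.items(), key=lambda kv: kv[1])
    return Intent(action=f"toggle_{channel}", confidence=strength)

def plan_and_execute(intent: Intent) -> str:
    # Stand-in for the tool-augmented planner; real actions go through tools.
    return f"executed {intent.action}"

def handle(signals: dict[str, float]) -> str:
    intent = decode_intent(perceive(signals))
    if intent.confidence < 0.8:  # assumed threshold, not from the write-up
        return f"asking user to confirm {intent.action}"
    return plan_and_execute(intent)

print(handle({"lights": 0.9, "tv": 0.3}))  # -> executed toggle_lights
print(handle({"lights": 0.5, "tv": 0.3}))  # -> asking user to confirm toggle_lights
```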

How we built it

We built MindWeaver as a modular system that fuses brain, gaze, speech, and context signals into a shared embedding, decodes user intent with a multimodal model, and uses a tool-augmented LLM to plan and execute actions safely. A dual-agent setup (planner + verifier) ensures that every action passes safety checks before execution, while lightweight confirmations (blink, dwell, keyword) keep it accessible. Personalization is achieved through on-device adapters and federated updates with differential privacy, and the interface adapts dynamically using principles like Fitts’s law to reduce effort. Together, these layers turn small, noisy signals into reliable digital actions for independence and accessibility.
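
A minimal sketch of that planner + verifier pattern, with a hypothetical tool allowlist standing in for the real LLM-backed agents and safety checks (the Plan structure and tool names are assumptions for clarity):

```python
"""Illustrative dual-agent pattern: planner proposes, verifier gates execution."""
from dataclasses import dataclass

@dataclass
class Plan:
    tool: str   # e.g. "smart_home.lights"
    args: dict  # tool arguments

ALLOWED_TOOLS = {"smart_home.lights", "messages.draft"}  # assumed allowlist
SENSITIVE_TOOLS = {"messages.draft"}                     # require confirmation

def plan(intent: str) -> Plan:
    # Stand-in for the planner agent (an LLM with tool schemas in practice).
    if "light" in intent:
        return Plan("smart_home.lights", {"state": "on"})
    return Plan("messages.draft", {"text": intent})

def verify(p: Plan) -> tuple[bool, bool]:
    # Stand-in for the verifier agent: (plan is safe?, needs confirmation?)
    return p.tool in ALLOWED_TOOLS, p.tool in SENSITIVE_TOOLS

def execute(intent: str, confirmed: bool = False) -> str:
    p = plan(intent)
    safe, needs_confirmation = verify(p)
    if not safe:
        return f"blocked: {p.tool} is not on the allowlist"
    if needs_confirmation and not confirmed:
        return f"awaiting blink/dwell/keyword confirmation for {p.tool}"
    return f"ran {p.tool} with {p.args}"

print(execute("turn on the light"))          # ran smart_home.lights ...
print(execute("tell Sam I'm running late"))  # awaiting ... confirmation
```

Separating the verifier from the planner means a misdecoded intent gets blocked or escalated to a lightweight confirmation instead of executing directly.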

Challenges we ran into

The main challenges we ran into while building MindWeaver were decoding noisy brain and multimodal signals reliably, ensuring safety when the system controls sensitive tools like email or smart-home devices, and balancing accessibility with low user effort. We also struggled to personalize models without compromising privacy, to keep real-time performance on edge devices, and to design confirmations that are both lightweight and error-proof.
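
To make the first of those challenges concrete, the snippet below shows a generic band-pass cleaning step of the kind that precedes any decoding. The sampling rate, band, and test signal are assumptions, and our actual chain also used ICA/CSP, so treat this as a sketch rather than our pipeline:

```python
"""Generic SciPy band-pass sketch for noisy EEG-like channels."""
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250.0  # assumed sampling rate in Hz

def bandpass(x: np.ndarray, lo: float = 8.0, hi: float = 30.0) -> np.ndarray:
    """Keep the mu/beta band (8-30 Hz) commonly used in motor-imagery BCI."""
    b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band")
    return filtfilt(b, a, x)  # zero-phase filtering, no added lag

t = np.arange(0, 2, 1 / FS)
raw = np.sin(2 * np.pi * 12 * t) + 0.5 * np.random.randn(t.size)  # 12 Hz + noise
clean = bandpass(raw)
print(f"variance before: {raw.var():.2f}, after: {clean.var():.2f}")
```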

Accomplishments that we're proud of

We’re proud that MindWeaver can reliably turn minimal signals like gaze, whispers, or brain patterns into meaningful digital actions, enabling true accessibility. We built a safe dual-agent planning system that prevents mistakes, achieved on-device personalization without sacrificing privacy, and created an adaptive interface that reduces effort for users with limited mobility. Most importantly, we proved that small, noisy human inputs can be transformed into powerful, independent control of digital tools and environments.

What we learned

We learned that integrating multiple noisy signals into a coherent intent requires careful fusion and robust modeling, and that safety and usability must be built into every layer, not added later. Lightweight, adaptive confirmations and personalization are crucial for real-world adoption, and even small improvements in signal processing, planning, or interface design can dramatically enhance user independence. Most importantly, we saw that thoughtful system design can turn minimal human input into meaningful, reliable digital actions.

What's next for MindWeaver

Next for MindWeaver is expanding its capabilities to support more tools and environments, improving real-time performance and intent accuracy, and adding richer personalization while maintaining privacy. We aim to integrate advanced BCI inputs, broader smart-home and workplace automation, and collaborative multi-agent planning, making MindWeaver even more seamless and empowering for users with diverse accessibility needs.

Built With

For MindWeaver, we used a combination of AI, software, and cloud technologies:

  • Languages: Python (core AI & integration), JavaScript/TypeScript (UI & web components)
  • Frameworks & libraries: PyTorch (multimodal models), Hugging Face Transformers (LLM planning), ONNX Runtime (edge deployment), NumPy/Pandas (data processing), SciPy (signal processing)
  • Platforms: Windows, Linux, edge devices (for on-device inference), web browsers for UI
  • Cloud services: AWS & GCP for model training, storage, and optional federated learning coordination
  • Databases: SQLite (on-device state), PostgreSQL (centralized logging & analytics)
  • APIs & tools: OpenAPI-compatible tool interfaces, accessibility APIs (OS-level input simulation), smart-home SDKs, REST/gRPC for internal communication
  • Other technologies: differential privacy libraries, ICA/CSP for BCI signal cleaning, Fitts's law for adaptive UI design

This stack allowed MindWeaver to be multimodal, adaptive, safe, and deployable both on-device and in the cloud.
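
As one concrete, hypothetical illustration of the edge-deployment path in this stack, the sketch below exports a toy stand-in for the intent decoder from PyTorch to ONNX and runs it with ONNX Runtime; the model, shapes, and tensor names are placeholders:

```python
"""Toy PyTorch -> ONNX -> ONNX Runtime round trip for edge inference."""
import numpy as np
import torch
import onnxruntime as ort

decoder = torch.nn.Sequential(  # toy stand-in for the intent decoder
    torch.nn.Linear(64, 32), torch.nn.ReLU(), torch.nn.Linear(32, 8)
)
example = torch.randn(1, 64)    # one fused multimodal embedding (assumed size)
torch.onnx.export(decoder, example, "decoder.onnx",
                  input_names=["embedding"], output_names=["intent_logits"])

session = ort.InferenceSession("decoder.onnx",
                               providers=["CPUExecutionProvider"])
logits = session.run(None, {"embedding": example.numpy()})[0]
print("predicted intent id:", int(np.argmax(logits)))
```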