TL;DR: Synapse AI demonstrates how combining RAG with computer-use capabilities can transform healthcare administrative workflows. Our goal is to help patients get treated faster and spend more time with medical professionals by easing the administrative burden on health systems. Built with MongoDB Atlas for vector storage, AWS Bedrock (with Claude 3.5) for LLM interactions and embeddings, and Langchain for orchestration, our system shows how AI can autonomously complete complex multi-step tasks such as preparing Quarterly Quality Improvement Reports. This directly increases the time available for patient care by reducing administrative overhead.
Synapse combines RAG (Atlas + Bedrock + Langchain) with Anthropic's latest LLMs (via AWS) that use computer-use capabilities to get tasks done end to end.

Problem We're Solving
In U.S. hospitals, administrative costs consume 25% of total expenditures, more than $250B annually. A significant portion of this comes from regulatory compliance documentation like Quarterly Quality Improvement (QI) Reports.
Creating a single QI report requires:
- Collecting and analyzing departmental performance data
- Creating visualizations to identify trends
- Writing detailed sections on regulatory compliance
- Compiling best practices recommendations
- Formatting and submitting through multiple systems
In total, a single report can take 5-10 hours to produce.
Overview: Synapse AI
We've combined two powerful capabilities that are rarely integrated:
- RAG pipeline using MongoDB Atlas: Enables context-aware generation by retrieving relevant sections from previous reports, regulatory documents, and best practices guides
- Computer use through Claude 3.5: Allows the AI to control existing software the way a human would: moving the cursor, clicking buttons, and typing text
Our demo shows the system autonomously:
- Opening spreadsheet software to create data visualizations
- Using RAG to write regulatory compliance sections
- Generating best practices recommendations
- Compiling everything into a formatted PDF
- Sending the completed report via email
Technical Implementation
| Component | Technology | Implementation Details |
|---|---|---|
| Vector Store | MongoDB Atlas | - Stores embeddings of historical reports and regulatory docs - Enables semantic search across document corpus - Handles concurrent access for real-time retrieval |
| LLM Provider | AWS Bedrock | - Access to Claude 3.5 with computer use capability - Handles both embeddings and text generation - Single API for all LLM operations |
| Orchestration | Langchain | - Manages RAG pipeline - Coordinates between Atlas and Bedrock - Structures multi-step workflows |
How it Works: High-Level Overview
Document Processing & Indexing
- Langchain document loaders + parsing
- AWS Bedrock generates embeddings for each chunk
- MongoDB Atlas indexes vectors for similarity search
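The indexing steps above can be sketched as follows. This is a minimal illustration, not our production pipeline: `chunk_text` is a simple character-based splitter standing in for Langchain's loaders/splitters, and `embed_chunk` assumes a boto3 `bedrock-runtime` client with AWS credentials and the Titan embedding model (the model ID and chunk sizes here are illustrative choices).

```python
import json

def chunk_text(text, size=500, overlap=50):
    """Split a document into overlapping character chunks for embedding."""
    chunks, start = [], 0
    step = size - overlap
    while start < len(text):
        chunks.append(text[start:start + size])
        start += step
    return chunks

def embed_chunk(bedrock_runtime, chunk, model_id="amazon.titan-embed-text-v1"):
    """Get an embedding vector from AWS Bedrock (requires a boto3
    'bedrock-runtime' client and AWS credentials)."""
    resp = bedrock_runtime.invoke_model(
        modelId=model_id, body=json.dumps({"inputText": chunk})
    )
    return json.loads(resp["body"].read())["embedding"]

# Each (chunk, vector) pair is then inserted into a MongoDB Atlas collection
# that has a vector search index on the "embedding" field, e.g.:
#   collection.insert_one({"text": chunk, "embedding": vector})
```

The overlap between consecutive chunks helps retrieval when a relevant passage straddles a chunk boundary.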
RAG Pipeline
- The computer-use agent's query is vectorized using AWS Bedrock
- MongoDB Atlas performs a similarity search over the indexed chunks
- Retrieved context chunks are ordered by relevance
- Claude (through AWS Bedrock) uses this retrieved context to ground the actions it takes on the computer (below)
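The retrieval step can be expressed as an Atlas `$vectorSearch` aggregation. This is a hedged sketch: the index name `vector_index`, the `embedding` field path, and the oversampling factor are assumptions about how the chunks were stored, and `collection` is a `pymongo` collection handle.

```python
def build_vector_search_stage(query_vector, k=5, index_name="vector_index"):
    """Build the Atlas $vectorSearch aggregation stage (index name and
    'embedding' field path are assumed to match the indexing step)."""
    return {
        "$vectorSearch": {
            "index": index_name,
            "path": "embedding",
            "queryVector": query_vector,
            "numCandidates": 10 * k,  # oversample candidates, keep the top k
            "limit": k,
        }
    }

def retrieve_context(collection, query_vector, k=5):
    """Run the similarity search and return chunks ordered by relevance."""
    pipeline = [
        build_vector_search_stage(query_vector, k),
        {"$project": {"_id": 0, "text": 1,
                      "score": {"$meta": "vectorSearchScore"}}},
    ]
    return list(collection.aggregate(pipeline))
```

The retrieved `text` fields are concatenated into the prompt that grounds Claude's next actions.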
Computer Interaction to complete task
- Claude 3.5 with computer-use capabilities receives a screenshot of the desktop
- Determines the required actions (mouse movement, clicks, typing, etc.)
- Executes those actions in a virtual desktop
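The execution step can be sketched as a small dispatcher. The action shapes below mirror Claude's computer-use tool calls (e.g. `{"action": "mouse_move", "coordinate": [x, y]}`); the `handlers` mapping is our own illustrative abstraction, which in practice would be backed by something like pyautogui on the virtual desktop.

```python
def execute_action(action, handlers):
    """Dispatch one model-proposed action to the virtual desktop.

    `handlers` maps action names to callables; injecting them keeps this
    logic testable without a real GUI (an assumption of this sketch).
    """
    name = action["action"]
    if name == "mouse_move":
        x, y = action["coordinate"]
        return handlers["mouse_move"](x, y)
    if name == "left_click":
        return handlers["left_click"]()
    if name == "type":
        return handlers["type"](action["text"])
    if name == "screenshot":
        return handlers["screenshot"]()  # the image is sent back to Claude
    raise ValueError(f"unsupported action: {name}")
```

The agent loops: screenshot in, action out, execute, repeat, until the report is compiled and sent.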
Future Impact
This project demonstrates a framework for automating complex workflows that we believe could transform administrative efficiency across multiple industries:
Healthcare Impact Potential:
- Up to 25% reduction in administrative overhead
- $60B+ potential annual savings across U.S. healthcare
- Improved regulatory compliance
- More time for direct patient care
The core technology (RAG + computer use) could be extended to any industry with complex administrative workflows and regulatory requirements.