🚀 Inspiration

Wet lab scientists often spend hours in the fume hood, wearing restrictive PPE, juggling pipettes, timers, and instruments—all while trying to log data manually. Data entry disrupts workflows, safety monitoring is limited, and calculations are done too late. We imagined: what if scientists could have a reliable, multi-agent AI assistant watching over the experiment, recording everything, and keeping them safe?


🧪 What it does

Our system is a voice-driven lab assistant that:

  • 🧠 Understands spoken lab instructions and logs experimental data.
  • 📋 Fills out digital experiment tables in real-time.
  • 🔒 Monitors safety-critical parameters (temperature, pressure, gas levels) and alerts or halts the experiment if needed.
  • 🧮 Performs lab-related calculations (like molar ratios or percent yield).
  • 🖥️ Tracks progress through the protocol in a live UI, synced with agent actions.

🛠 How we built it

We used a multi-agent architecture built with:

  • Streamlit for the user interface and data table rendering.
  • OpenAI Whisper and Streamlit's `st.audio_input` widget for voice capture and transcription.
  • Gemini 1.5 Pro for multimodal reasoning and intelligent agent responses.
  • CrewAI + A2A (Agent-to-Agent protocol) to orchestrate agent handoffs (e.g., from voice → data → safety).
  • Weave for observability, media logs, and agent workflow tracing.
  • Google Cloud for running long-lived monitoring agents like the overnight safety system.
  • Fake/mock I/O streams to simulate sensor data (temp, pressure, gases) for safety monitoring.
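The mock sensor stream and safety checks above can be sketched roughly like this. The parameter names and threshold values are illustrative assumptions for the demo, not the project's actual limits:

```python
import random

# Hypothetical safety limits (low, high); real thresholds would come from the protocol.
LIMITS = {
    "temp_c": (15.0, 80.0),
    "pressure_kpa": (90.0, 120.0),
    "co_ppm": (0.0, 35.0),
}

def mock_sensor_reading():
    """Simulate one reading from the fake I/O stream."""
    return {
        "temp_c": random.uniform(20.0, 25.0),
        "pressure_kpa": random.uniform(95.0, 105.0),
        "co_ppm": random.uniform(0.0, 5.0),
    }

def check_safety(reading, limits=LIMITS):
    """Return (parameter, value) pairs that fall outside their allowed range."""
    alerts = []
    for key, (low, high) in limits.items():
        value = reading.get(key)
        if value is not None and not (low <= value <= high):
            alerts.append((key, value))
    return alerts
```

In the demo, a long-lived loop would poll `mock_sensor_reading()` and surface anything `check_safety` returns as an alert in the UI.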

Agents included:

  • Voice Agent – Converts voice to commands.
  • Data Collection Agent – Logs experimental input/output values.
  • Safety Agent – Listens to sensor streams and reacts to anomalies.
  • Calculation Agent – Performs scientific math or literature lookups.
  • Lab Control Agent – (Stretch goal) could turn instruments on/off via voice or API.
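The voice → data → safety handoff can be sketched as plain functions; the real project routes these steps through CrewAI agents, so the function names, toy parser, and dict shapes here are illustrative assumptions rather than the actual agent APIs:

```python
def voice_agent(transcript: str) -> dict:
    """Parse a transcribed utterance into a structured command (toy parser)."""
    # e.g. "log temperature 85" -> {"action": "log", "field": "temperature", "value": 85.0}
    tokens = transcript.lower().split()
    return {"action": tokens[0], "field": tokens[1], "value": float(tokens[2])}

def data_agent(command: dict, table: dict) -> dict:
    """Record the command's value in the experiment table."""
    if command["action"] == "log":
        table[command["field"]] = command["value"]
    return table

def safety_agent(table: dict, temp_limit: float = 80.0) -> list:
    """Flag a logged temperature above a hypothetical limit."""
    temp = table.get("temperature")
    return ["temperature high"] if temp is not None and temp > temp_limit else []

# Handoff chain: voice -> data -> safety
table = data_agent(voice_agent("log temperature 85"), {})
alerts = safety_agent(table)
```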

🧱 Challenges we ran into

  • Orchestrating agents in a live lab workflow required tightly scoped prompts and persistent memory between steps.
  • Voice transcription quality was inconsistent in noisy environments.
  • Streamlit's reactive rerendering made it tricky to persist agent state during interactions.
  • Coordinating UI updates with agent actions without interrupting data flow took fine-tuning.
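The rerendering problem above comes from Streamlit rerunning the entire script on every interaction, so agent state must live in `st.session_state` (a dict-like object). A minimal sketch of the pattern, with illustrative key names rather than the project's actual schema:

```python
def ensure_agent_state(state):
    """Initialize persistent keys once; values survive Streamlit reruns.

    `state` is any dict-like object; in the app it would be st.session_state.
    """
    state.setdefault("experiment_log", [])  # rows of the experiment table
    state.setdefault("protocol_step", 0)    # current step in the protocol
    state.setdefault("alerts", [])          # safety alerts awaiting display
    return state

# Inside the Streamlit app this would be called at the top of the script:
#   import streamlit as st
#   ensure_agent_state(st.session_state)
```

Because `setdefault` only writes missing keys, values set by earlier agent actions are preserved across reruns instead of being reset.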

✅ Accomplishments that we're proud of

  • Created a fully functional end-to-end pipeline from spoken lab command → agentic interpretation → structured data logging.
  • Built a simulated lab environment where safety alerts were triggered and shown in real time.
  • Implemented CrewAI and Weave tracing for agent-to-agent handoffs and observability.
  • Enabled parameter-based computation like molar ratios and percent yield directly from the experiment log.
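The parameter-based computations above are standard stoichiometry; a minimal sketch of what the Calculation Agent evaluates (the function names are ours, not the project's):

```python
def moles(mass_g: float, molar_mass_g_mol: float) -> float:
    """n = m / M (moles from mass and molar mass)."""
    return mass_g / molar_mass_g_mol

def molar_ratio(n_a: float, n_b: float) -> float:
    """Ratio of moles of reagent A to reagent B."""
    return n_a / n_b

def percent_yield(actual_g: float, theoretical_g: float) -> float:
    """Percent yield = actual mass / theoretical mass * 100."""
    return actual_g / theoretical_g * 100.0
```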

📚 What we learned

  • Agent orchestration needs more than just LLMs — tools like A2A and CrewAI make multi-step, multi-role workflows possible.
  • UI and human interaction design are crucial for scientific tools — trust, clarity, and safety matter more than flash.
  • Voice is powerful in constrained environments (like wet labs), but requires redundancy/fallback for critical steps.

🔮 What's next for Research Lab Assistant

  • Integrate with Benchling, SciNote, and other ELNs to sync experiment logs and inventory.
  • Use computer vision to monitor liquid levels, color changes, or pipette positions via lab camera feeds.
  • Enable automated instrument control (centrifuges, UV-Vis, stir plates) via APIs and remote commands.
  • Improve safety response: incorporate phone/SMS alerts or backup shutdown failsafes.
  • Add longitudinal experiment analytics to correlate experimental parameters with outcomes such as UV-Vis spectral quality.
