BenchBrain is revolutionizing how biology researchers interact with their lab notebooks.
By combining AI with the full context of a researcher's notebook, we turn complex biological data into clear, actionable insights. Instead of spending hours combing through literature or running trial-and-error experiments, researchers can rely on BenchBrain to plan next steps, anticipate interactions, run statistical analyses, surface non-obvious connections, and guide experiments in real time.
Inspiration
The inspiration behind BenchBrain came from firsthand experience: we have worked in biology labs for the past seven years. We also spoke with many researchers at this hackathon, including Dr. Gaikwad from Texas Tech, and learned how difficult it is for them to analyze their own experimental data once the wet-lab results have been collected.
We realized that companies like Benchling only solve part of the problem: they make storing research data friendlier and more accessible. But if AI could step in as a "research partner", handling the complex and repetitive parts of the process while surfacing key insights that might otherwise be missed, it could elevate the whole workflow to another level. So we decided to focus on a specific subset of the bio industry: proteins.
We want researchers to focus their main energy on creativity and discovery, not data wrangling - and that's where BenchBrain becomes a true AI collaborator that accelerates scientific breakthroughs.
What it does
BenchBrain takes everything a researcher has done - raw lab notes, protein measurements, assay results - and turns it into actionable insights. It analyzes the data, runs statistical reasoning, cross-references it with public datasets and prior experiments, and visualizes key patterns (such as critical protein interactions, or regions worth the researcher's time) by building protein graphs. It even suggests optimal experimental edits with reasoning, giving researchers a clear path forward - faster, smarter, and with intention.
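The protein-graph idea can be illustrated with a minimal sketch. The interaction pairs and confidence scores below are invented for illustration, not real assay data; the real product derives them from the notebook and scraped literature:

```python
from collections import defaultdict

# Toy interaction list: (protein A, protein B, confidence score).
# These pairs are invented for illustration, not real assay data.
interactions = [
    ("TP53", "MDM2", 0.95),
    ("TP53", "EP300", 0.80),
    ("MDM2", "UBE2D1", 0.60),
    ("TP53", "ATM", 0.88),
]

# Build a weighted adjacency map (the structure behind the graph view).
graph = defaultdict(dict)
for a, b, score in interactions:
    graph[a][b] = score
    graph[b][a] = score

# Rank proteins by total interaction weight to surface "hub" candidates
# a researcher might want to investigate first.
hubs = sorted(graph, key=lambda p: sum(graph[p].values()), reverse=True)
```

In the app, an adjacency structure like this is what gets handed to Cytoscape.js for rendering.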
How we built it
- We built an agent with Letta to act as the research assistant.
- We created in-depth data retrieval via Bright Data for the Letta agent to use.
- Through Letta, we store changes to the notebook in a memory block, so the agent understands everything the researcher enters into the notebook.
- Bright Data then performs extensive web scraping to find every research article or website relevant to what the researcher is currently working on, which Letta determines from its memory.
- We store all of the relevant information, along with the source links, inside Letta's built-in vector database (archival memory), which we can later query with Letta's built-in RAG.
- Every time the researcher saves, Letta receives the new notebook additions and researches relevant information online.
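The save-time pipeline above can be sketched roughly as follows. `ArchivalMemory` and `scrape_related` are simplified placeholders, not the real Letta or Bright Data APIs; the toy keyword search stands in for actual embedding similarity:

```python
class ArchivalMemory:
    """Minimal stand-in for a vector database: stores (text, link) passages."""
    def __init__(self):
        self.passages = []

    def insert(self, text, link):
        self.passages.append({"text": text, "link": link})

    def search(self, query):
        # Toy keyword overlap in place of real embedding similarity.
        terms = set(query.lower().split())
        return [p for p in self.passages
                if terms & set(p["text"].lower().split())]

def scrape_related(notebook_delta):
    # Placeholder for the Bright Data scraping step: returns
    # (article text, source link) pairs relevant to the new notes.
    return [(f"article discussing {notebook_delta}", "https://example.org/paper")]

def on_save(notebook_delta, memory):
    # On every save: scrape related articles and store them with their
    # links so later RAG lookups can cite sources.
    for text, link in scrape_related(notebook_delta):
        memory.insert(text, link)

memory = ArchivalMemory()
on_save("TP53 binding assay results", memory)
hits = memory.search("TP53 assay")
```

Keeping the link alongside each stored passage is what later lets every claim in the analysis carry a fact-checkable citation.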
Once the researcher clicks the "Analyze" button, Letta reads through the entire notebook and runs a RAG search over the database of relevant online information we scraped with Bright Data. It is tasked with finding a new breakthrough, based on the notebook data and the research data, that helps the researcher accomplish something significant. First, to keep the agent from hallucinating, we instruct it to determine useful statistical tests and complete them based on the notebook and online data. It then uses the results of those tests as justification for the breakthrough it proposes. We display all of this to the researcher, along with clear visuals showing what the agent accomplished, and each piece of information includes a relevant link so the researcher can fact-check it. We also include a "Cross-Reference Data" tab so the researcher can easily review all of the links used.
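The statistical-grounding step can be illustrated with one concrete test. Welch's t-test is just one example of a check the agent might pick; the measurements here are invented, not real notebook data:

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples - one example of
    the statistical checks used to ground the agent's conclusions in the
    actual notebook numbers rather than free-form generation."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)
    se = math.sqrt(va / na + vb / nb)
    return (mean(sample_a) - mean(sample_b)) / se

# Invented example measurements (e.g. binding affinity under two
# conditions) -- not real notebook data.
control = [4.1, 3.9, 4.3, 4.0, 4.2]
treated = [5.0, 5.2, 4.8, 5.1, 4.9]
t = welch_t(treated, control)  # a large |t| supports reporting a real effect
```

The agent cites a statistic like this as justification before it is allowed to claim a finding.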
Complete tech stack:
Backend: Python, FastAPI, Uvicorn, Pydantic, Letta, Bright Data
Frontend: React, Vite, Tailwind, Cytoscape.js, Chart.js, Context API
Challenges we ran into
We found a bug in Letta that prevented us from receiving any output when giving it JSON inputs. After talking with the Letta team (and providing some useful feedback), we changed our input system to avoid the bug. More significantly, closer to the deadline, we hit another problem: when a reply from the Letta agent took too long, the Cloudflare API timed out. We had to completely change our architecture so that even if the agent spent a long time thinking, the backend could still receive the response and send it to the frontend. This took a lot of time and involved heavy refactoring.
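The timeout workaround boils down to a job-store pattern: start the slow agent call in the background, return immediately, and let the frontend poll for the result instead of holding one long HTTP request open through the proxy. A simplified sketch (our real backend wraps this in two FastAPI endpoints; `slow_agent_call` is a stand-in, not the Letta API):

```python
import threading
import time
import uuid

# In-memory job store: job id -> status/result. In the real backend
# this sits behind a "start" endpoint and a "poll" endpoint.
jobs = {}

def start_job(slow_fn, *args):
    """Run slow_fn in a background thread and return a job id at once,
    so no single HTTP request has to outlive a proxy timeout."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "running", "result": None}
    def run():
        jobs[job_id]["result"] = slow_fn(*args)
        jobs[job_id]["status"] = "done"
    threading.Thread(target=run, daemon=True).start()
    return job_id

def poll_job(job_id):
    """Cheap, fast lookup the frontend can call repeatedly."""
    return jobs[job_id]

def slow_agent_call(prompt):
    time.sleep(0.1)  # stand-in for a long-running Letta request
    return f"analysis of: {prompt}"

job_id = start_job(slow_agent_call, "notebook entry")
while poll_job(job_id)["status"] != "done":
    time.sleep(0.05)
result = poll_job(job_id)["result"]
```

Each poll request finishes in milliseconds, so Cloudflare never sees a request long enough to time out.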
What we learned
Building BenchBrain was a genuinely challenging experience for us, but it was extremely rewarding because we got to work with technologies we had never used before. Right from ideation, we wanted to push ourselves to experiment with new things and learn along the way.
To be exact:
- We learned how to design and deploy Letta AI agents that chain reasoning steps, interact with external tools through MCP, and retain experiment context.
- We learned how to configure the MCP integration in Letta to create an authenticated scraping pipeline, pulling from validated scientific datasets and converting unstructured data into actionable bio insights.
- We learned how powerful data context is, and how to use tools such as Bright Data to bring that context into these difficult fields.
What's next for BenchBrain
We want to get this into the real world. When we showed our product to researchers, they were taken aback by how much analysis they could get out of their current workflows, and we want to make that a reality.
Built With
- brightdata
- chartjs
- contextapi
- cytoscape
- fastapi
- javascript
- letta
- pydantic
- python
- react
- tailwind
- uvicorn
- vite
