Inspiration

At the start of our brainstorming, we focused on one of the main problems in the workplace: communication. Companies often see their productivity drop because communication is difficult, whether due to employees working across time zones, cultural differences, or decentralized information. The following account from one of our members inspired us to develop this solution:

"From my experience working full time at an investment bank, the flow of information and knowledge transfer has often been a bottleneck. Too frequently a question sends me down a chain of referrals, wasting hours awaiting replies. We aim to solve this problem once and for all with C8 by consolidating relevant firm knowledge across video call, chat, and documentation platforms into an efficiently updated and maintained Neo4J graph. The graph can be edited by agents and humans alike and is a real-time representation of all the firm's ongoing work, like context for an LLM, but much more efficient and compact. So the next time I want to know the source of a strange bug, I can query the graph directly to learn exactly who to reach out to."

What it does

Our solution is a set of AI agents hosted on a website, which companies can use during meetings to transcribe the conversation and ask the AI, by voice, for information relevant to the company project. In addition, there is a chat where employees can ask questions and upload pertinent documents. The collected information can also be displayed as a graph, which the AI agents edit in real time. Beyond the visual display, our solution can reply by audio or text using the data captured from both audio and text, providing a multi-sourced response.

How we built it

We built a multi-agent network containing the following agents:

  1. A transcription agent that turns real-time voice into a real-time transcript, using Amazon Transcribe over a WebSocket streaming connection.
  2. A text agent that answers users' questions and ingests related text or documents to stay up to date, using the gemini-2.5-flash API.
  3. An allocation agent that receives text and transcripts, processes and parses the information, and calls Google Gemini to output a correctly formatted JSON string representation of a Neo4J graph data structure, which is then translated into a Neo4J graph via Cypher.
  4. A speech agent that answers voice questions during meetings, using gpt-5 to generate the response and elevenlabs-tts-2 to transform it into audio.
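The allocation agent's last step, turning the LLM's JSON graph description into Cypher, can be sketched as below. This is a minimal illustration, not our exact implementation: the JSON field names (`nodes`, `edges`, `props`) and the function name are assumptions chosen for the example.

```python
import json


def graph_json_to_cypher(graph_json: str) -> list[str]:
    """Translate a JSON graph description into Cypher MERGE statements.

    Expected (illustrative) shape:
      {"nodes": [{"id": "n1", "label": "Person", "props": {"name": "Ada"}}],
       "edges": [{"from": "n1", "to": "n2", "type": "WORKS_ON"}]}
    """
    graph = json.loads(graph_json)
    statements = []

    # One MERGE per node, so re-running the sequence is idempotent.
    for node in graph.get("nodes", []):
        props = ", ".join(
            f"{key}: {json.dumps(value)}"
            for key, value in node.get("props", {}).items()
        )
        statements.append(
            f'MERGE (n:{node["label"]} {{id: {json.dumps(node["id"])}'
            + (f", {props}" if props else "")
            + "})"
        )

    # Edges match both endpoints by id, then MERGE the relationship.
    for edge in graph.get("edges", []):
        statements.append(
            f'MATCH (a {{id: {json.dumps(edge["from"])}}}), '
            f'(b {{id: {json.dumps(edge["to"])}}}) '
            f'MERGE (a)-[:{edge["type"]}]->(b)'
        )
    return statements
```

Each returned statement could then be executed against Neo4J through the official driver's session API.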

Challenges we ran into

Choosing suitable models for each task: we had to weigh latency, performance, cost-efficiency, and many other factors.

Accomplishments that we're proud of

We built an architecture that is highly extensible, making it open and customizable for users: they can integrate their own agents for specific uses, and the existing agents will be able to discover and call them. The information-gathering sequence requires almost no extra effort from users, and all related data is stored in a structure well suited to future search and use, making the system smarter over time.
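The "integrate your own agents" idea can be sketched as a simple capability registry. The class and method names here are hypothetical, chosen only to illustrate how an existing agent could look up and call a user-supplied one.

```python
from typing import Callable, Dict


class AgentRegistry:
    """Minimal sketch of a pluggable agent network: custom agents register
    a handler under a capability name, and existing agents call them by
    that name without knowing the implementation."""

    def __init__(self) -> None:
        self._agents: Dict[str, Callable[[str], str]] = {}

    def register(self, capability: str, handler: Callable[[str], str]) -> None:
        # A user-supplied agent plugs in its handler here.
        self._agents[capability] = handler

    def call(self, capability: str, payload: str) -> str:
        # An existing agent dispatches work to whichever agent
        # registered the capability.
        if capability not in self._agents:
            raise KeyError(f"no agent registered for {capability!r}")
        return self._agents[capability](payload)


registry = AgentRegistry()
# Example custom agent: a trivial "summarizer" that truncates text.
registry.register("summarize", lambda text: text[:40] + "...")
```

In a real deployment the handlers would wrap model API calls (Gemini, gpt-5, etc.) rather than plain functions, but the dispatch pattern is the same.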

What we learned

  1. How to interconnect AI agents that produce different types of data.
  2. How to organize and store data in a graph structure.

What's next for The Bridger C8

  1. Adding more tool-based agents to the architecture.
  2. Creating a template to build customized agents for specific use.
