Inspiration
When artificial intelligence is used as an aid in academic or professional settings, users increasingly place blind trust in information generated by large language models without understanding how that information is produced. These systems often rely on incomplete, biased, or probabilistic data, which can lead to inaccurate, misleading, or unhelpful responses.
Greater transparency in how AI systems formulate their outputs could change that. By exposing limitations, uncertainty, or reasoning pathways, AI tools can promote critical AI literacy rather than passive consumption: users become better equipped to evaluate the information they receive, which mitigates the risk of misinformation, lowers barriers to meaningful engagement with AI technologies, and encourages more thoughtful, informed interaction.
What it does
Liquid is a chatbot that, for every interaction requiring research, lists every link the bot used as a reference while answering. These sources appear in an easy-to-access sidebar that highlights the user's selected interaction and provides clickable links, letting the user trace exactly where the bot's research took place.
How we built it
Liquid is a React-based web application built using modern functional components and state management with React Hooks. The interface maintains a conversational message history, distinguishing between user input and assistant responses while providing real-time feedback through a “Thinking…” placeholder during API requests.
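The history-plus-placeholder flow described above can be sketched as plain state-update functions, the kind typically passed to a React `setState` call. This is a minimal illustration; the names (`Message`, `appendUserMessage`, `resolveThinking`) are assumptions for this sketch, not Liquid's actual identifiers.

```typescript
type Role = "user" | "assistant";

interface Message {
  role: Role;
  text: string;
  links?: string[];   // sources attached to assistant replies
  pending?: boolean;  // true while the "Thinking…" placeholder is shown
}

// Append the user's input plus a pending assistant placeholder in one update,
// so the "Thinking…" feedback appears immediately while the API call runs.
function appendUserMessage(history: Message[], text: string): Message[] {
  return [
    ...history,
    { role: "user", text },
    { role: "assistant", text: "Thinking…", pending: true },
  ];
}

// Swap the pending placeholder for the model's real answer once it arrives.
function resolveThinking(history: Message[], answer: string): Message[] {
  return history.map((m): Message =>
    m.pending ? { role: "assistant", text: answer } : m
  );
}
```

Keeping these as pure functions of the previous state fits the functional-component style: the same updates can be handed to `setMessages(prev => …)` without mutating the existing history.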
Liquid integrates Google’s Gemini API to generate responses based on the conversation context. Each user query is appended to the conversation history and sent to the model, ensuring continuity and relevance in responses. After generating a primary answer, Liquid performs a second, follow-up request prompting the model to surface relevant source links that support or contextualize its response. These links are then parsed, extracted, and attached directly to the corresponding assistant message.
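The parsing step in that follow-up pass might look like the sketch below: after the second request returns free-form text mentioning sources, anything resembling an http(s) URL is pulled out and deduplicated before being attached to the assistant message. `extractLinks` is an illustrative name, not necessarily the function used in Liquid.

```typescript
// Extract unique http(s) URLs from a model reply, trimming trailing
// punctuation that the model often appends (commas, periods, etc.).
function extractLinks(modelReply: string): string[] {
  const matches = modelReply.match(/https?:\/\/[^\s)\]>"']+/g) ?? [];
  return [...new Set(matches.map((u) => u.replace(/[.,;]+$/, "")))];
}
```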
To keep the user experience intuitive, referenced links are hidden by default and can be toggled through a simple “View” button. This design choice keeps the main conversation uncluttered while still allowing users to explore sources when desired. The UI is structured to emphasize readability and ease of interaction, reinforcing the goal of transparency without overwhelming the user.
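One simple way to model that hidden-by-default behavior is a set of expanded message ids that the "View" button toggles; a message's links render only when its id is in the set. This is a hedged sketch with assumed names, not Liquid's actual implementation.

```typescript
// Toggle whether a given message's source links are shown.
// Returns a new Set so React state comparisons see the change.
function toggleSources(expanded: Set<number>, messageId: number): Set<number> {
  const next = new Set(expanded);
  if (next.has(messageId)) {
    next.delete(messageId);
  } else {
    next.add(messageId);
  }
  return next;
}
```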
Challenges we ran into
One of the primary challenges was ensuring that source links were meaningfully tied to individual responses rather than presented as a separate or disconnected element. Managing asynchronous API calls while maintaining a consistent conversation flow required careful state updates to avoid race conditions or duplicated messages.
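One common way to guard against the race conditions and duplicated messages described above is to tag each outgoing request with an increasing id and ignore any response that is no longer the latest. The sketch below shows that pattern under assumed names; it is an illustration of the general technique rather than Liquid's exact code.

```typescript
// Monotonically increasing id for in-flight requests.
let latestId = 0;

// Called when a request is dispatched; the returned id travels with it.
function nextRequestId(): number {
  return ++latestId;
}

// Called when a response arrives: only the most recent request may
// update the conversation, so stale responses are silently dropped.
function shouldApply(requestId: number): boolean {
  return requestId === latestId;
}
```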
Accomplishments that we're proud of
We are proud that we essentially completed the project we set out to build. The end product fully matches the expectations we had in mind, and both members of our team have expressed interest in using it personally in our other endeavors.
What we learned
Through building Liquid, we gained a deeper understanding of how interface design shapes user trust in AI systems. We learned that transparency does not necessarily require exposing raw model internals; even modest signals—such as accessible sources—can significantly influence how users interpret and evaluate AI output. We also learned the importance of prompt engineering and state management when working with generative models, especially when attempting to guide models toward more accountable and explainable behavior within existing technical constraints.
What's next for Liquid
We hope to use Liquid in our daily lives: for work, for school, and for personal education.