Our Journey Building the Ethical AI Audit Assistant
The Spark of Inspiration: Why We Built This
The rapid acceleration of AI has been both awe-inspiring and, frankly, a little daunting. As we plunged into the world of "vibe coding" — building applications with natural language rather than complex code — we quickly realized something critical: AI's power comes with immense responsibility. We saw countless tools generating content, images, and even code at lightning speed, but often with a crucial question hanging in the air: Is this fair? Is it biased? Can I trust it?
This concern was amplified by stories we'd seen: AI systems unknowingly perpetuating stereotypes, making opaque decisions, or even introducing subtle privacy risks. For citizen developers and small businesses, the very people vibe coding empowers, navigating these ethical minefields is incredibly difficult. They don't have ethics review boards or dedicated AI safety teams.
That's when the idea for the Ethical AI Audit Assistant (EAA) clicked. We were inspired to create a tool that could democratize AI ethics, making it as intuitive to audit AI as it is to prompt it. We wanted to build something that wasn't just innovative but profoundly impactful, aligning perfectly with Bolt.new's focus on AI Innovation and Social Impact.
What We Learned Along the Way
This hackathon was a steep, exhilarating learning curve. We dove deep into several key areas:
- The Nuances of AI Ethics: We gained a much richer understanding of various forms of AI bias (gender, racial, cultural, etc.), the complexities of AI explainability (XAI) techniques, and the importance of privacy-preserving AI. It's not just about "good" or "bad" but about understanding shades of gray and context.
- Multimodal AI for Practical Applications: We learned how powerful LLMs like Gemini 1.5 Pro are at interpreting diverse data types. Going beyond text, we explored how to feed images into the model and prompt it to identify visual biases, which was a significant learning experience.
- Prompt Engineering for Complex Tasks: Crafting prompts for ethical analysis, explanation generation, and remediation suggestions required precision. We learned to break down complex tasks for the LLM, provide clear instructions, and iterate relentlessly to get the desired quality of output.
- Rapid Prototyping with Bolt.new: We discovered the true agility of Bolt.new's environment. Its browser-based IDE and AI agent building capabilities allowed us to spin up a full-stack prototype far quicker than traditional methods, letting us focus more on the AI logic itself.
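To make the prompt-engineering lesson concrete, here is a minimal sketch of breaking an ethical audit into explicit sub-tasks before handing it to the LLM. The function and check wording are our own illustration, not the actual EAA code:

```javascript
// Hypothetical sketch: decomposing an ethical audit into explicit sub-tasks
// so the model receives one clear instruction set per concern.
// Names and check wording are illustrative, not the actual EAA prompts.

const AUDIT_SUBTASKS = [
  "Identify any gender, racial, or cultural stereotypes in the content.",
  "Flag claims or framings that could mislead a reader.",
  "Note any personal or sensitive information that raises privacy concerns.",
];

function buildAuditPrompt(userContent) {
  const steps = AUDIT_SUBTASKS
    .map((task, i) => `${i + 1}. ${task}`)
    .join("\n");
  return [
    "You are an Ethical AI Auditor.",
    "Work through the following checks one at a time:",
    steps,
    "For each check, report a finding and a one-sentence explanation.",
    "--- CONTENT TO AUDIT ---",
    userContent,
  ].join("\n");
}

const prompt = buildAuditPrompt("Our new hiring tool screens resumes automatically.");
console.log(prompt);
```

Numbering the checks and separating instructions from user content were the kinds of small structural choices that, in our experience, made iteration on output quality far faster.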
How We Built It: A Vibe Coding Journey
Our build process was a true embodiment of "vibe coding," powered by Bolt.new:
- UI First (Vibe Prompting the Frontend): We started by prompting Bolt.new to scaffold our React frontend. We described the input fields (text area, image upload), the audit prompt box, and the results display area. Bolt.new rapidly generated the initial components and basic styling, allowing us to visualize the user flow immediately.
- Backend Agent Core (Node.js & Gemini API): Next, we focused on the backend. We prompted Bolt.new to set up a Node.js Express server to handle API requests. The core of our logic involved integrating the Gemini 1.5 Pro API. We crafted an initial prompt for Gemini, instructing it to act as an "Ethical AI Auditor."
- Multimodal Magic: This was a key step. We designed prompts for Gemini to analyze both text and base64-encoded images. For instance, if a user uploaded an image, our backend would send it along with the audit prompt to Gemini, asking it to identify visual biases.
- Explain, Suggest, Repeat: Once Gemini returned its initial findings (bias detected, explanation), we iterated on prompts to generate actionable remediation suggestions. For text, this meant alternative phrasing. For images, it involved broader conceptual advice. We built the frontend to dynamically display these results and allow users to "apply" suggestions, creating a valuable feedback loop.
- Integrating Builder Pack Tools: We enhanced the experience using Bolt.new's Builder Pack. We used 21st.dev to rapidly refine the CSS for a polished, readable audit report display. While we didn't fully integrate ElevenLabs for the core MVP due to time, its potential for audio summaries was clear.
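The multimodal step above can be sketched as a small payload builder. This is a hedged illustration, not our production code: the `inline_data` shape follows the Gemini REST API's documented format for inline image bytes, but field names should be verified against the current API reference before use.

```javascript
// Hypothetical sketch: packaging a user-uploaded image plus the audit
// prompt into one multimodal request body. The inline_data shape follows
// the Gemini REST API; verify field names against current documentation.

function buildMultimodalRequest(auditPrompt, imageBuffer, mimeType) {
  return {
    contents: [
      {
        role: "user",
        parts: [
          { text: auditPrompt },
          {
            inline_data: {
              mime_type: mimeType,
              // Gemini expects the raw image bytes base64-encoded.
              data: imageBuffer.toString("base64"),
            },
          },
        ],
      },
    ],
  };
}

// Example: a tiny fake "image" buffer stands in for a real upload.
const body = buildMultimodalRequest(
  "Identify any visual stereotypes or exclusionary framing in this image.",
  Buffer.from("fake-image-bytes"),
  "image/png"
);
console.log(JSON.stringify(body, null, 2));
```

Keeping the prompt and the image in the same `parts` array is what lets the model reason over both modalities in a single call.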
Challenges We Faced and Overcame
No hackathon project is without its hurdles, and the EAA was no exception:
- Prompt Precision vs. Generality: Balancing the need for Gemini to understand specific ethical nuances while remaining broadly applicable to various user inputs was tough. Initial prompts were either too narrow or too vague. We overcame this by extensive testing, breaking down complex ethical checks into smaller, more manageable sub-tasks for the AI, and refining our instruction sets.
- Multimodal Interpretation Depth: While Gemini is powerful, getting it to consistently identify subtle visual biases and articulate them effectively required careful prompt engineering. We learned to provide very specific examples of what we considered a "bias" in an image context to guide the model.
- Managing LLM Latency & Token Usage: For real-time auditing, LLM response times can be a factor. We optimized our prompts to be concise yet effective, kept inputs within reasonable token limits for efficient operation, and are considering Gemini 1.5 Flash for future speed improvements.
- Hackathon Time Crunch: The biggest challenge was, as always, time! Prioritizing core features, cutting scope without losing impact, and relentlessly focusing on a demonstrable MVP were crucial. We streamlined the UI to its essentials and focused on the core AI agent logic.
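The token-budget concern above can be handled with a crude input trimmer. This sketch assumes a rough heuristic of about four characters per token, which is only an approximation for English text; a real deployment would count tokens via the API rather than estimate them.

```javascript
// Hypothetical sketch of trimming audit input to a token budget.
// Assumes ~4 characters per token, a rough average for English text;
// a production system should use the API's own token-counting support.

const CHARS_PER_TOKEN = 4;

function trimToTokenBudget(text, maxTokens) {
  const maxChars = maxTokens * CHARS_PER_TOKEN;
  if (text.length <= maxChars) return text;
  // Keep the start of the content and mark the cut so the model
  // knows the input was truncated.
  return text.slice(0, maxChars) + "\n[...input truncated to fit token budget]";
}

const short = trimToTokenBudget("brief input", 1000);
const long = trimToTokenBudget("x".repeat(10000), 100);
console.log(short.length, long.length);
```

Shorter, tightly scoped inputs also reduce latency, which matters when users expect an audit to feel interactive.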
Building the Ethical AI Audit Assistant has been a profoundly rewarding experience. We're proud to present a project that not only showcases technical innovation through vibe coding but also strives to build a more responsible and trustworthy AI future.