Inspiration
As artificial intelligence becomes more integrated into everyday products, the regulatory environment around AI is becoming increasingly complex. Developers and startups often focus on building powerful models and applications but may not fully understand which laws, governance frameworks, or compliance standards apply to their systems.
This inspired the idea behind Better Call Nova. We wanted to create a tool that could help developers quickly understand the regulatory risks associated with their AI systems. Instead of reading hundreds of pages of legal documents or governance frameworks, developers should be able to simply describe their AI system and receive clear guidance on risk levels, applicable frameworks, and recommended governance practices.
The goal was to build something that bridges the gap between AI development and AI governance, making responsible AI development easier and more accessible.
What it does
Better Call Nova analyzes a description of an AI system and generates a structured governance assessment. The platform identifies:
- the risk level of the AI system
- relevant regulatory and governance frameworks
- potential compliance concerns
- recommended actions for responsible AI deployment
For example, if a user describes an AI hiring system that ranks job candidates, the system can flag that this use case is classified as high-risk under the EU AI Act, and point the user to relevant guidance such as the NIST AI Risk Management Framework.
In simplified terms, the system evaluates AI governance risk like this:
$$ \text{AI Governance Risk} = f(\text{Use Case}, \text{Data Sensitivity}, \text{Decision Impact}) $$
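As a toy illustration of that scoring function, each factor can map to a score and the sum to a risk tier. The categories, weights, and thresholds below are hypothetical, not Better Call Nova's actual logic:

```python
# Illustrative risk heuristic: each factor maps to a score, and the combined
# score determines a risk tier. All categories and weights are made up for
# this sketch, not the real Better Call Nova scoring.
USE_CASE_RISK = {"hiring": 3, "content_recommendation": 1, "spam_filtering": 0}
DATA_SENSITIVITY = {"biometric": 3, "personal": 2, "public": 0}
DECISION_IMPACT = {"legal_or_employment": 3, "advisory": 1, "cosmetic": 0}

def governance_risk(use_case: str, data: str, impact: str) -> str:
    """Combine the three factors into a coarse risk tier."""
    score = (USE_CASE_RISK.get(use_case, 1)
             + DATA_SENSITIVITY.get(data, 1)
             + DECISION_IMPACT.get(impact, 1))
    if score >= 7:
        return "high"
    if score >= 4:
        return "limited"
    return "minimal"

# A candidate-ranking hiring system scores 3 + 2 + 3 = 8, i.e. "high".
print(governance_risk("hiring", "personal", "legal_or_employment"))
```

In practice the assessment comes from the language model rather than a fixed lookup table, but the intuition is the same: riskier use cases, more sensitive data, and higher-impact decisions push the system into stricter governance tiers.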
This allows Better Call Nova to provide actionable insights that help teams design more transparent and responsible AI systems.
How we built it
The platform was built as a full-stack application that combines modern web technologies with large language models.
The frontend was developed using React, Vite, TypeScript, and Tailwind CSS to provide a simple interface where users can describe their AI systems.
The backend was built with FastAPI and SQLAlchemy, which provides REST API endpoints for creating projects, retrieving data, and running AI analysis.
A PostgreSQL database stores project information and analysis results, allowing the system to maintain a history of AI governance evaluations.
The core analysis is powered by Amazon Nova through AWS Bedrock, which processes the AI system description and generates a structured compliance-style response including frameworks, risk levels, concerns, and recommended actions.
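Conceptually, the backend assembles a prompt and sends it to Nova through Bedrock's Converse API. The sketch below only builds the request body; the model ID and prompt wording are assumptions, and the real call would go through boto3's `bedrock-runtime` client (`client.converse(**request)`):

```python
# Sketch of a request body for Amazon Nova via the Bedrock Converse API.
# Model ID, prompt wording, and inference settings are illustrative.
def build_analysis_request(system_description: str) -> dict:
    prompt = (
        "Assess the following AI system for governance risk. "
        "Respond with JSON containing risk_level, frameworks, "
        "concerns, and recommended_actions.\n\n" + system_description
    )
    return {
        "modelId": "amazon.nova-lite-v1:0",
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 1024, "temperature": 0.2},
    }
```

Asking the model explicitly for a fixed set of JSON fields is what makes the downstream parsing into risk levels, frameworks, concerns, and actions tractable.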
In short, the React frontend sends AI system descriptions to the FastAPI backend, which stores projects in PostgreSQL and calls Amazon Nova through AWS Bedrock to generate each governance assessment.
Challenges we ran into
One of the biggest challenges was ensuring that the AI model returned structured and reliable outputs. Large language models often generate free-form responses, so we needed to design prompts and backend logic that could consistently extract structured information such as risk levels and recommended actions.
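The kind of defensive parsing described above can be sketched as follows; the field names and parsing strategy are illustrative assumptions, not the project's exact implementation:

```python
import json
import re

REQUIRED_FIELDS = {"risk_level", "frameworks", "concerns", "recommended_actions"}

def extract_assessment(raw: str) -> dict:
    """Pull a structured assessment out of free-form model output.

    Models often wrap JSON in explanatory text, so grab the outermost
    {...} span, parse it, and validate that the expected fields exist.
    """
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if not match:
        raise ValueError("no JSON object found in model output")
    data = json.loads(match.group(0))
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"assessment missing fields: {sorted(missing)}")
    return data
```

Validating the fields up front means a malformed model response fails loudly at the API boundary instead of producing a half-empty assessment in the UI.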
Another challenge involved integrating and normalizing regulatory datasets within PostgreSQL. Regulatory information is often stored in complex formats, and adapting it to a relational database schema required multiple iterations.
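A simplified version of the resulting relational shape, sketched here with sqlite3 as a stand-in for PostgreSQL (table and column names are illustrative, not the actual schema):

```python
import sqlite3

# Two-table sketch: projects, and the governance assessments run against
# them. Framework lists are stored as JSON text for simplicity here; a
# real PostgreSQL schema could use a JSONB column or a join table instead.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE projects (
    id          INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    description TEXT NOT NULL
);
CREATE TABLE assessments (
    id         INTEGER PRIMARY KEY,
    project_id INTEGER NOT NULL REFERENCES projects(id),
    risk_level TEXT NOT NULL,
    frameworks TEXT NOT NULL,   -- JSON array of framework names
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
""")
conn.execute("INSERT INTO projects (name, description) VALUES (?, ?)",
             ("hiring-ranker", "Ranks job candidates"))
conn.execute(
    "INSERT INTO assessments (project_id, risk_level, frameworks) VALUES (?, ?, ?)",
    (1, "high", '["EU AI Act", "NIST AI RMF"]'))
```

Keeping assessments in their own table, keyed by project, is what lets the system retain a history of governance evaluations per AI system.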
We also encountered challenges while integrating Amazon Nova through AWS Bedrock, particularly when configuring model access and handling API responses.
Finally, we worked to make the system flexible enough to support different AI use cases and even multilingual descriptions, ensuring the platform could analyze AI systems described in languages beyond English.
What we learned
Through this project we gained valuable insights into both the technical and governance aspects of artificial intelligence.
We learned how to integrate large language models into real applications using Amazon Bedrock, how to design APIs that support AI-powered workflows, and how to structure databases for AI governance analysis.
We also learned more about the complexity of global AI regulations and frameworks such as the EU AI Act and the NIST AI Risk Management Framework, and how difficult it can be for developers to navigate them.
Most importantly, we realized that tools like Better Call Nova can play an important role in helping developers build more transparent, accountable, and responsible AI systems.
Built With
- amazon-web-services
- bedrock
- css
- fastapi
- nova
- python
- react
- sql
- typescript