The inspiration came from the growing anxiety surrounding the ever-changing EU AI Act. This sweeping regulatory framework will affect every company in the European Union that deploys AI-driven solutions. There needs to be an easy way to get guidance on the EU AI Act's risk levels without diving deep into complicated legal text!

What it does

The AI Risk Inspector is an NLP tool that, given a user's text-based description of their AI use case, returns:

1) an estimate of the risk level of the AI product,
2) an explanation of why that product was put into that risk category,
3) for high-risk use cases, a checklist of actions required by the EU AI Act, and
4) tool recommendations to mitigate issues and align the AI use case with the regulation.
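The flow above can be sketched as a lookup from a classifier-predicted tier to the report the tool assembles. This is a minimal illustration: the tier names follow the EU AI Act's four risk categories, but the explanation texts and checklist items are hypothetical placeholders, not the tool's actual wording.

```python
# Hypothetical sketch: map a predicted EU AI Act risk tier to the report
# sections the tool returns. Explanations and checklist items below are
# illustrative placeholders.

RISK_TIERS = {
    "prohibited": {
        "explanation": "The described use case falls under practices banned by the EU AI Act.",
        "checklist": [],
    },
    "high": {
        "explanation": "The use case matches a high-risk category under the EU AI Act.",
        "checklist": [
            "Establish a risk management system",
            "Ensure data governance and quality",
            "Prepare technical documentation",
            "Enable human oversight",
        ],
    },
    "limited": {
        "explanation": "The use case triggers transparency obligations only.",
        "checklist": [],
    },
    "minimal": {
        "explanation": "The use case carries minimal regulatory obligations.",
        "checklist": [],
    },
}

def inspect(predicted_tier: str) -> dict:
    """Return the report section for a classifier-predicted risk tier."""
    info = RISK_TIERS[predicted_tier]
    return {"risk_level": predicted_tier, **info}
```

Only high-risk cases carry a non-empty checklist, matching point 3 above.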

How we built it

The product was developed iteratively from the business requirements. First, a BERT-based NLP model was chosen to classify AI systems into the risk categories proposed by the EU. Several BERT instances were then combined, and a frontend with a user interface was added to deliver the final functionality.
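One common way to combine several classifier instances is "soft voting": average their per-class probabilities and take the argmax. How the team actually combined their BERT instances is not specified, so the sketch below is an assumption, with hypothetical model outputs standing in for real BERT predictions.

```python
# Minimal soft-voting sketch for combining several classifiers. Each model
# is assumed to return a probability per risk tier for one description;
# the numbers below are made up for illustration.

TIERS = ["prohibited", "high", "limited", "minimal"]

def combine(predictions: list[dict[str, float]]) -> str:
    """Average per-tier probabilities across models and return the argmax tier."""
    avg = {t: sum(p[t] for p in predictions) / len(predictions) for t in TIERS}
    return max(avg, key=avg.get)

# Two hypothetical BERT instances scoring the same use-case description:
model_a = {"prohibited": 0.05, "high": 0.70, "limited": 0.15, "minimal": 0.10}
model_b = {"prohibited": 0.10, "high": 0.55, "limited": 0.25, "minimal": 0.10}
print(combine([model_a, model_b]))  # high
```

Averaging smooths out disagreement between instances; a majority vote over hard labels would be the simpler alternative.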

Challenges we ran into

The dataset contained many empty fields and confusing entries, so we spent a lot of time cleaning the data and adding additional use cases. This was especially important for accurately detecting prohibited cases, since (luckily) there are few current European examples.
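A cleaning step like the one described might look as follows. The field names (`description`, `label`) and the example rows are hypothetical; the real dataset's schema is not specified in this write-up.

```python
# Hypothetical sketch of the cleaning step: drop rows with empty fields,
# then append hand-written prohibited-practice examples to make up for
# the scarcity of real ones. Field names and rows are illustrative.

def clean(dataset: list[dict]) -> list[dict]:
    """Keep only rows where both description and label are non-empty."""
    return [row for row in dataset
            if row.get("description", "").strip() and row.get("label", "").strip()]

raw = [
    {"description": "CV screening tool ranking job applicants", "label": "high"},
    {"description": "", "label": "minimal"},                          # dropped
    {"description": "Chatbot for store opening hours", "label": ""},  # dropped
]

extra_prohibited = [
    {"description": "Social scoring of citizens by public authorities",
     "label": "prohibited"},
]

cleaned = clean(raw) + extra_prohibited
print(len(cleaned))  # 2
```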

Accomplishments that we're proud of

Our team dynamics, and that we were able to become semi-experts in the EU AI Act.

What we learned

We learned that diverse teams come up with the best solutions to real-life problems.

What's next for AI Risk Inspector

Hopefully we'll be able to partner with appliedAI or another organisation to bring this product to market and offer it as a subscription service to, for example, larger technology companies, consultancies, and startup hubs.
