Inspiration

With the rise of AI use by individuals and the technology industry, we wanted to raise awareness of a growing threat: AI jailbreaks. The term “jailbreaking” originally meant removing restrictions on mobile devices, but the concept of jailbreaking moved into the AI domain as AI usage became more prevalent. AI jailbreaks happen when hackers use common AI jailbreak tactics like prompt injection attacks and roleplay scenarios to exploit vulnerabilities in AI systems and perform restricted actions, bypassing their ethical guidelines.

Hackers prey on AI chatbots because they’re trained to be helpful, trusting and are therefore susceptible to manipulation through ambiguous or manipulative language. These vulnerabilities highlight the necessity of stricter cybersecurity measures within AI systems, because jailbreaks can significantly compromise AI applications through malicious actors tricking the AI into producing dangerous information. Through spiderbusters, our team wants to introduce mechanisms for safeguarding Large Language Models (LLMs) early on to students to enable wider awareness of how to protect against AI jailbreaks.

What it does

spiderbusters is set in an alternate universe where the villainous spiders are attempting to break through the gate between our world and theirs. Each spider utilizes a different tactic to jailbreak the AI, and the player selects different techniques for guarding against the spiders to learn more about protecting AI systems from similar attacks.

The spiders represent malicious hackers, and the Ghost represents AI systems that defend against them.

How we built it

We built the front-end with HTML and Tailwind, and the back-end we prompted our characters using the Gemini API.

Challenges we ran into

Prompt engineering, API rate limits.

Accomplishments that we're proud of

We're proud of our comprehensive front-end using only hand-drawn elements and incorporating sound engineering to enhance our user experience. We feel that our final project accomplished our mission of making AI security education accessible to a wider student population.

What we learned

We learned much about utilizing the Gemini API and prompt engineering to enhance AI Security.

What's next for spiderbusters

Looking forward, we want to implement further elements in our game to enhance our ability to teach students about protecting against AI jailbreaks, such as an AI voice feature simulating jailbreaks over audio communication. We would also like to extend our character arsenal to encompass further jailbreak challenges for users to work through simulations of.

Share this project:

Updates