TL;DR
A secure AI platform that protects users, companies, and employees from leaking sensitive data while using their favorite AI chatbots.
- From the user's perspective, they get the same AI chat interface, but it automatically prevents unintentional data leaks: for example, it redacts email addresses, passwords, tokens, and other PII.
- From the company's standpoint, it provides enhanced control over employees' use of AI chatbots. Through our tailor-made dashboard, company administrators can identify instances where PII was (or was almost) inadvertently leaked and reach out to the specific employees involved to raise awareness. This comprehensive approach safeguards both company assets and employee data.
Thanks to Pangea.cloud's API services, we were able to accelerate our development: the APIs are simple to integrate and let us focus on the platform's business logic without spending time on peripherals such as registration, login, user management, redaction logic, audit logs, and much more.
References
- Samsung bans use of generative AI tools like ChatGPT after April internal data leak
- Amazon begs employees not to leak corporate secrets to ChatGPT
Inspiration
We were inspired to build this platform by the realization that as AI becomes more ubiquitous, the risk of sensitive data leaks escalates beyond what companies can manage. Witnessing how everyone, from individuals to corporations, relies on AI, we recognized the urgent need for a solution that empowers companies to regain control over their data. Our motivation stems from a determination to offer a robust platform that not only addresses this pressing concern but also ensures that businesses can harness the power of AI without compromising data security. Ultimately, we aim to provide a vital tool in safeguarding sensitive information in an AI-driven world.
What it does
Our tool serves as a comprehensive solution for safeguarding sensitive data in the era of widespread AI usage. By leveraging advanced DLP techniques and powerful access controls, it effectively shields personally identifiable information (PII) from unauthorized access and leaks via AI interaction. Additionally, our platform features a centralized dashboard, providing companies with unprecedented visibility and control over their data protection measures when their employees are using AI systems. Whether it's preventing accidental data leaks or thwarting malicious attacks trying to leak data over AI, our tool empowers businesses to regain control over their data security in an AI-dominated landscape.
How we built it
We wrote the tool in NextJS on top of the Vercel platform, integrating several Pangea.cloud API services, including:
- Audit: to keep track of all user activity while interacting with AI systems, from logins to prompts. This is what enables the company to track potential data leaks and contact any employee who leaks data, accidentally or otherwise.
- Auth: to handle user registration, login and user management.
- Redact: to strip PII and other potential data leaks from prompts before they reach the AI.
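To illustrate the redaction step, here is a minimal, self-contained sketch of prompt redaction using simple regular expressions. This is not the Pangea Redact API (which performs rule-based redaction server-side with centrally managed rules); the patterns and the `redactPrompt` helper below are hypothetical and for illustration only.

```typescript
// Hypothetical sketch of prompt redaction -- NOT the Pangea Redact API.
// Each pattern maps a PII shape to a replacement placeholder.
const PII_PATTERNS: Array<[RegExp, string]> = [
  // Email addresses, e.g. alice@example.com
  [/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g, "<EMAIL_ADDRESS>"],
  // OpenAI-style API keys, e.g. sk-...
  [/\bsk-[A-Za-z0-9]{20,}\b/g, "<API_TOKEN>"],
  // US-style phone numbers, e.g. 555-123-4567
  [/\b\d{3}[-.]\d{3}[-.]\d{4}\b/g, "<PHONE_NUMBER>"],
];

// Replace every matched PII span with its placeholder before the prompt
// is forwarded to the AI chatbot.
function redactPrompt(prompt: string): string {
  return PII_PATTERNS.reduce(
    (text, [pattern, placeholder]) => text.replace(pattern, placeholder),
    prompt,
  );
}
```

In the real platform this step goes through Pangea's Redact service instead, so redaction rules are managed centrally from the dashboard rather than hard-coded in the client.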
Challenges we ran into
We wanted the platform to feel comfortable for all users, so we had to design the chat interface to be as easy as possible for end users. We spent considerable time working out how to achieve this.
Accomplishments that we're proud of
The centralized dashboard, where company admins regain control over their data, is our favorite part of the project. We believe it can be very beneficial for both employees and managers.
What we learned
We discussed the AI DLP issue with many people and learned that leaking PII through AI interactions is far too easy, and that in many cases employees are not doing it on purpose. For example, developers who want help with their code will often copy-paste portions of it that contain PII. That's why an automated platform that prevents data leaks is needed.
What's next for Pangea Secure AI
We are planning to improve the dashboard and to make it easy for admins to contact any employee who leaks data. Education is a key part of data loss prevention in the AI era.
Built With
- nextjs
- openai
- pangea
- react
- vercel