Inspiration

Artists spend countless hours creating work, only to have it scraped online and used to train AI without their knowledge or consent. Some have tried “poisoning” images to protect their art, but this creates inefficiencies, higher costs, and environmental harm in AI training. We wanted to build a solution that respects creators, promotes ethical AI, and keeps innovation sustainable, giving artists control while supporting responsible AI development.

What it does

Inkscape lets artists define exactly how their work can be used in AI training. Creators upload artwork, attach security tags, and set permissions covering general training, fine-tuning, style learning, commercial use, and attribution. AI companies can then scan datasets against these tags, ensuring compliance. When permissions are conditional, Inkscape can generate compliance agreements that align with the artist’s terms. The result is a transparent ecosystem where art is protected, AI is trained responsibly, and creators receive credit when their work is used.
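The permission set described above can be pictured as a small data structure attached to each artwork. This is only an illustrative sketch; the field and class names (`SecurityTag`, `allows`) are assumptions for the example, not Inkscape's actual schema.

```python
from dataclasses import dataclass

# Hypothetical sketch of a security tag's permission set; field names
# are illustrative, not Inkscape's real data model.
@dataclass
class SecurityTag:
    artwork_hash: str              # SHA-256 of the uploaded file
    general_training: bool = False
    fine_tuning: bool = False
    style_learning: bool = False
    commercial_use: bool = False
    attribution_required: bool = True

    def allows(self, use_case: str) -> bool:
        """Return True only if the artist granted the named use case."""
        return bool(getattr(self, use_case, False))

tag = SecurityTag(artwork_hash="ab" * 32, fine_tuning=True)
print(tag.allows("fine_tuning"))     # True
print(tag.allows("commercial_use"))  # False
```

Defaulting every permission to denied means an unset or unknown use case never grants a right by accident.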

How we built it

We built Inkscape as a two-sided web application with separate experiences for artists and AI companies. The frontend is a React application styled with Tailwind CSS and hosted on Vercel, designed to be accessible and easy to use for non-technical creators. Authentication is handled through Auth0 with Google OAuth, allowing users to select a role (artist or company) during onboarding.

On the backend, we used a REST API hosted on Vultr to handle file uploads, SHA-256 hashing, security tag generation, permission storage, and dataset scanning. Metadata, tags, permissions, and compliance events are stored in MongoDB. When AI companies upload datasets, files are hashed and compared against stored artwork hashes to detect exact matches. Additionally, the system computes perceptual hashes and uses Hamming distance to flag visually similar images that may not be byte-for-byte identical.
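The two matching strategies above can be sketched in a few lines. The SHA-256 part mirrors what the text describes; the `average_hash` here is a toy aHash over an already-resized 8x8 grayscale grid (a real pipeline would first resize the image with an image library), included only to show how Hamming distance flags near-duplicates.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    # Exact-match fingerprint: identical bytes -> identical hash.
    return hashlib.sha256(data).hexdigest()

def average_hash(pixels: list, size: int = 8) -> int:
    # Toy average-hash: threshold each pixel of a size*size grayscale
    # grid at the mean brightness, packing the result into one integer.
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    # Number of differing bits between two 64-bit perceptual hashes.
    return bin(a ^ b).count("1")

# Exact duplicate detection:
original = b"artwork bytes"
print(sha256_hex(original) == sha256_hex(original))  # True

# Near-duplicate detection: flag if the hashes differ in only a few bits.
h1 = average_hash([10, 200, 30, 220] * 16)  # 64 "pixels"
h2 = average_hash([12, 198, 33, 219] * 16)  # slightly perturbed copy
print(hamming(h1, h2) <= 10)                # True -> flag as similar
```

The Hamming-distance threshold (10 bits here) is a tunable assumption: lower values catch only close copies, higher values trade more false positives for better recall.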

For conditional permissions, we integrated the Google Gemini API to generate plain-language agreement summaries that strictly respect artist-defined permissions and align with the company's declared use cases, granting no additional rights. The system logs compliance outcomes so artists can see how and when their work is used, creating transparency without blocking AI innovation.
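One way to keep the generated agreement within the artist's terms is to make the constraints explicit in the prompt itself. The sketch below is a hypothetical prompt builder (the wording and structure are assumptions, not Inkscape's actual prompt); the resulting string would then be sent to Gemini.

```python
def build_agreement_prompt(artist_permissions: dict, company_use_case: str) -> str:
    # Hypothetical prompt assembly; Inkscape's real prompt may differ.
    granted = [k for k, v in artist_permissions.items() if v]
    denied = [k for k, v in artist_permissions.items() if not v]
    return (
        "Draft a plain-language compliance agreement.\n"
        f"Granted permissions: {', '.join(granted) or 'none'}.\n"
        f"Denied permissions: {', '.join(denied) or 'none'}.\n"
        f"Company's declared use case: {company_use_case}.\n"
        "Grant no rights beyond the permissions listed above."
    )

prompt = build_agreement_prompt(
    {"fine_tuning": True, "commercial_use": False},
    "fine-tune an illustration model",
)
# The prompt can then be passed to the Gemini client, e.g.
# model.generate_content(prompt) with the google-generativeai library.
print("Grant no rights" in prompt)  # True
```

Enumerating both granted and denied permissions in the prompt gives the model an explicit deny-list, rather than relying on it to infer what was withheld.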

Challenges we ran into

The main challenges we ran into involved integrating our frontend and backend: mismatched endpoints and inconsistent request formats caused errors that kept us going in circles. Next time, we plan to test the integration continuously as we build rather than debugging it all at the end.

Accomplishments that we're proud of

For our first hackathon, we are proud of the idea we came up with and the innovation it could bring, both now and in the future. Our demo is far from a production system, but it serves as a strong proof of concept of what's possible and a first step toward more positive uses of AI.

What we learned

We learned that building ethical AI systems requires clear communication and thoughtful design, not just technical solutions. We also gained hands-on experience integrating a full-stack application under time constraints and learned the value of keeping systems simple, transparent, and user-focused.

What's next for Inkscape

Next, we want to let companies ethically choose the art they train on. We plan to build a gallery where companies can browse creator-uploaded work and clearly see what’s allowed, including use cases and attribution. This makes consent proactive, gives artists visibility, and helps companies source high-quality data responsibly.

Built With

- auth0
- gemini
- mongodb
- react
- tailwind-css
- vercel
- vultr
