Inspiration
In our daily lives, whether at a cafe or walking in the park, we kept hearing curious questions about AI art: "Who's behind this artwork?" "Where did the AI pull these images from?" Everywhere we went, people were buzzing with wonder about the origins of AI-generated pieces.
Every time we heard these questions, an idea took shape in the back of our minds: "What if there were a clear way to answer them?" We imagined a tool that could not only craft captivating visuals but also trace them back to their roots.
What it does
sKrt.ai integrates training-data citation information into the diffusion model's generative process, creating a new, more ethical approach to generative AI art. This approach to diffusion-model training supports consistent artistic integrity and ownership attribution, and has a positive impact on creators.
How we built it
We used a transformer encoder and decoder and replaced the diffusion model's stochastic noise with transformer embeddings, resulting in a novel, interpretable network architecture.
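To make the idea concrete, here is a minimal, hypothetical sketch of one way such attribution could work: instead of seeding the reverse diffusion process with pure Gaussian noise, the initial latent is built as an attention-weighted mix of training-image embeddings, so the mixing weights double as a citation list. All names (`attributed_init`, the toy embeddings) are illustrative assumptions, not the actual sKrt.ai implementation.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of floats."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attributed_init(prompt_emb, train_embs, train_ids):
    """Hypothetical sketch: replace the noise init x_T ~ N(0, I) with an
    attention-weighted combination of training embeddings. The weights
    serve directly as citations of the training data."""
    # similarity of the prompt to each training embedding (dot product)
    scores = [sum(p * t for p, t in zip(prompt_emb, row)) for row in train_embs]
    weights = softmax(scores)  # attribution weights, sum to 1
    d = len(prompt_emb)
    # initial latent: weighted sum of training embeddings
    x_T = [sum(w * row[j] for w, row in zip(weights, train_embs))
           for j in range(d)]
    # the "citation list": training items ranked by contribution
    citations = sorted(zip(train_ids, weights), key=lambda kv: -kv[1])
    return x_T, citations

# Toy example with 3 training images and a 4-dimensional embedding space
train_embs = [[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.5, 0.5, 0.0, 0.0]]
prompt_emb = [1.0, 0.2, 0.0, 0.0]
x_T, citations = attributed_init(prompt_emb, train_embs, ["img_a", "img_b", "img_c"])
```

Because the initial latent is a deterministic function of cited training embeddings rather than random noise, every generated image comes paired with a ranked list of the training items that influenced it.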
Challenges we ran into
Deploying the front end, back end, database, and model inference on Google Cloud was really painful, and we could not finish it in time.
Accomplishments that we're proud of
Inventing a new diffusion model architecture and defeating Google Cloud (partially)! :)
What's next for us
We are looking to scale the sKrt.ai idea up and train it on larger, more varied datasets, maybe even videos. We are extremely passionate about this idea and believe it has the potential to make the world a better place by ensuring transparency and reliability for all.
