Inspiration
The increasing prevalence of deep fakes poses significant risks to privacy, security, and trust in digital media. Deep fakes can be used maliciously to manipulate information, damage reputations, and even commit fraud. These dangers greatly overshadow the genuine utility deep fakes offer people who want to protect their identity online. We were inspired to create SafeFace to remove that risk for the average consumer by providing a secure platform that uses exclusively generated faces for swaps. This ensures that real identities are protected, promoting privacy and data security. We also wanted to offer a creative and fun way for users to create unique avatars and use them safely in digital interactions. The aim is to empower individuals to hide their identities and enhance their online privacy.
What it does
SafeFace allows users to create unique avatars and perform face swaps using these generated faces. Users can select from a range of attributes to customize their avatars, ensuring diverse and unbiased representations. The platform facilitates high-quality face swaps onto target images or videos while ensuring the safety and privacy of the users' data. By restricting face swaps to generated characters, SafeFace eliminates the risk of using real, identifiable faces in potentially harmful ways. This makes SafeFace an essential tool for anyone concerned about their privacy and the potential misuse of their likeness online.
How we built it
We built SafeFace using the following tools and technologies:
- Backend Development: Python and Django for a robust and secure backend.
- Web Development: HTMX for dynamic, interactive pages without a heavy JavaScript framework, styled with Tailwind CSS for a modern and responsive UI.
- Database: SQLite for lightweight, reliable data management.
- APIs and Models:
  - Mobius (Hugging Face) for image generation, ensuring domain-agnostic debiasing and high-quality avatars.
  - Face Fusion (GitHub) for high-quality face swaps, ensuring user privacy and data security.
- Hosting: Deployed on Vast.ai for the demo.
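To illustrate the hypermedia approach described above: with HTMX, the server answers requests with small HTML fragments that the browser swaps into the page, so no client-side framework is needed. The sketch below shows the pattern in plain Python; the names (`avatar_view`, `render_avatar_card`, the attribute values) are illustrative, not the actual SafeFace code, and the Django request/response plumbing is omitted.

```python
# Hypothetical defaults for the sketch -- in SafeFace these would come
# from the user's avatar-customization form.
AVATAR_ATTRIBUTES = {"hair": "short", "eyes": "green"}


def render_avatar_card(avatar_id: str, attributes: dict) -> str:
    """Build the HTML fragment that HTMX swaps into the avatar grid."""
    items = "".join(
        f"<li>{name}: {value}</li>" for name, value in attributes.items()
    )
    return (
        f'<div class="avatar-card" id="avatar-{avatar_id}">'
        f"<ul>{items}</ul>"
        "</div>"
    )


def avatar_view(headers: dict, avatar_id: str) -> str:
    """Return only a fragment for HTMX requests, or a full page otherwise.

    HTMX sets the HX-Request header on every request it issues, which is
    how the server knows a partial response is enough.
    """
    fragment = render_avatar_card(avatar_id, AVATAR_ATTRIBUTES)
    if headers.get("HX-Request") == "true":
        return fragment  # HTMX swaps just this snippet into the page
    return f"<html><body>{fragment}</body></html>"  # full-page fallback
```

Because each response is only a fragment, the payloads stay tiny, which is what let us host on servers with minimal storage.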
Challenges we ran into
One of the significant challenges was integrating the Mobius image-generation model and the Face Fusion model seamlessly into our application. Ensuring that the generated faces were diverse, unbiased, and high quality required fine-tuning and extensive testing. Maintaining data security and privacy while performing face swaps was another critical challenge, which we addressed through careful implementation of security measures and protocols. Using HTMX instead of a traditional frontend framework took some adjustment, but once we became accustomed to the approach it paid off: with little frontend bloat, we were able to host on servers with minimal storage. Another big challenge was getting our hosted code to run on the GPU. While this worked on our local machines, when hosted we were unable to configure the server well enough to take full advantage of its GPUs.
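In hindsight, a device check at startup would have surfaced the server misconfiguration immediately instead of letting inference silently fall back to CPU. A minimal sketch (assuming the swap pipeline runs on PyTorch; `detect_device` is a hypothetical helper, not part of our codebase):

```python
import importlib.util


def detect_device() -> str:
    """Return 'cuda' when a usable GPU is visible, else 'cpu'.

    Wrapping the torch check this way lets the app degrade gracefully
    (and log loudly) when the host is misconfigured -- the failure mode
    we hit on the demo server.
    """
    if importlib.util.find_spec("torch") is None:
        return "cpu"  # torch not installed at all
    import torch

    return "cuda" if torch.cuda.is_available() else "cpu"
```

Logging the result of `detect_device()` at boot, and failing the deploy if it unexpectedly reports `cpu`, would have turned hours of debugging into a one-line error.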
Accomplishments that we're proud of
We are proud of creating a user-friendly and secure platform that effectively addresses the risks associated with deep fakes. Our ability to generate diverse and unbiased avatars using the Mobius model and perform high-quality face swaps with the Face Fusion model is a significant accomplishment. Successfully integrating these models into an interactive web application using HTMX and Django is a testament to our technical skills and collaborative efforts; we were able to complement each other's weaknesses and make progress quickly and consistently. We first heard of the TechJam a month ago and are proud of how much we accomplished in so little time on such an ambitious project.
What we learned
Throughout this project, we learned a great deal about the complexities of current image-generation and face-swapping technologies. We gained valuable experience in integrating advanced models into web applications and in ensuring data security and privacy. We also learned the importance of user experience and the need to create a platform that is both functional and easy to use. The project taught us the value of collaboration and iterative development in building a robust, user-centric application, and challenged us to learn quickly and fail fast so that none of the many obstacles in our development stalled us.
What's next for SafeFace
In the future, we plan to enhance SafeFace by introducing more customization options for avatar creation and improving our generation and face-swap implementations for even higher speeds. There are many ways we can optimize our forms to improve the efficiency of data transfer and speed up the UI; that is both the challenge and the reward of our hypermedia approach. We want to improve our deployment strategy so we can make full use of the GPU and cut generation times, which would allow us to deploy to production. We also aim to expand our platform to support more media formats and offer additional privacy-enhancing features. Our goal is to implement a paid user tier with additional features, and then gradually scale our resources.