Deep learning has seen great success in recent years with Generative Adversarial Networks (GANs), which can generate high-quality outputs comparable to their original inputs. GANs have been widely used to create realistic new pictures and to improve existing ones. On the other hand, GANs can also be used to deceive by generating false data. Fake faces made by GANs, for example, can fool not only humans but also machine learning classifiers, and synthetic photographs can be used maliciously for identification and authentication purposes.
Furthermore, advanced image-editing software such as Adobe Photoshop allows complicated input photographs to be altered and high-quality new images to be created. These tools have improved to the point where they can produce realistic, intricate fake pictures that are difficult to distinguish from genuine ones, and YouTube offers step-by-step tutorials for making them. As a result, these technologies can be used for defamation, impersonation, and the distortion of facts, and with social media, fraudulent material can be shared quickly and widely across the Internet.
💻 What it does
Deforgify is a tool that uses the power of Deep Learning to distinguish Real images from Fake ones. For instance, if someone takes your original image and inserts your face into a murder scene, or photoshops it onto someone else's body, Deforgify will tag it as fake, reducing the chances of it being used to smear you.
Simply submit the image, and the machine learning model will evaluate it and provide a response in a fraction of a second.
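The inference flow could be sketched roughly as follows; the model filename, input size, and class ordering here are assumptions for illustration, not the project's actual code:

```python
# Hypothetical inference sketch. The model path, input size (128x128),
# and [real, fake] class order are assumptions, not the project's actual values.
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image

def classify(img_path, model_path="deforgify.h5", img_size=(128, 128)):
    """Load the saved model, preprocess one image, and return a label."""
    model = load_model(model_path)
    img = image.load_img(img_path, target_size=img_size)
    x = np.expand_dims(image.img_to_array(img) / 255.0, axis=0)  # batch of one
    real_p, fake_p = model.predict(x)[0]  # softmax scores over the two classes
    return "Real" if real_p > fake_p else "Fake"
```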
🤖 Machine Learning Process
📊 Getting the Data and EDA Process
The dataset was taken from Kaggle and you can find it here.
The dataset contains 1288 faces, of which:
- 589 are Real
- 700 are Fake
The "fake" faces collected in this dataset were generated using StyleGAN2, which makes classifying them correctly a harder challenge, even for the human eye.
⚙️ Model Architecture
- We designed a Sequential model with 5 Convolutional layers and 4 Dense layers.
- The first layer starts with 32 filters and a 2x2 kernel.
- The number of filters doubles at each subsequent layer, and the kernel size grows by 1.
- We added Max Pooling layers after the Convolutional layers to reduce over-fitting and computational cost.
- The output of the Convolutional layers is flattened and passed to the Dense layers.
- We started with 512 neurons in the first Dense layer and halved that count over each of the next two Dense layers.
- Dropout layers were also introduced throughout the model to randomly ignore some neurons and reduce over-fitting.
- We used ReLU activation in all layers except the output layer, to introduce non-linearity at a low computational cost.
- Finally, the output layer contains 2 neurons (one per class) with softmax activation.
- The model with the lowest validation loss was saved during training and reloaded before computing the final results.
- The model classified all of the test samples correctly.
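The architecture described above could be sketched in Keras roughly as follows; the input resolution, padding, dropout rates, and checkpoint filename are assumptions, not the project's exact configuration:

```python
# A sketch of the described architecture in Keras. Input size (128x128x3),
# 'same' padding, the 0.3 dropout rate, and the checkpoint path are assumptions.
from tensorflow.keras import Sequential, layers
from tensorflow.keras.callbacks import ModelCheckpoint

model = Sequential([
    layers.Input(shape=(128, 128, 3)),
    # 5 conv layers: filters double at each layer, kernel size grows by 1
    layers.Conv2D(32, 2, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 4, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(256, 5, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(512, 6, padding="same", activation="relu"),
    layers.Flatten(),
    # 4 dense layers: 512 -> 256 -> 128, then the 2-class softmax output
    layers.Dense(512, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(128, activation="relu"),
    layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# Keep only the weights with the lowest validation loss seen during training
checkpoint = ModelCheckpoint("best_model.h5", monitor="val_loss",
                             save_best_only=True)
```

Passing `checkpoint` in the `callbacks` list of `model.fit(...)` reproduces the "save the model with the least validation loss" step described above.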
⚙️ How we built it
- Django: for the backend
- Python: for the backend
- HTML and CSS: for the frontend
- GitHub Pages: for CI/CD and deployment
🤝 Most Creative Use of GitHub
- GitHub makes it easy to implement a CI/CD workflow and simplifies the deployment process.
- Deploying the project on GitHub made it accessible to other people over the network.
- We used GitHub for collaboration: it makes it easy to share code with others, set up a project, and get started.
🛠 Best Usage of CI/CD sponsored by CircleCI
We are using CircleCI for continuous integration and deployment. CircleCI is a free service that provides CI/CD for your applications, and it gives us a great way to build our code, run our tests, and deploy.
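A minimal CircleCI setup for a project like this might look like the sketch below; the job name, Python image tag, and commands are assumptions for illustration, not our actual configuration:

```yaml
# .circleci/config.yml — hypothetical sketch; image tag and commands are assumptions
version: 2.1
jobs:
  build-and-test:
    docker:
      - image: cimg/python:3.9
    steps:
      - checkout
      - run:
          name: Install dependencies
          command: pip install -r requirements.txt
      - run:
          name: Run tests
          command: python manage.py test
workflows:
  main:
    jobs:
      - build-and-test
```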
🧠 Challenges we ran into
After initially achieving 96% accuracy on the testing data, we were relieved, thinking we had built something great that could correctly classify most of the fake images online. We decided to put it to the test by downloading more test images from Google, but the results were utterly unexpected: six of the ten photos we evaluated were incorrectly classified. Disappointed with that outcome, we decided to start over with a fresh dataset and a slightly different strategy, and it worked like magic.
🏅 Accomplishments that we're proud of
- Completing the Project in such a short time frame
- Achieving 100% accuracy on the test set (and 90% on unseen images from Google)
📖 What we learned
- The power of Deep Learning: it can classify images quickly and accurately.
- Working with and deploying a model on the cloud.
- Deploying a web app on GitHub Pages.
- Efficient use of GitHub Actions.
🚀 What's next for Deforgify
- Building a mobile app
- Adding more data to the dataset