Inspiration

I have always wanted to combine privacy-preserving techniques with traditional deep learning methods. In today's era, privacy-preserving machine learning and deep learning techniques are needed so that gradient computations stay encrypted during training: you don't have to upload any of your data to an organisation's server, your data stays safe while the organisation trains its model on your device, and you can even run inference with someone else's model on your own data, keeping both the model and the data encrypted during testing/prediction.

What it does

It uses federated learning for the training part of the model, and secure multi-party computation (SMPC) for the testing part, keeping the dataset, the model, and its predictions encrypted.
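
As a rough illustration of the training side, here is a minimal sketch assuming a PySyft 0.2.x-style API: two virtual workers (illustratively named alice and bob) each hold a shard of the data, the model is sent to whichever worker owns the current batch, trained there, and pulled back, so the raw data never leaves the workers. The tensors below are toy stand-ins for the real MNIST loaders.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torch.optim as optim
    import syft as sy

    hook = sy.TorchHook(torch)                  # extend PyTorch tensors with .send()/.get()
    alice = sy.VirtualWorker(hook, id="alice")  # simulated remote data owners
    bob = sy.VirtualWorker(hook, id="bob")

    # toy data split across the two workers (stand-in for the real MNIST shards)
    data = torch.randn(64, 784)
    target = torch.randint(0, 10, (64,))
    federated_data = [
        (data[:32].send(alice), target[:32].send(alice)),
        (data[32:].send(bob), target[32:].send(bob)),
    ]

    model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
    opt = optim.SGD(model.parameters(), lr=0.1)

    for epoch in range(5):
        for x, y in federated_data:
            model.send(x.location)              # ship the model to the worker holding this shard
            opt.zero_grad()
            output = F.log_softmax(model(x), dim=1)
            loss = F.nll_loss(output, y)        # forward/backward run remotely on that worker
            loss.backward()
            opt.step()
            model.get()                         # pull the updated weights back; the data never moves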

How I built it

I used PyTorch and the PySyft framework to build this project.

Challenges I ran into

It was quite challenging to keep the model and dataset private and secure during both training and testing. That's why I learnt to use virtual workers in PySyft and simulated the actual workflow with them, as sketched below.
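
For the encrypted testing part, the sketch below (again assuming PySyft 0.2.x-era fix_precision()/share() calls and illustrative worker names) secret-shares both the trained model and a test batch between two virtual workers, with a third virtual worker acting as the crypto provider; only the final class predictions are decrypted.

    import torch
    import torch.nn as nn
    import syft as sy

    hook = sy.TorchHook(torch)
    alice = sy.VirtualWorker(hook, id="alice")
    bob = sy.VirtualWorker(hook, id="bob")
    crypto_provider = sy.VirtualWorker(hook, id="crypto_provider")  # deals the precomputed SMPC triples

    model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))  # stand-in for the trained net
    test_images = torch.randn(8, 784)                                          # stand-in for an MNIST test batch

    # encode floats as fixed-precision integers, then secret-share model and data across alice and bob
    enc_model = model.fix_precision().share(alice, bob, crypto_provider=crypto_provider)
    enc_data = test_images.fix_precision().share(alice, bob, crypto_provider=crypto_provider)

    enc_pred = enc_model(enc_data)                         # forward pass computed entirely on shares
    pred = enc_pred.get().float_precision().argmax(dim=1)  # decrypt only the predicted classes
    print(pred)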

Accomplishments that I'm proud of

I was able to achieve good accuracy (about 98%) on unseen test data while keeping the model and dataset encrypted.

What I learned

I learnt how to combine privacy-preserving deep learning techniques with traditional deep learning approaches.

What's next for Neural Style Transfer using VGG19

I will now work on applying PATE analysis to the dataset (MNIST) and on using more complex architectures in the training part.
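
For the planned PATE analysis, a rough sketch of what I have in mind is below. It assumes the PySyft 0.2.x-era pate.perform_analysis helper (the import path has varied across PySyft versions), and the teacher predictions and aggregated student labels here are random placeholders rather than real MNIST teacher outputs.

    import numpy as np
    from syft.frameworks.torch.differential_privacy import pate  # path is version-dependent

    # placeholder teacher ensemble: num_teachers x num_examples matrix of predicted labels
    num_teachers, num_examples, num_labels = 10, 100, 10
    teacher_preds = np.random.randint(0, num_labels, (num_teachers, num_examples))
    student_labels = np.random.randint(0, num_labels, num_examples)  # noisy-aggregated labels for the student

    # estimate the privacy cost (epsilon) of releasing the aggregated labels
    data_dep_eps, data_ind_eps = pate.perform_analysis(
        teacher_preds=teacher_preds,
        indices=student_labels,
        noise_eps=0.1,
        delta=1e-5,
    )
    print("data-dependent epsilon:", data_dep_eps)
    print("data-independent epsilon:", data_ind_eps)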

Built With
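
python, pytorch, pysyft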
