Inspiration

We are deeply concerned about the reliability and trustworthiness of future machine learning algorithms: they will handle increasingly critical tasks and need to do so safely.

What it does

Our solution detects adversarial attacks and runs a recovery process that extracts the underlying image while removing the adversarial perturbation. It runs before the customer's own task, so the downstream model only processes images that have been screened for adversarial content. As such, it fits into existing machine learning deployment pipelines.
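
As a rough illustration of where the filter sits, here is a minimal sketch assuming hypothetical `detector`, `recoverer`, and `task_model` callables (these names are ours, not the project's API):

```python
import torch

def sanitized_predict(image: torch.Tensor, detector, recoverer, task_model):
    """Screen an input for adversarial content before the downstream task sees it."""
    if detector(image):           # flag a suspected adversarial input
        image = recoverer(image)  # reconstruct the underlying genuine image
    return task_model(image)     # the task now runs on a screened input
```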

How we built it

Using established methodologies for crafting adversarial attacks, we generated a large number of genuine and adversarial images, which we use both to detect attacks and to recover the underlying genuine images.
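
One widely used methodology of this kind is the Fast Gradient Sign Method (FGSM) of Goodfellow et al.; the following minimal PyTorch sketch (not our exact generation code) assumes a classifier `model` and inputs scaled to [0, 1]:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    """Craft adversarial images with the Fast Gradient Sign Method."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step each pixel by epsilon along the sign of the loss gradient.
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range
```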

Challenges we ran into

  • Researching untested methods
  • Time management
  • Collaborating entirely online, as one team member was based in Iran
  • Starting from a perfectly clean slate

Accomplishments that we're proud of

  • Built a dataset of adversarial samples
  • Designed both a detection and a defense method for adversarial attacks
  • Implemented a robustness metric (see the sketch after this list)
  • Built an interactive demo website
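
One common choice for such a metric is robust accuracy: the fraction of samples a model still classifies correctly after an attack is applied. A minimal PyTorch sketch, assuming an `attack` callable with the same signature as the FGSM sketch above (names are ours, not the project's):

```python
import torch

def robust_accuracy(model, loader, attack, device="cpu"):
    """Accuracy of `model` on attacked versions of the samples in `loader`."""
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        adv = attack(model, images, labels)  # gradients are needed here
        with torch.no_grad():
            preds = model(adv).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total
```

For example, `robust_accuracy(model, test_loader, fgsm_attack)` reports accuracy under the FGSM sketch above; comparing it with clean accuracy quantifies how much an attack degrades the model.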

What we learned

  • To work in a new team
  • To split the work effectively
  • To have open and honest communication
  • To focus on results and the prototype
  • To build technical expertise

What's next for Deep Image Prior

We are considering the following tasks:

  • Evaluate additional models
  • Scrutinize evaluation, detection, and defense methods
  • Research novel, inherently robust machine learning models
  • Contact autonomous-vehicle companies as potential customers and gather their feedback on the project's results

Built With

  • adversarial-attacks
  • computer-vision
  • deep-learning
  • gpu
  • pytorch
  • robustness
  • streamlit