# ReadMe

## To use

1. The model used is a pretrained MNIST model, which can be downloaded from this link.
2. Place the downloaded file where it can be reached via the `orignal_model_path` variable in `adversarial_attack.py`.
3. The script does not currently support passing arguments from the terminal; specify the target label for the attack via the `target_label` variable in the code before running (see the sketch after this list).
4. Set up by installing the contents of `requirements.txt` in a virtual environment.
5. Run with `python adversarial_attack.py`.
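A minimal sketch of the configuration that steps 2 and 3 refer to. The variable names come from this README, but the filename and the framework are assumptions (PyTorch is shown here; adjust the loading call if the repository actually uses Keras/TensorFlow):

```python
# Sketch of the top of adversarial_attack.py -- assumptions marked below.
import torch

# Step 2: point this at the downloaded pretrained MNIST model.
orignal_model_path = "mnist_pretrained.pt"  # hypothetical filename

# Step 3: the class the attack should push predictions toward (0-9).
target_label = 7

# Assumes the download is a full serialized PyTorch model.
model = torch.load(orignal_model_path)
model.eval()
```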