- The attack uses a pretrained MNIST model, which can be downloaded from this link.
- Place the downloaded file at the path referenced by the `orignal_model_path` variable in `adversarial_attack.py`.
- The script does not currently accept command-line arguments; set the attack's target label via the `target_label` variable in the code before running it.
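Since configuration happens in the code rather than on the command line, the relevant section of `adversarial_attack.py` might look like the following sketch. The variable names `orignal_model_path` and `target_label` come from the description above; the actual path value and target digit are illustrative placeholders.

```python
# Configuration section of adversarial_attack.py (illustrative values).

# Path to the downloaded pretrained MNIST model file; adjust this to
# wherever you saved the download.
orignal_model_path = "models/mnist_pretrained.h5"  # placeholder path

# Digit class (0-9) that the targeted attack should push predictions toward.
target_label = 7
```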
- Set up by installing the contents of `requirements.txt` in a virtual environment.
- Run with `python adversarial_attack.py`.
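The setup and run steps above can be carried out as follows, assuming a Unix-like shell with `python3` on the PATH:

```shell
# Create and activate a virtual environment for the project.
python3 -m venv venv
source venv/bin/activate

# Install the project's dependencies.
pip install -r requirements.txt

# Run the attack script (configure target_label in the code first).
python adversarial_attack.py
```

On Windows, the activation step is `venv\Scripts\activate` instead of `source venv/bin/activate`.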