This project implements the paper "Causal Fair Metric: Bridging Causality, Individual Fairness, and Adversarial Robustness" in Python.
'Robust Deepfake Detection' project for the Deep Learning course at ETH Zurich, 2021. Authors (alphabetic): David Kamm, Nicolas Muntwyler, Alexander Timans, Moritz Vandenhirtz.
An extension of the PuVAE architecture for adversarial robustness
Characterizing Data Point Vulnerability via Average-Case Robustness, UAI 2024
Nearest Category Generalization
Implementation of "Overcoming Adversarial Attacks for HITL Applications"
[SANER 2023] "CLAWSAT: Towards Both Robust and Accurate Code Models" by Jinghan Jia*, Shashank Srikant*, Tamara Mitrovska, Chuang Gan, Shiyu Chang, Sijia Liu, Una-May O'Reilly
Extending Sparse Dictionary Learning Methods for Adversarial Robustness
The all-in-one tool for comprehensive experimentation with adversarial attacks on image recognition.
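Tools like the one above typically implement gradient-based attacks such as FGSM. As a minimal sketch of the idea (a hypothetical toy logistic model, not any listed repository's actual code), one FGSM step perturbs the input in the direction of the sign of the loss gradient:

```python
# Minimal FGSM-style attack sketch in NumPy (toy logistic classifier;
# names and model are illustrative assumptions, not a repo's real API).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One fast-gradient-sign step: x_adv = x + eps * sign(dL/dx)."""
    p = sigmoid(w @ x + b)        # predicted probability of class 1
    grad_x = (p - y) * w          # gradient of BCE loss w.r.t. input x
    return x + eps * np.sign(grad_x)

def bce_loss(x, w, b):
    """Binary cross-entropy loss for true label y = 1."""
    return -np.log(sigmoid(w @ x + b))

# Toy example: fixed weights, one clean point with true label 1.
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.3, -0.4, 1.2])
x_adv = fgsm(x, y=1.0, w=w, b=b, eps=0.1)
print(bce_loss(x_adv, w, b) > bce_loss(x, w, b))  # attack raises the loss
```

The single step here generalizes to PGD by iterating the update and projecting back onto the epsilon-ball after each step.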
Uses the simplex to derive a tighter bound on l1 perturbations for networks with convex activation functions, improving the CROWN algorithm.
Official repository for the paper: "On Adversarial Training without Perturbing all Examples", Accepted at ICLR 2024
This repo implements our paper "Fault-Tolerant Federated Reinforcement Learning with Theoretical Guarantee", accepted at NeurIPS 2021.
[Partial] RADLER: (adversarially) Robust Adversarial Distributional LEaRner
Code for "Adversarially Robust Spiking Neural Networks Through Conversion" [TMLR 2024]
Random Projections for improved Adversarial Robustness
[Pattern Recognition 2024] "Towards Robust Neural Networks via Orthogonal Diversity"
[SRML@ICLR 2022] Robust and Accurate -- Compositional Architectures for Randomized Smoothing
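Randomized smoothing, the technique the entry above builds on, classifies by majority vote under Gaussian input noise and certifies an l2 radius around each input. A minimal sketch under toy assumptions (a hypothetical one-feature base classifier and a simplified Cohen-et-al-style radius, not the paper's compositional architecture):

```python
# Randomized-smoothing sketch in NumPy (toy base classifier; the
# certification uses the simplified radius sigma * Phi^{-1}(p_A)).
import numpy as np
from statistics import NormalDist

def base_classifier(x):
    # Hypothetical base classifier: class 1 iff the first feature is positive.
    return int(x[0] > 0.0)

def smoothed_predict(x, sigma, n, rng):
    """Majority vote over n Gaussian-noised copies of x, plus an
    approximate certified l2 radius (0 when there is no clear majority)."""
    noise = rng.normal(0.0, sigma, size=(n, x.shape[0]))
    votes = np.array([base_classifier(x + d) for d in noise])
    counts = np.bincount(votes, minlength=2)
    top = int(np.argmax(counts))
    # Clip the empirical top-class mass away from 1 so inv_cdf stays finite.
    p_a = min(counts[top] / n, 1.0 - 1.0 / n)
    radius = sigma * NormalDist().inv_cdf(p_a) if p_a > 0.5 else 0.0
    return top, radius

rng = np.random.default_rng(0)
x = np.array([1.0, 0.0])
label, radius = smoothed_predict(x, sigma=1.0, n=1000, rng=rng)
```

In practice the top-class probability is lower-bounded with a confidence interval rather than the raw empirical frequency used here.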
The official implementation of "DataFreeShield: Defending Adversarial Attacks without Training Data" accepted in ICML 2024.
[TMLR 22] "Queried Unlabeled Data Improves and Robustifies Class-Incremental Learning" by Tianlong Chen, Sijia Liu, Shiyu Chang, Lisa Amini, Zhangyang Wang
👀🛡️ Code for the paper “Carefully Blending Adversarial Training and Purification Improves Adversarial Robustness” by Emanuele Ballarin, Alessio Ansuini and Luca Bortolussi (2024)