Here are 408 public repositories matching this topic...
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
Updated Jul 29, 2021 · Python
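A minimal sketch of how an ART evasion attack is typically wired up, assuming a toy PyTorch model and MNIST-shaped random inputs; the model, shapes, and eps value are illustrative assumptions, not taken from the project's documentation.

```python
# Illustrative only: toy model, random inputs, and eps are assumptions.
import numpy as np
import torch.nn as nn
import torch.optim as optim
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Wrap a (toy) PyTorch classifier so ART attacks can query it.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Craft FGSM adversarial examples for a batch of MNIST-shaped inputs.
x = np.random.rand(8, 1, 28, 28).astype(np.float32)
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x)
```

Other evasion attacks in the library follow broadly the same pattern: wrap the model in an estimator, construct the attack, and call its generate method.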
Data augmentation for NLP
Updated Jul 28, 2021 · Jupyter Notebook
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
Updated Jul 6, 2021 · Python
Adversary Emulation Framework
Advbox is a toolbox to generate adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MxNet, Keras, and TensorFlow, and it can benchmark the robustness of machine learning models. Advbox also provides a command-line tool to generate adversarial examples with zero coding.
Updated Jun 8, 2021 · Jupyter Notebook
A Toolbox for Adversarial Robustness Research
Updated Jul 29, 2021 · Jupyter Notebook
Must-read Papers on Textual Adversarial Attack and Defense
A collection of anomaly detection methods (i.i.d./point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule mining, and descriptions for diversity/explanation/interpretability. Analyzes the incorporation of label feedback with ensemble and tree-based detectors. Includes adversarial attacks with Graph Convolutional Networks.
Updated Jul 2, 2021 · Python
PyTorch implementation of adversarial attacks.
Updated Jul 24, 2021 · Python
A PyTorch adversarial library for attack and defense methods on images and graphs
Updated Jul 23, 2021 · Python
A curated list of adversarial attacks and defenses papers on graph-structured data.
A Harder ImageNet Test Set (CVPR 2021)
Updated Mar 1, 2021 · Python
A Model for Natural Language Attack on Text Classification and Inference
Updated Jun 7, 2021 · Python
Implementation of Papers on Adversarial Examples
Updated Jan 19, 2019 · Python
An Open-Source Package for Textual Adversarial Attack.
Updated Jul 27, 2021 · Python
Code for the paper "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks"
Updated Jul 23, 2021 · Python
A Python library for adversarial machine learning focusing on benchmarking adversarial robustness.
Updated May 21, 2021 · Python
Adversarial attacks and defenses on Graph Neural Networks.
Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models (published at ICLR 2018)
Updated Oct 24, 2019 · Python
Implementation of the paper "Adversarial Attacks on Neural Networks for Graph Data".
Updated Jan 7, 2020 · Python
🔥 🔥 Defending Against Deepfakes Using Adversarial Attacks on Conditional Image Translation Networks
Updated May 7, 2020 · Python
DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model
Updated May 21, 2019 · Python
Code for our NeurIPS 2019 paper "You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle"
Updated Sep 23, 2020 · Python
Simple PyTorch implementation of FGSM and I-FGSM
Updated Mar 21, 2018 · Python
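For context, FGSM takes a single step of size eps along the sign of the input gradient of the loss, and I-FGSM repeats smaller steps while clipping back into the eps-ball around the original input. A minimal plain-PyTorch sketch of both (not the repository's code; the function names and the [0, 1] image range are assumptions):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One FGSM step: move x by eps along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def i_fgsm(model, x, y, eps, alpha, steps):
    """Iterative FGSM: repeated alpha-sized steps, clipped to the eps-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv = fgsm(model, x_adv, y, alpha)
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv

# Example call (assumes a trained `model` and a labeled image batch `x`, `y`):
# x_adv = i_fgsm(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10)
```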
Official TensorFlow implementation of "Adversarial Training for Free!", which trains robust models at no extra cost compared to natural training.
Updated Jun 8, 2019 · Python
Code for the CVPR 2019 paper "Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses"
Updated Nov 25, 2020 · Python
Physical adversarial attack for fooling the Faster R-CNN object detector
Updated Jan 13, 2020 · Jupyter Notebook
Implementation of the KDD 2020 paper "Graph Structure Learning for Robust Graph Neural Networks"
Updated Jul 14, 2021 · Python
Implementation of "Decoupled Networks" (CVPR 2018).
Updated Jun 29, 2018 · Python