| Repository | Stars | Issues | License | Language | Last updated | Description |
|---|---|---|---|---|---|---|
| ZeroWangZY/federated-learning | 300 | 0 | mit | | almost 5 years ago | Everything about Federated Learning (papers, tutorials, etc.) |
| ebagdasa/backdoor_federated_learning | 102 | 0 | mit | Python | over 5 years ago | Source code for the paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459) |
| zihao-ai/Awesome-Backdoor-in-Deep-Learning | 73 | 2 | gpl-3.0 | Python | over 2 years ago | A curated list of papers & resources on backdoor attacks and defenses in deep learning. |
| Body-Alhoha/OpenEctasy | 35 | 0 | mit | Java | about 3 years ago | Minecraft server (Bukkit, Spigot, Paper) backdoor, using ow2 asm |
| VinAIResearch/input-aware-backdoor-attack-release | 27 | 0 | mit | Python | about 5 years ago | Input-Aware Dynamic Backdoor Attack (NeurIPS 2020) |
| DreamtaleCore/Refool | 21 | 7 | | Python | almost 4 years ago | |
| moranant/attacking_distributed_learning | 19 | 2 | | Python | almost 3 years ago | An implementation of the paper "A Little Is Enough: Circumventing Defenses For Distributed Learning" (NeurIPS 2019) |
| locuslab/breaking-poisoned-classifier | 17 | 0 | mit | Jupyter Notebook | over 4 years ago | Code for the paper "Poisoned classifiers are not only backdoored, they are fundamentally broken" |
| superrrpotato/Defending-Neural-Backdoors-via-Generative-Distribution-Modeling | 10 | 0 | mit | Jupyter Notebook | about 6 years ago | Code for the NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749 |
| ShihaoZhaoZSH/Video-Backdoor-Attack | 10 | 0 | apache-2.0 | Python | almost 6 years ago | Clean-Label Backdoor Attacks on Video Recognition Models, CVPR 2020 |