| Trusted-AI/adversarial-robustness-toolbox | 4,273 | | 0 | 9 | about 2 years ago | 56 | September 22, 2023 | 145 | mit | Python | Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams |
| TsingZ0/PFLlib | 935 | | 0 | 0 | about 2 years ago | 0 | | 7 | gpl-2.0 | Python | Personalized federated learning simulation platform with non-IID and unbalanced dataset |
| privacytrustlab/ml_privacy_meter | 501 | | 0 | 0 | over 2 years ago | 1 | May 13, 2022 | 12 | mit | Jupyter Notebook | Privacy Meter: An open-source library to audit data privacy in statistical and machine learning algorithms. |
| stratosphereips/awesome-ml-privacy-attacks | 488 | | 0 | 0 | over 2 years ago | 0 | | 0 | | | An awesome list of papers on privacy attacks against machine learning |
| tonybeltramelli/Deep-Spying | 173 | | 0 | 0 | almost 9 years ago | 0 | | 0 | apache-2.0 | Python | Spying using Smartwatch and Deep Learning |
| trailofbits/PrivacyRaven | 172 | | 0 | 0 | almost 3 years ago | 0 | | 36 | apache-2.0 | Python | Privacy Testing for Deep Learning |
| microsoft/robustdg | 160 | | 0 | 0 | almost 3 years ago | 0 | | 13 | mit | Python | Toolkit for building machine learning models that generalize to unseen domains and are robust to privacy and other attacks. |
| bargavj/EvaluatingDPML | 112 | | 0 | 0 | over 3 years ago | 0 | | 1 | mit | Python | This project's goal is to evaluate the privacy leakage of differentially private machine learning models. |
| spring-epfl/mia | 81 | | 0 | 0 | over 4 years ago | 4 | September 23, 2018 | 15 | mit | Python | A library for running membership inference attacks against ML models |
| PrivPkt/PrivPkt | 81 | | 0 | 0 | about 3 years ago | 0 | | 26 | mit | Python | Privacy Preserving Collaborative Encrypted Network Traffic Classification (Differential Privacy, Federated Learning, Membership Inference Attack, Encrypted Traffic Classification) |
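
Several of the projects above (spring-epfl/mia, privacytrustlab/ml_privacy_meter, bargavj/EvaluatingDPML) audit models with membership inference attacks. As a rough illustration of the idea — not the API of any listed library — a minimal loss-threshold attack can be sketched as follows: a sample is guessed to be a training-set member when the model's loss on it is low, since models tend to fit their training data more tightly. All names and the synthetic loss distributions here are hypothetical.

```python
import numpy as np

def loss_threshold_mia(member_losses, nonmember_losses, threshold):
    """Toy loss-threshold membership inference attack.

    Predicts "member" when a sample's loss is below `threshold`,
    then reports attack accuracy over a balanced evaluation set of
    known members and known non-members.
    """
    pred_member = member_losses < threshold        # ideally all True
    pred_nonmember = nonmember_losses < threshold  # ideally all False
    correct = pred_member.sum() + (~pred_nonmember).sum()
    return correct / (len(member_losses) + len(nonmember_losses))

# Hypothetical per-example losses: members are fit more tightly,
# so their losses are drawn from a distribution with a lower mean.
rng = np.random.default_rng(0)
members = rng.exponential(scale=0.2, size=1000)
nonmembers = rng.exponential(scale=1.0, size=1000)

acc = loss_threshold_mia(members, nonmembers, threshold=0.5)
```

Real toolkits refine this baseline in many ways (shadow models, per-example calibrated thresholds, likelihood-ratio tests), but the leakage signal they exploit is the same train/test loss gap.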