GradAttack

Overview

GradAttack is a Python library for easy evaluation of the privacy risks posed by public gradients in Federated Learning, as well as the corresponding mitigation strategies. The current version focuses on the gradient inversion attack for image classification, which recovers private images from public gradients.

Motivation

Recent research shows that sending gradients instead of data in Federated Learning can leak private information (see this growing list of attack papers). These attacks demonstrate that an adversary eavesdropping on a client’s communications (i.e., observing the global model weights and client updates) can accurately reconstruct the client’s private data using a class of techniques known as “gradient inversion attacks”, raising serious concerns about such privacy leakage.
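
At its core, a gradient inversion attack optimizes a dummy input and label so that the gradients they produce match the gradients observed from the victim. The toy gradient-matching loop below (in the spirit of Deep Leakage from Gradients, and independent of GradAttack's own implementation) is only a minimal sketch of this idea; the model, data shapes, and optimizer settings are illustrative assumptions.

# Toy gradient-matching sketch: optimize dummy data so that its gradients
# match the gradients observed from a victim batch. Illustrative only;
# GradAttack's GradientReconstructor implements a far more capable version.
import torch

model = torch.nn.Linear(32, 10)              # stand-in for the shared global model
criterion = torch.nn.CrossEntropyLoss()

# Gradients the adversary observes (computed here from a "private" batch)
x_private = torch.randn(4, 32)
y_private = torch.randint(0, 10, (4,))
observed_grads = torch.autograd.grad(
    criterion(model(x_private), y_private), model.parameters())

# Dummy data and soft labels to be optimized by gradient matching
x_dummy = torch.randn(4, 32, requires_grad=True)
y_dummy = torch.randn(4, 10, requires_grad=True)
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    optimizer.zero_grad()
    dummy_loss = torch.mean(torch.sum(
        -torch.softmax(y_dummy, dim=-1) * torch.log_softmax(model(x_dummy), dim=-1), dim=-1))
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
    grad_diff = sum(((dg - og) ** 2).sum() for dg, og in zip(dummy_grads, observed_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(50):
    optimizer.step(closure)   # drives x_dummy toward the private inputs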

To counter these attacks, researchers have proposed defense mechanisms (see this growing list of defense papers). We are developing this framework to evaluate different defense mechanisms against state-of-the-art attacks.

Why GradAttack?

There are lots of reasons to use GradAttack:

  • 😈   Evaluate the privacy risk of your Federated Learning pipeline by running the various attacks supported by GradAttack against it

  • 💊   Enhance the privacy of your Federated Learning pipeline by applying defenses supported by GradAttack in a plug-and-play fashion

  • 🔧   Research and develop new gradient attacks and defenses by reusing the simple and extensible APIs in GradAttack

Slack Channel

For help and real-time updates related to GradAttack, please join the GradAttack Slack!

Installation

You may install GradAttack directly from PyPI using pip:

pip install gradattack

You can also install from source to get the latest features:

git clone https://github.com/Princeton-SysML/GradAttack
cd GradAttack
pip install -e .
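
Either way, a quick import check (assuming the package installs under the name gradattack) confirms it is visible to your Python environment:

python -c "import gradattack"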

Getting started

To evaluate your model's privacy leakage against the gradient inversion attack, all you need to do is:

  1. Define your deep learning pipeline (the undefined helper objects used below are sketched after step 3)
datamodule = CIFAR10DataModule()
model = create_lightning_module(
    'ResNet18',
    training_loss_metric=loss,
    **hparams,
)
trainer = pl.Trainer(
    gpus=devices,
    check_val_every_n_epoch=1,
    logger=logger,
    max_epochs=args.n_epoch,
    callbacks=[early_stop_callback],
)
# Bundle data, model, and trainer into a GradAttack training pipeline
pipeline = TrainingPipeline(model, datamodule, trainer)
  2. (Optional) Apply defenses to the pipeline
defense_pack = DefensePack(args, logger)
defense_pack.apply_defense(pipeline)
  3. Run training with the pipeline (see the detailed example scripts and bash files in examples)
pipeline.run()
pipeline.test()
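
The snippets in steps 1 and 2 reference a few objects (loss, hparams, devices, logger, args, early_stop_callback) that the example scripts in examples build from command-line flags. The sketch below shows plausible stand-ins so the snippets can be read end to end; the hparams keys and the choice of loss are assumptions, not necessarily the exact arguments create_lightning_module expects.

# Plausible stand-ins (assumptions, not GradAttack's required values) for the
# helper objects used in steps 1 and 2; the real example scripts build these
# from argparse flags.
import argparse
import torch
import pytorch_lightning as pl

loss = torch.nn.CrossEntropyLoss()                  # passed as training_loss_metric
hparams = {"lr": 0.05}                              # placeholder hyperparameters
devices = 1                                         # number of GPUs for pl.Trainer(gpus=...)
logger = pl.loggers.TensorBoardLogger("tb_logs/")   # TensorBoard logger, also passed to DefensePack
args = argparse.Namespace(n_epoch=200)              # parsed command-line arguments
early_stop_callback = pl.callbacks.EarlyStopping(monitor="val_loss")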

You may use the TensorBoard logs to track your training and to compare results of different runs:

tensorboard --logdir PATH_TO_TRAIN_LOGS

Example of training logs

  4. Run an attack on the pipeline (see the detailed example scripts and bash files in examples)
# Fetch a victim batch and define an attack instance
example_batch = pipeline.get_datamodule_batch()
batch_gradients, step_results = pipeline.model.get_batch_gradients(example_batch, 0)
batch_inputs_transform, batch_targets_transform = step_results["transformed_batch"]
attack_instance = GradientReconstructor(
    pipeline,
    ground_truth_inputs=batch_inputs_transform,
    ground_truth_gradients=batch_gradients,
    ground_truth_labels=batch_targets_transform,
)

# Define the attack trainer and launch the attack
attack_trainer = pl.Trainer(
    max_epochs=10000,
)
attack_trainer.fit(attack_instance)

You may use the TensorBoard logs to track your attack and to compare results of different runs:

tensorboard --logdir PATH_TO_ATTACK_LOGS

Example of attack logs

  5. Evaluate the attack results (see examples)
python examples/calc_metric.py --dir PATH_TO_ATTACK_RESULTS
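
calc_metric.py computes the library's own evaluation metrics over the saved attack results. As a quick standalone sanity check, you can also compare a reconstructed image with its ground truth directly, for example with a simple PSNR computation like the sketch below (illustrative code, not part of GradAttack):

# Standalone PSNR check between a reconstruction and its ground truth.
# Illustrative only; this is not examples/calc_metric.py.
import torch

def psnr(reconstructed: torch.Tensor, original: torch.Tensor, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio between two image tensors scaled to [0, max_val]."""
    mse = torch.mean((reconstructed - original) ** 2)
    if mse == 0:
        return float("inf")
    return (20 * torch.log10(torch.tensor(max_val)) - 10 * torch.log10(mse)).item()

# Example with random tensors standing in for real CIFAR-10 images
x_rec, x_true = torch.rand(3, 32, 32), torch.rand(3, 32, 32)
print(f"PSNR: {psnr(x_rec, x_true):.2f} dB")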

Contributing to GradAttack

GradAttack is currently in an "alpha" stage in which we are working to improve its capabilities and design.

Contributions are welcome! See the contributing guide for detailed instructions on how to contribute to our project.

Citing GradAttack

If you want to use GradAttack for your research (much appreciated!), you can cite it as follows:

@inproceedings{huang2021evaluating,
  title={Evaluating Gradient Inversion Attacks and Defenses in Federated Learning},
  author={Huang, Yangsibo and Gupta, Samyak and Song, Zhao and Li, Kai and Arora, Sanjeev},
  booktitle={NeurIPS},
  year={2021}
}

Acknowledgement

This project is supported in part by Ma Huateng Foundation, Schmidt Foundation, NSF, Simons Foundation, ONR and DARPA/SRC. Yangsibo Huang and Samyak Gupta are supported in part by the Princeton Graduate Fellowship. We would like to thank Quanzheng Li, Xiaoxiao Li, Hongxu Yin and Aoxiao Zhong for helpful discussions, and members of Kai Li’s and Sanjeev Arora’s research groups for comments on early versions of this library.
