The official implementation of the IEEE S&P '22 paper "SoK: How Robust is Deep Neural Network Image Classification Watermarking?".

Overview

Watermark-Robustness-Toolbox - Official PyTorch Implementation


This repository contains the official PyTorch implementation of the following paper to appear at IEEE Security and Privacy 2022:

SoK: How Robust is Deep Neural Network Image Classification Watermarking?
Nils Lukas, Edward Jiang, Xinda Li, Florian Kerschbaum
https://arxiv.org/abs/2108.04974

Abstract: Deep Neural Network (DNN) watermarking is a method for provenance verification of DNN models. Watermarking should be robust against watermark removal attacks that derive a surrogate model that evades provenance verification. Many watermarking schemes that claim robustness have been proposed, but their robustness is only validated in isolation against a relatively small set of attacks. There is no systematic, empirical evaluation of these claims against a common, comprehensive set of removal attacks. This uncertainty about a watermarking scheme's robustness makes it difficult to trust its deployment in practice. In this paper, we evaluate whether recently proposed watermarking schemes that claim robustness are robust against a large set of removal attacks. We survey methods from the literature that (i) are known removal attacks, (ii) derive surrogate models but have not been evaluated as removal attacks, and (iii) are novel removal attacks. Weight shifting, transfer learning and smooth retraining are novel removal attacks adapted to the DNN watermarking schemes surveyed in this paper. We propose taxonomies for watermarking schemes and removal attacks. Our empirical evaluation includes an ablation study over sets of parameters for each attack and watermarking scheme on the image classification datasets CIFAR-10 and ImageNet. Surprisingly, our study shows that none of the surveyed watermarking schemes is robust in practice. We find that schemes fail to withstand adaptive attacks and known methods for deriving surrogate models that have not been evaluated as removal attacks. This points to intrinsic flaws in how robustness is currently evaluated. Our evaluation includes a discussion of the runtime of each attack to underpin their practical relevance. While none of the schemes is robust against all attacks, none of the attacks removes all watermarks. We show that attacks can be combined and find combined attacks that remove all watermarks. We show that watermarking schemes need to be evaluated against a more extensive set of removal attacks with a more realistic adversary model. Our source code and a complete dataset of evaluation results will be made publicly available, which allows others to independently verify our conclusions.

Features

All watermarking schemes and removal attacks are configured for the image classification datasets CIFAR-10 (32x32 pixels, 10 classes) and ImageNet (224x224 pixels, 1k classes). We implemented the following watermarking schemes, sorted by their categories:

... and the following removal attacks, sorted by their categories:

Get Started

At this point, the Watermark-Robustness-Toolbox is not available as a standalone pip package, but we are working on making it installable via pip. Below, we describe manual installation and usage. First, install all dependencies via pip.

$ pip install -r requirements.txt

The following four main scripts provide the entire toolbox's functionality:

  • train.py: Pre-trains an unmarked neural network.
  • embed.py: Embeds a watermark into a pre-trained neural network.
  • steal.py: Performs a removal attack against a watermarked neural network.
  • decision_threshold.py: Computes the decision threshold for a watermarking scheme.
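
The decision threshold separates the watermark accuracy of marked models from that of independently trained (null) models. As a conceptual illustration only (not the computation performed by decision_threshold.py), a threshold can be chosen as a high empirical quantile of the watermark accuracies measured on null models:

import numpy as np

# Conceptual sketch: pick a threshold so that at most `false_positive_rate`
# of unmarked (null) models would be falsely verified as watermarked.
# This is NOT the toolbox's decision_threshold.py implementation.
def decision_threshold(null_model_wm_accs, false_positive_rate=0.05):
    accs = np.asarray(null_model_wm_accs, dtype=float)
    return float(np.quantile(accs, 1.0 - false_positive_rate))

# Hypothetical watermark accuracies of independently trained models.
print(decision_threshold([0.08, 0.11, 0.09, 0.12, 0.10]))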

We use the mlconfig library to pass configuration hyperparameters to each script. Configuration files used in our paper for CIFAR-10 and ImageNet can be found in the configs/ directory. Configuration files store all hyperparameters needed to reproduce an experiment.
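
For programmatic use, a configuration file can also be loaded with mlconfig directly. The sketch below is a minimal example; which keys (e.g. model, optimizer) a particular configuration file defines is an assumption here, so consult the files in configs/ for the actual schema.

import mlconfig

# Load a YAML configuration file shipped with the toolbox.
config = mlconfig.load("configs/cifar10/train_configs/resnet.yaml")
print(config)

# mlconfig can instantiate registered classes from config entries, e.g.:
# model = config.model()                             # assumes a 'model' entry
# optimizer = config.optimizer(model.parameters())   # assumes an 'optimizer' entry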

Step 1: Pre-train a Model on CIFAR-10

$ python train.py --config configs/cifar10/train_configs/resnet.yaml

This creates an outputs directory and saves the model checkpoint under outputs/cifar10/null_models/resnet/.
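
To sanity-check the result outside the toolbox, the checkpoint can be opened with plain PyTorch. Whether best.pth stores a raw state_dict or a wrapper dictionary with extra metadata is an assumption here; adjust the snippet accordingly.

import torch

# Inspect the checkpoint written by train.py. Whether it is a raw state_dict
# or a dict with additional metadata is an assumption; adjust as needed.
checkpoint = torch.load("outputs/cifar10/null_models/resnet/best.pth", map_location="cpu")
if isinstance(checkpoint, dict):
    print(list(checkpoint.keys())[:10])
else:
    print(type(checkpoint))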

Step 2: Embed an Adi Watermark

$ python embed.py --wm_config configs/cifar10/wm_configs/adi.yaml \
                  --filename outputs/cifar10/null_models/resnet/best.pth

This embeds an Adi watermark into the pre-trained model from Step 1 and saves (i) the watermarked model and (ii) all data needed to read the watermark under outputs/cifar10/wm/adi/00000_adi/.
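
Conceptually, verifying a backdoor-based watermark such as Adi's amounts to checking the model's accuracy on a secret trigger set. The following generic sketch illustrates that check; it is not the toolbox's own verification code, and trigger_loader is a hypothetical DataLoader over the trigger set.

import torch

def trigger_set_accuracy(model, trigger_loader, device="cpu"):
    """Fraction of trigger-set inputs classified with their assigned labels.
    Generic illustration of backdoor-watermark verification, not the wrt API."""
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for x, y in trigger_loader:
            x, y = x.to(device), y.to(device)
            preds = model(x).argmax(dim=1)
            correct += (preds == y).sum().item()
            total += y.numel()
    return correct / max(total, 1)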

Step 3: Attempt to Remove a Watermark

$ python steal.py --attack_config configs/cifar10/attack_configs/ftal.yaml \
                  --wm_dir outputs/cifar10/wm/adi/00000_adi/

This runs the Fine-Tuning (FTAL) removal attack against the watermarked model and creates a surrogate model stored under outputs/cifar10/attacks/ftal/. The directory also contains human-readable debug files, such as the surrogate model's watermark and test accuracies.
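
The three steps can also be chained from a single Python script to reproduce one experiment end to end; the commands and paths below are taken verbatim from the examples above.

import subprocess

# Reproduce one CIFAR-10 experiment end to end: pre-train, embed Adi, run FTAL.
# All arguments are copied from the commands shown in Steps 1-3.
steps = [
    ["python", "train.py", "--config", "configs/cifar10/train_configs/resnet.yaml"],
    ["python", "embed.py", "--wm_config", "configs/cifar10/wm_configs/adi.yaml",
     "--filename", "outputs/cifar10/null_models/resnet/best.pth"],
    ["python", "steal.py", "--attack_config", "configs/cifar10/attack_configs/ftal.yaml",
     "--wm_dir", "outputs/cifar10/wm/adi/00000_adi/"],
]
for cmd in steps:
    subprocess.run(cmd, check=True)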

Datasets

Our toolbox currently implements custom data loaders (class WRTDataLoader) for the following datasets (a plain-torchvision reference sketch for CIFAR-10 follows the list).

  • CIFAR-10
  • ImageNet (needs manual download)
  • Omniglot (needs manual download)
  • Open Images (needs manual download)
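
As a reference point for what CIFAR-10 access looks like underneath such loaders, here is plain torchvision loading; this is not the WRTDataLoader API, whose own interface lives in the wrt/ source tree.

import torchvision
import torchvision.transforms as T
from torch.utils.data import DataLoader

# Plain torchvision CIFAR-10 loading, shown only as a reference point;
# the toolbox's WRTDataLoader adds its own preprocessing and interfaces.
dataset = torchvision.datasets.CIFAR10(root="data", train=True, download=True,
                                       transform=T.ToTensor())
loader = DataLoader(dataset, batch_size=64, shuffle=True, num_workers=2)
images, labels = next(iter(loader))
print(images.shape, labels.shape)   # torch.Size([64, 3, 32, 32]), torch.Size([64])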

Documentation

We are actively working on documenting the parameters of each watermarking scheme and removal attack. At this point, we can only refer to each method's source code (under wrt/defenses/ and wrt/attacks/). Soon we will host complete documentation of all parameters, so stay tuned!

Contribute

We encourage authors of watermarking schemes or removal attacks to implement their methods in the Watermark-Robustness-Toolbox to make them publicly accessible in a unified framework. Our aim is to improve reproducibility, which makes it easier to evaluate a scheme's robustness. Any contributions or suggestions for improvements are welcome and greatly appreciated. This toolbox is maintained as part of a university project by graduate students.

Reference

The codebase is based on an early version of the Adversarial-Robustness-Toolbox (ART).

Cite our paper

@InProceedings{lukas2022watermarkingsok,
  title={SoK: How Robust is Deep Neural Network Image Classification Watermarking?}, 
  author={Lukas, Nils and Jiang, Edward and Li, Xinda and Kerschbaum, Florian},
  year={2022},
  booktitle={IEEE Symposium on Security and Privacy}
}