Source code of "Hold me tight! Influence of discriminative features on deep network boundaries"

Overview

This is the source code to reproduce the experiments of the NeurIPS 2020 paper "Hold me tight! Influence of discriminative features on deep network boundaries" by Guillermo Ortiz-Jimenez*, Apostolos Modas*, Seyed-Mohsen Moosavi-Dezfooli and Pascal Frossard.

Abstract

Important insights towards the explainability of neural networks reside in the characteristics of their decision boundaries. In this work, we borrow tools from the field of adversarial robustness, and propose a new perspective that relates dataset features to the distance of samples to the decision boundary. This enables us to carefully tweak the position of the training samples and measure the induced changes on the boundaries of CNNs trained on large-scale vision datasets. We use this framework to reveal some intriguing properties of CNNs. Specifically, we rigorously confirm that neural networks exhibit a high invariance to non-discriminative features, and show that very small perturbations of the training samples in certain directions can lead to sudden invariances in the orthogonal ones. This is precisely the mechanism that adversarial training uses to achieve robustness.
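
At the core of this framework is the measurement of the distance from a sample to the decision boundary along chosen directions of the input space. The sketch below is only a minimal illustration of that idea and is not taken from this repository's code: it bisects on the perturbation magnitude along a fixed direction until the model's prediction changes.

```python
import torch

@torch.no_grad()
def margin_along_direction(model, x, direction, max_norm=10.0, steps=30):
    """Estimate the distance from a single input x (shape (1, C, H, W)) to the
    decision boundary along `direction`, by bisecting on the perturbation
    magnitude until the predicted class changes. Illustrative sketch only."""
    model.eval()
    d = direction / direction.flatten().norm()             # unit-norm direction
    y0 = model(x).argmax(dim=1)                             # original prediction

    # If the label never flips within [0, max_norm], report no margin found.
    if (model(x + max_norm * d).argmax(dim=1) == y0).item():
        return None

    lo, hi = 0.0, max_norm
    for _ in range(steps):                                  # bisection on the magnitude
        mid = 0.5 * (lo + hi)
        if (model(x + mid * d).argmax(dim=1) == y0).item():
            lo = mid                                        # still classified as y0
        else:
            hi = mid                                        # label has already flipped
    return hi                                               # smallest flipping magnitude found
```

Roughly speaking, the experiments repeat such measurements along many directions (for example, individual DCT frequency components) and compare the margins before and after the training data has been modified.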

Dependencies

To run our code on a Linux machine with a GPU, install the Python packages in a fresh Anaconda environment:

$ conda env create -f environment.yml
$ conda activate hold_me_tight

Experiments

This repository contains the code to reproduce the experiments of the paper.

You can reproduce these experiments separately using their individual scripts, or have a look at the comprehensive Jupyter notebook.

Pretrained architectures

We also provide a set of pretrained models that we used in our experiments. The exact hyperparameters and settings can be found in the Supplementary material of the paper. All the models are publicly available and can be downloaded from here. In order to execute the scripts using the pretrained models, it is recommended to download them and save them under the Models/Pretrained/ directory.
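
The available models are listed in the table below. As a rough sketch (not the repository's own loading code), a downloaded checkpoint could be restored along the following lines; the file name and the torchvision constructor are placeholders:

```python
import torch
from torchvision.models import resnet18  # placeholder architecture, not the repo's own model definition

# Hypothetical checkpoint name; the actual file names depend on the downloaded archive.
ckpt_path = 'Models/Pretrained/resnet18_cifar10_standard.pth'

model = resnet18(num_classes=10)                   # CIFAR10 has 10 classes
state = torch.load(ckpt_path, map_location='cpu')  # load on CPU; move to GPU afterwards if needed
model.load_state_dict(state)                       # may need state['state_dict'] depending on how it was saved
model.eval()
```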

| Architecture | Dataset | Training method |
| --- | --- | --- |
| LeNet | MNIST | Standard |
| ResNet18 | MNIST | Standard |
| ResNet18 | CIFAR10 | Standard |
| VGG19 | CIFAR10 | Standard |
| DenseNet121 | CIFAR10 | Standard |
| LeNet | Flipped MNIST | Standard + Frequency flip |
| ResNet18 | Flipped MNIST | Standard + Frequency flip |
| ResNet18 | Flipped CIFAR10 | Standard + Frequency flip |
| VGG19 | Flipped CIFAR10 | Standard + Frequency flip |
| DenseNet121 | Flipped CIFAR10 | Standard + Frequency flip |
| ResNet50 | Flipped ImageNet | Standard + Frequency flip |
| ResNet18 | Low-pass CIFAR10 | Standard + Low-pass filtering |
| VGG19 | Low-pass CIFAR10 | Standard + Low-pass filtering |
| DenseNet121 | Low-pass CIFAR10 | Standard + Low-pass filtering |
| Robust LeNet | MNIST | L2 PGD adversarial training (eps = 2) |
| Robust ResNet18 | MNIST | L2 PGD adversarial training (eps = 2) |
| Robust ResNet18 | CIFAR10 | L2 PGD adversarial training (eps = 1) |
| Robust VGG19 | CIFAR10 | L2 PGD adversarial training (eps = 1) |
| Robust DenseNet121 | CIFAR10 | L2 PGD adversarial training (eps = 1) |
| Robust ResNet50 | ImageNet | L2 PGD adversarial training (eps = 3) (copied from here) |
| Robust LeNet | Flipped MNIST | L2 PGD adversarial training (eps = 2) with Dykstra projection + Frequency flip |
| Robust ResNet18 | Flipped MNIST | L2 PGD adversarial training (eps = 2) with Dykstra projection + Frequency flip |
| Robust ResNet18 | Flipped CIFAR10 | L2 PGD adversarial training (eps = 1) with Dykstra projection + Frequency flip |
| Robust VGG19 | Flipped CIFAR10 | L2 PGD adversarial training (eps = 1) with Dykstra projection + Frequency flip |
| Robust DenseNet121 | Flipped CIFAR10 | L2 PGD adversarial training (eps = 1) with Dykstra projection + Frequency flip |
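
The "Frequency flip" and "Low-pass filtering" entries refer to training on datasets whose frequency content was modified beforehand. The snippet below is only our own illustration of what such DCT-based transforms could look like; the exact transforms used in the experiments are defined in this repository and described in the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

def frequency_flip(image):
    """Reverse the 2D DCT spectrum of each channel of an (H, W, C) image,
    so that low- and high-frequency content swap roles. Illustrative only."""
    out = np.empty_like(image, dtype=np.float64)
    for c in range(image.shape[2]):
        coeffs = dctn(image[:, :, c], norm='ortho')              # 2D DCT of one channel
        out[:, :, c] = idctn(coeffs[::-1, ::-1], norm='ortho')   # flip both frequency axes
    return out

def low_pass(image, keep=8):
    """Keep only the `keep` x `keep` lowest-frequency DCT coefficients per channel."""
    out = np.empty_like(image, dtype=np.float64)
    for c in range(image.shape[2]):
        coeffs = dctn(image[:, :, c], norm='ortho')
        mask = np.zeros_like(coeffs)
        mask[:keep, :keep] = 1.0                                  # low frequencies sit in the top-left corner
        out[:, :, c] = idctn(coeffs * mask, norm='ortho')
    return out
```

The "L2 PGD adversarial training" entries use projected gradient descent attacks constrained to an L2 ball of radius eps during training. A generic, self-contained sketch of such an attack is given below; the implementation used for the pretrained models above (including the Dykstra projection variant) differs in its details.

```python
import torch

def pgd_l2_attack(model, x, y, eps, steps=10, alpha=None):
    """Craft adversarial examples for a batch x (N, C, H, W) with labels y,
    keeping the perturbation inside an L2 ball of radius eps. Generic sketch."""
    loss_fn = torch.nn.CrossEntropyLoss()
    alpha = alpha if alpha is not None else 2.5 * eps / steps     # common step-size heuristic
    delta = torch.zeros_like(x, requires_grad=True)               # start from zero perturbation

    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        grad_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        with torch.no_grad():
            delta += alpha * grad / grad_norm                     # L2 steepest-ascent step
            delta_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
            delta *= (eps / delta_norm).clamp(max=1.0)            # project back onto the L2 ball
    # A real training loop would also clamp x + delta to the valid input range.
    return (x + delta).detach()
```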

Reference

If you use this code, or some of the provided models, please cite the following paper:

@InCollection{OrtizModasHMT2020,
  TITLE = {{Hold me tight! Influence of discriminative features on deep network boundaries}},
  AUTHOR = {{Ortiz-Jimenez}, Guillermo and {Modas}, Apostolos and {Moosavi-Dezfooli}, Seyed-Mohsen and Frossard, Pascal},
  BOOKTITLE = {Advances in Neural Information Processing Systems 33},
  MONTH = dec,
  YEAR = {2020}
}