Hierarchical-Bayesian-Defense - Towards Adversarial Robustness of Bayesian Neural Network through Hierarchical Variational Inference (OpenReview)

Overview

Towards Adversarial Robustness of Bayesian Neural Network through Hierarchical Variational Inference [paper]

The baseline of this code is the official repository for this paper. The only change is the BNN regularizer: we replace the standard ELBO with an enhanced Bayesian regularizer based on the hierarchical ELBO.

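The core change can be pictured as an ELBO with two stacked KL terms: one between the weight posterior and its learned prior, and one between that prior and a fixed hyperprior. The sketch below only illustrates this structure under Gaussian assumptions; the function names and the weighting `beta` are ours, not the repository's implementation.

import torch

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, exp(logvar_q)) || N(mu_p, exp(logvar_p)) ), summed over elements
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0
    )

def hierarchical_elbo_loss(nll, w_mu, w_logvar, prior_mu, prior_logvar,
                           hyper_mu, hyper_logvar, beta=1.0):
    # Hierarchical-ELBO-style regularizer (illustrative):
    # data term + KL(q(w) || p(w | theta)) + KL(q(theta) || p(theta)).
    kl_w = gaussian_kl(w_mu, w_logvar, prior_mu, prior_logvar)
    kl_theta = gaussian_kl(prior_mu, prior_logvar, hyper_mu, hyper_logvar)
    return nll + beta * (kl_w + kl_theta)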


Citation

If you find this work helpful, please cite it as:

@misc{lee2021towards,
  title={Towards Adversarial Robustness of Bayesian Neural Network through Hierarchical Variational Inference},
  author={Byung-Kwan Lee and Youngjoon Yu and Yong Man Ro},
  year={2021},
  url={https://openreview.net/forum?id=Cue2ZEBf12}
}

Hierarchical-Bayesian-Defense

Dataset

  • CIFAR10
  • STL10
  • CIFAR100
  • Tiny-ImageNet

Network

  • VGG16 (for CIFAR-10/CIFAR-100/Tiny-ImageNet)
  • Aaron (for STL10)
  • WideResNet (for CIFAR-10/100)

Attack (via torchattacks)

  • PGD attack
  • EOT-PGD attack
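
Attacks are generated with the torchattacks package. A minimal usage sketch, assuming a trained `model` and a batch `images`, `labels`; `eps=0.03` and `steps=10` match the scripts below, while `alpha` and `eot_iter` are illustrative choices:

import torchattacks

# PGD with an L-inf budget of 0.03 and 10 steps
atk = torchattacks.PGD(model, eps=0.03, alpha=0.03 / 4, steps=10, random_start=True)
adv_images = atk(images, labels)

# EOT-PGD averages gradients over several stochastic forward passes,
# which is the appropriate variant when attacking the Bayesian (stochastic) models.
atk_eot = torchattacks.EOTPGD(model, eps=0.03, alpha=0.03 / 4, steps=10, eot_iter=10)
adv_images_eot = atk_eot(images, labels)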

Defense methods

  • adv: Adversarial training
  • adv_vi: Adversarial training with a Bayesian neural network
  • adv_hvi: Adversarial training with an enhanced Bayesian neural network based on the hierarchical ELBO

How to Train

1. Adversarial training

Run train_adv.sh

lr=0.01
steps=10
max_norm=0.03
data=tiny # or `cifar10`, `stl10`, `cifar100`
root=./datasets
model=vgg # vgg for `cifar10`, `cifar100`, or `tiny`; aaron for `stl10`; wide for `cifar10` or `cifar100`
model_out=./checkpoint/${data}_${model}_${max_norm}_adv
echo "Loading: " ${model_out}
CUDA_VISIBLE_DEVICES=0 python ./main_adv.py \
                        --lr ${lr} \
                        --step ${steps} \
                        --max_norm ${max_norm} \
                        --data ${data} \
                        --model ${model} \
                        --root ${root} \
                        --model_out ${model_out}.pth \
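
The script runs standard PGD adversarial training: each batch is perturbed for `steps` iterations inside an L-inf ball of radius `max_norm` before the usual cross-entropy update. A minimal sketch of one training step, with helper names that are illustrative rather than taken from `main_adv.py`:

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, alpha=0.0075, steps=10):
    # Craft an L-inf PGD example around the clean batch x.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv

def adv_train_step(model, optimizer, x, y):
    # Adversarial training: minimize the loss on PGD examples.
    model.eval()                      # craft the attack without touching BN statistics
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()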

2. Adversarial training with BNN

Run train_adv_vi.sh

lr=0.01
steps=10
max_norm=0.03
sigma_0=0.1
init_s=0.1
data=tiny # or `cifar10`, `stl10`, `cifar100`
root=./datasets
model=vgg # vgg for `cifar10`, `cifar100`, or `tiny`; aaron for `stl10`; wide for `cifar10` or `cifar100`
model_out=./checkpoint/${data}_${model}_${max_norm}_adv_vi
echo "Loading: " ${model_out}
CUDA_VISIBLE_DEVICES=0 python3 ./main_adv_vi.py \
                        --lr ${lr} \
                        --step ${steps} \
                        --max_norm ${max_norm} \
                        --sigma_0 ${sigma_0} \
                        --init_s ${init_s} \
                        --data ${data} \
                        --model ${model} \
                        --root ${root} \
                        --model_out ${model_out}.pth \
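
Here `sigma_0` controls the Gaussian prior over the weights and `init_s` the initial scale of the posterior standard deviations. The training objective is an ELBO: adversarial cross-entropy plus a KL term pulling the weight posterior toward that prior. A hedged sketch, treating `sigma_0` as the prior variance and using a hypothetical `posterior_params()` accessor for the per-layer (mean, log-std) pairs:

import math
import torch
import torch.nn.functional as F

def elbo_objective(model, x_adv, y, sigma_0=0.1, kl_scale=1.0):
    # Illustrative ELBO-style loss for adversarial VI training:
    # adversarial cross-entropy + KL( q(w) || N(0, sigma_0 * I) ).
    nll = F.cross_entropy(model(x_adv), y)
    kl = 0.0
    # posterior_params() is a hypothetical accessor returning (mu, log_sigma)
    # pairs for every Bayesian layer; kl_scale is often set to 1 / #train samples.
    for mu, log_sigma in model.posterior_params():
        var = (2.0 * log_sigma).exp()
        kl = kl + 0.5 * torch.sum(
            math.log(sigma_0) - 2.0 * log_sigma + (var + mu ** 2) / sigma_0 - 1.0
        )
    return nll + kl_scale * kl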

3. Adversarial training with enhanced Bayesian regularizer based on hierarchical-ELBO

Run train_adv_hvi.sh

lr=0.01
steps=10
max_norm=0.03
sigma_0=0.1
init_s=0.1
data=tiny # or `cifar10`, `stl10`, `cifar100`
root=./datasets
model=vgg # vgg for `cifar10`, `cifar100`, or `tiny`; aaron for `stl10`; wide for `cifar10` or `cifar100`
model_out=./checkpoint/${data}_${model}_${max_norm}_adv_hvi
echo "Loading: " ${model_out}
CUDA_VISIBLE_DEVICES=0 python3 ./main_adv_hvi.py \
                        --lr ${lr} \
                        --step ${steps} \
                        --max_norm ${max_norm} \
                        --sigma_0 ${sigma_0} \
                        --init_s ${init_s} \
                        --data ${data} \
                        --model ${model} \
                        --root ${root} \
                        --model_out ${model_out}.pth \

How to Test

Testing adversarial robustness

Run acc_under_attack.sh

model=vgg # vgg for `cifar10`, `cifar100`, or `tiny-imagenet`; aaron for `stl10`; wide for `cifar10` or `cifar100`
defense=adv_hvi # or `adv_vi`, `adv`
data=tiny-imagenet # or `cifar10`, `stl10`, `cifar100`
root=./datasets
n_ensemble=50
step=10
max_norm=0.03
echo "Loading" ./checkpoint/${data}_${model}_${max_norm}_${defense}.pth

CUDA_VISIBLE_DEVICES=0 python3 acc_under_attack.py \
    --model $model \
    --defense $defense \
    --data $data \
    --root $root \
    --n_ensemble $n_ensemble \
    --step $step \
    --max_norm $max_norm
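
Robust accuracy averages the softmax outputs of `n_ensemble` stochastic forward passes per adversarial input and takes the argmax. A minimal evaluation sketch, assuming the attack object and data loader are built to match the training setup:

import torch
import torch.nn.functional as F

@torch.no_grad()
def ensemble_predict(model, x, n_ensemble=50):
    # Mean softmax over n_ensemble weight samples from the BNN posterior.
    probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(n_ensemble)])
    return probs.mean(dim=0)

def robust_accuracy(model, loader, attack, n_ensemble=50, device="cuda"):
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = attack(x, y)          # e.g. a torchattacks PGD / EOTPGD instance
        pred = ensemble_predict(model, x_adv, n_ensemble).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total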

How to check the learned parameters and KL divergence

Run check_parameters.sh

model=vgg # vgg for `cifar10`, `cifar100`, or `tiny-imagenet`; aaron for `stl10`; wide for `cifar10` or `cifar100`
defense=adv_hvi # or `adv_vi`
data=tiny-imagenet # or `cifar10`, `stl10`, `cifar100`
max_norm=0.03
echo "Loading" ./checkpoint/${data}_${model}_${max_norm}_${defense}.pth

CUDA_VISIBLE_DEVICES=0 python3 check_parameters.py \
    --model $model \
    --defense $defense \
    --data $data \
    --max_norm $max_norm \

How to check uncertainty by predictive entropy

Run uncertainty.sh

model=vgg # vgg for `cifar10`, `cifar100`, or `tiny-imagenet`; aaron for `stl10`; wide for `cifar10` or `cifar100`
defense=adv_hvi # or `adv_vi`
data=tiny-imagenet # or `cifar10`, `stl10`, `cifar100`
root=./datasets
n_ensemble=50
step=10
max_norm=0.03
echo "Loading" ./checkpoint/${data}_${model}_${max_norm}_${defense}.pth

CUDA_VISIBLE_DEVICES=0 python3 uncertainty.py \
    --model $model \
    --defense $defense \
    --data $data \
    --root $root \
    --n_ensemble $n_ensemble \
    --step $step \
    --max_norm $max_norm
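
Predictive entropy uses the same ensemble average: the Shannon entropy of the mean softmax distribution over `n_ensemble` stochastic passes, which is expected to be higher on adversarial or out-of-distribution inputs than on clean ones. A hedged sketch:

import torch
import torch.nn.functional as F

@torch.no_grad()
def predictive_entropy(model, x, n_ensemble=50):
    # Entropy (in nats, per example) of the mean predictive distribution
    # over n_ensemble stochastic forward passes.
    probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(n_ensemble)])
    mean_probs = probs.mean(dim=0)
    return -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=1)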