
CNN-Filter-DB

An Empirical Investigation of Model-to-Model Distribution Shifts in Trained Convolutional Filters
Paul Gavrikov, Janis Keuper

Distribution shifts of trained 3x3 convolution filters

Paper: https://openreview.net/forum?id=2st0AzxC3mh

Abstract: We present first empirical results from our ongoing investigation of distribution shifts in image data used for various computer vision tasks. Instead of analyzing the original training and test data, we propose to study shifts in the learned weights of trained models. In this work, we focus on the properties of the distributions of the dominantly used 3x3 convolution filter kernels. We collected and publicly provide a data set with over half a billion filters from hundreds of trained CNNs, using a wide range of data sets, architectures, and vision tasks. Our analysis shows interesting distribution shifts (or the lack thereof) between trained filters along different axes of meta-parameters, like data type, task, architecture, or layer depth. We argue that the observed properties are a valuable source for further investigation into a better understanding of the impact of shifts in the input data on the generalization abilities of CNN models, and for novel methods for more robust transfer learning in this domain.

Versions

Version Changes
v1.0 Initial dataset as presented in the NeurIPS 2021 DistShift Workshop

Environment

We executed this with Python 3.8.8 on Linux 3.10.0-1160.24.1.el7.x86_64. The scripts should, however, work with most Python 3 versions and operating systems.

To install all necessary modules please run:

pip install -r requirements.txt

or install these modules manually with your preferred package manager:

numpy==1.21.2
scipy
scikit-learn==0.24.1
matplotlib==3.4.1
pandas==1.1.4
fast-histogram==0.10
KDEpy==1.1.0
tqdm==4.53.0
colorcet==2.0.6
h5py==3.1.0
tables==3.6.1

Prepare

Download dataset.h5 from https://kaggle.com/paulgavrikov/cnn-filter-db. This file contains the filters and meta information as individual datasets.

The filters are stored as an Nx9 numpy.float32 array under the /filter dataset. Every row is one filter, and the row number is also the filter ID (i.e. the first row is filter ID 0). To reshape a filter f back to its original shape, use f.reshape(3, 3).
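For illustration, a minimal sketch of reading a single filter with h5py (the file name dataset.h5 and its location in the working directory are assumptions):

import h5py

# Read row 0 of the Nx9 /filter array (= filter ID 0) and restore its 3x3 shape.
with h5py.File("dataset.h5", "r") as f:
    kernel = f["filter"][0].reshape(3, 3)
print(kernel)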

The meta information is stored as a pandas.DataFrame under /meta. The following is an unordered list of column keys with a short description; other column keys can and should be ignored. The table has a MultiIndex on [model_id, conv_depth, conv_depth_norm]. A short usage sketch combining /meta with the filter array follows the column list.

Column Description
model_id Unique int ID of the model.
conv_depth Convolution depth of the extracted filter, i.e. how many convolution layers were hierarchically below the layer this filter was extracted from.
conv_depth_norm Similar to conv_depth but normalized by the maximum conv_depth. Will be a float between 0 (first layers) and 1 (towards the head).
filter_ids List of Filter IDs that belong to this record. These can directly be mapped to the rows of the filter array.
model Unique string ID of the model. Typically, but not reliably, in the format {name}{trainingset}{onnx opset}.
producer Producer of the ONNX export. Typically various versions of PyTorch.
op_set Version of the ONNX operator set used for export.
depth Total hierarchical depth of the model including all layers.
Name Name of the model. Not necessarily unique.
Paper Link to the Paper. Not always populated.
Pretraining-Dataset Name of the pretraining data set(s) if pretrained. Multiple data sets are separated by commas.
Training-Dataset Name of the training data set(s). Multiple data sets are separated by commas.
Datatype Visual, manual categorization of the training data sets.
Task Task of the model.
Accessible Represents where the model can be found. Typically this is a link to GitHub.
Dataset URL URL of the training dataset. Usually only entered for exotic datasets.
total_filters Total number of convolution filters in this model.
3x3_filter_share The share of 3x3 filters compared to all other conv filters.
(X, Y) filters Represents how often filters of shape (X, Y) were found in the source model.
Conv, Add, Relu, MaxPool, Reshape, MatMul, Transpose, BatchNormalization, Concat, Shape, Gather, Softmax, Slice, Unsqueeze, Mul, Exp, Sub, Div, Pad, InstanceNormalization, Upsample, Cast, Floor, Clip, ReduceMean, LeakyRelu, ConvTranspose, Tanh, GlobalAveragePool, Gemm, ConstantOfShape, Flatten, Squeeze, Less, Loop, Split, Min, Tile, Sigmoid, NonMaxSuppression, TopK, ReduceMin, AveragePool, Dropout, Where, Equal, Expand, Pow, Sqrt, Erf, Neg, Resize, LRN, LogSoftmax, Identity, Ceil, Round, Elu, Log, Range, GatherElements, ScatterND, RandomNormalLike, PRelu, Sum, ReduceSum, NonZero, Not Represents how often this ONNX operator was found in the original model. Please note that individual operators may have been fused in later ONNX opsets.
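As a usage example, here is a minimal sketch that joins the meta table with the filter array (the file name dataset.h5 is an assumption; the dataset keys and the filter_ids column follow the description above):

import h5py
import pandas as pd

dataset_path = "dataset.h5"  # assumption: file downloaded to the working directory

# /meta was written by pandas, so read it back through pandas (needs the tables module).
meta = pd.read_hdf(dataset_path, key="meta")

# Take the first record and map its filter IDs to rows of the /filter array.
ids = sorted(meta.iloc[0]["filter_ids"])  # h5py fancy indexing requires increasing indices

with h5py.File(dataset_path, "r") as f:
    kernels = f["filter"][ids].reshape(-1, 3, 3)

print(kernels.shape)  # (number of filters in this record, 3, 3)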

Run

Adjust dataset_path in https://github.com/paulgavrikov/CNN-Filter-DB/blob/main/main.ipynb and run the cells.
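If you want a quick look at the data outside the notebook, the following sketch plots the first 16 filters of one record (this particular mini-analysis is an illustrative assumption, not the notebook's content; matplotlib is part of the requirements):

import h5py
import matplotlib.pyplot as plt
import pandas as pd

dataset_path = "dataset.h5"  # adjust, just as you would in main.ipynb

meta = pd.read_hdf(dataset_path, key="meta")
ids = sorted(meta.iloc[0]["filter_ids"])[:16]  # first 16 filter IDs of the first record

with h5py.File(dataset_path, "r") as f:
    kernels = f["filter"][ids].reshape(-1, 3, 3)

# Plot each 3x3 kernel as a small grayscale image.
fig, axes = plt.subplots(4, 4, figsize=(6, 6))
for ax in axes.flat:
    ax.axis("off")
for ax, k in zip(axes.flat, kernels):
    ax.imshow(k, cmap="gray")
plt.show()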

Citation

If you find our work useful in your research, please consider citing:

@inproceedings{gavrikov2021an,
title={An Empirical Investigation of Model-to-Model Distribution Shifts in Trained Convolutional Filters},
author={Gavrikov, Paul and Keuper, Janis},
booktitle={NeurIPS 2021 Workshop on Distribution Shifts: Connecting Methods and Applications},
year={2021},
url={https://openreview.net/forum?id=2st0AzxC3mh}
}