PyTorch implementation of our paper under review: Lottery Jackpots Exist in Pre-trained Models

Overview

Lottery Jackpots Exist in Pre-trained Models (Paper Link)

Requirements

  • Python >= 3.7.4
  • PyTorch >= 1.6.1
  • Torchvision >= 0.4.1
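
To confirm that the installed versions match the requirements above, a quick check like the following can be used (this snippet is not part of the repository; it only prints version information):

    import sys
    import torch
    import torchvision

    # Compare these against the versions listed above.
    print("Python      :", sys.version.split()[0])
    print("PyTorch     :", torch.__version__)
    print("Torchvision :", torchvision.__version__)
    print("CUDA available:", torch.cuda.is_available())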

Reproduce the Experiment Results

  1. Download the pre-trained models from this link and place them in the pre-train folder.

  2. Select a configuration file in configs to reproduce the experiment results reported in the paper. For example, to find a lottery jackpot with 30 epochs for pruning 90% of the parameters of ResNet-32 on CIFAR-10, run:

    python cifar.py --config configs/resnet32_cifar10/90sparsity30epoch.yaml --gpus 0

    To find a lottery jackpot with 30 epochs for pruning 90% of the parameters of ResNet-50 on ImageNet, run:

    python imagenet.py --config configs/resnet50_imagenet/90sparsity30epoch.yaml --gpus 0

    Note that the data_path in the YAML file should be changed to your local dataset directory (see the quick check below).
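
    As a quick sanity check before launching training, the configured data_path can be printed with a short snippet like the one below (a sketch, not part of the repository; it only assumes the config is plain YAML with a data_path key, as noted above):

    import yaml  # PyYAML

    # Load the chosen config and show where it will look for the dataset.
    with open("configs/resnet32_cifar10/90sparsity30epoch.yaml") as f:
        cfg = yaml.safe_load(f)

    print(cfg.get("data_path"))  # should print your local dataset directory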

Evaluate Our Pruned Models

We provide the configurations, training logs, and pruned models reported in the paper. They can be downloaded via the links in the following table:

Model      Dataset    Sparsity  Epochs  Top-1 Acc.  Link
VGGNet-19  CIFAR-10   90%       30      93.88%      link
VGGNet-19  CIFAR-10   90%       160     93.94%      link
VGGNet-19  CIFAR-10   95%       30      93.49%      link
VGGNet-19  CIFAR-10   95%       160     93.74%      link
VGGNet-19  CIFAR-100  90%       30      72.59%      link
VGGNet-19  CIFAR-100  90%       160     74.61%      link
VGGNet-19  CIFAR-100  95%       30      71.76%      link
VGGNet-19  CIFAR-100  95%       160     73.35%      link
ResNet-32  CIFAR-10   90%       30      93.70%      link
ResNet-32  CIFAR-10   90%       160     94.39%      link
ResNet-32  CIFAR-10   95%       30      92.90%      link
ResNet-32  CIFAR-10   95%       160     93.41%      link
ResNet-32  CIFAR-100  90%       30      72.22%      link
ResNet-32  CIFAR-100  90%       160     73.43%      link
ResNet-32  CIFAR-100  95%       30      69.38%      link
ResNet-32  CIFAR-100  95%       160     70.31%      link
ResNet-50  ImageNet   80%       30      74.53%      link
ResNet-50  ImageNet   80%       60      75.26%      link
ResNet-50  ImageNet   90%       30      72.17%      link
ResNet-50  ImageNet   90%       60      72.46%      link

To test our pruned models, download them and place them in the ckpt folder. A quick sparsity check is sketched after the evaluation commands below.

  1. Select a configuration file in configs to test the pruned models. For example, to evaluate a lottery jackpot for pruning ResNet-32 on CIFAR-10, run:

    python evaluate.py --config configs/resnet32_cifar10/evaluate.yaml --gpus 0

    To evaluate a lottery jackpot for pruning ResNet-50 on ImageNet, run:

    python evaluate.py --config configs/resnet50_imagenet/evaluate.yaml --gpus 0
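
After downloading a pruned model, its overall weight sparsity can be sanity-checked against the table above. The snippet below is a minimal sketch, assuming the checkpoint is a standard PyTorch state dict saved under ckpt/ (the actual file name and save format may differ):

    import torch

    # Hypothetical file name; point this at the checkpoint you placed in the ckpt folder.
    ckpt = torch.load("ckpt/resnet32_cifar10_95sparsity.pt", map_location="cpu")
    state = ckpt["state_dict"] if "state_dict" in ckpt else ckpt

    total, zeros = 0, 0
    for name, tensor in state.items():
        # Count only floating-point tensors; skip integer buffers such as num_batches_tracked.
        if torch.is_tensor(tensor) and tensor.is_floating_point():
            total += tensor.numel()
            zeros += (tensor == 0).sum().item()

    print(f"Overall sparsity: {zeros / total:.2%}")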

Owner

Yuxin Zhang (Deep Neural Network Compression & Acceleration)