🔪 Elimination-based Lightweight Neural Net with Pretrained Weights

Overview

ELimNet

ELimNet: Eliminating Layers in a Neural Network Pretrained on a Large Dataset for Downstream Tasks

  • Removed top layers from pretrained EfficientNet B0 and ResNet18 to construct lightweight CNN models with fewer than 1M parameters (see the sketch below).
  • Evaluated on the Trash Annotations in Context (TACO) dataset, sampled to 6 classes with 20,851 images.
  • Compared performance against lightweight models generated by Optuna's Neural Architecture Search (NAS) built from the same convolutional blocks.
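
The elimination itself is simple: load a pretrained backbone, drop its top convolutional blocks, and attach a small classification head. Here is a minimal sketch of the idea using torchvision's EfficientNet B0; the slicing over `features` children, the 224×224 input, and the 6-class head are illustrative assumptions, and the slice count need not map one-to-one onto the repo's "Elim N" numbering.

import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained EfficientNet B0 backbone.
backbone = models.efficientnet_b0(weights="IMAGENET1K_V1")

# `backbone.features` is a sequence of convolutional stages; dropping the
# last N removes the widest, most parameter-heavy layers at the top.
N_ELIM = 3
features = nn.Sequential(*list(backbone.features.children())[:-N_ELIM])
features.eval()

# Probe the truncated backbone for its new output channel width.
with torch.no_grad():
    out_channels = features(torch.zeros(1, 3, 224, 224)).shape[1]

# Attach a small classification head for the 6 TACO classes.
model = nn.Sequential(
    features,
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(out_channels, 6),
)

print(sum(p.numel() for p in model.parameters()) / 1e6, "M parameters")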

Quickstart

Installation

# clone the repository
git clone https://github.com/snoop2head/elimnet

# fetch image dataset and unzip
wget -cq https://aistages-prod-server-public.s3.amazonaws.com/app/Competitions/000081/data/data.zip
unzip ./data.zip -d ./

Train

# finetune the pretrained model on the dataset
python train.py --model ./model/efficientnet_b0.yaml

# finetune the ELimNet variant on the dataset
python train.py --model ./model/efficientnet_b0_elim_3.yaml
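
What train.py runs is defined by the model YAML; as a rough, generic illustration of the finetuning loop it performs, here is a minimal sketch (the ImageFolder layout, batch size, optimizer, and learning rate are assumptions for illustration, not the repo's actual configuration):

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical data layout: the unzipped images arranged one folder per class.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("./data/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)

# Either the full pretrained model or a truncated ELimNet variant
# (see the Overview sketch); here the full backbone with a 6-class head.
model = models.efficientnet_b0(weights="IMAGENET1K_V1")
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 6)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # small LR for finetuning

model.train()
for epoch in range(100):  # the tables below report the best of 100 epochs
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()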

Inference

# run inference with the most recently trained model
python inference.py --model_dir ./exp/latest/

Performance

Performance is compared against (1) the original pretrained models and (2) models constructed by Optuna NAS without pretrained weights.

  • Indicates that pretrained CNN models with their top convolutional layers eliminated outperform "empty" Optuna NAS models generated from the same convolutional blocks.
  • Suggests that eliminating top convolutional layers yields a lightweight model with classification performance similar to (or better than) the original pretrained model.
  • Reduces the parameter count to roughly 7% (or less) of the original, e.g. EfficientNet B0 Elim 3 keeps 0.30M of 4.0M parameters, while maintaining (or improving) performance, and cuts test inference time by 20% or more by eliminating top convolutional layers.

ELimNet vs Pretrained Models (Train)

| [100 epochs] | # of Parameters | # of Layers | Train | Validation | Test F1 |
| --- | --- | --- | --- | --- | --- |
| Pretrained EfficientNet B0 | 4.0M | 352 | Loss: 0.43, Acc: 81.23%, F1: 0.84 | Loss: 0.469, Acc: 82.17%, F1: 0.76 | 0.7493 |
| EfficientNet B0 Elim 2 | 0.9M | 245 | Loss: 0.652, Acc: 87.22%, F1: 0.84 | Loss: 0.622, Acc: 87.22%, F1: 0.77 | 0.7603 |
| EfficientNet B0 Elim 3 | 0.30M | 181 | Loss: 0.602, Acc: 78.17%, F1: 0.74 | Loss: 0.661, Acc: 77.41%, F1: 0.74 | 0.7349 |
| ResNet18 | 11.17M | 69 | Loss: 0.578, Acc: 78.90%, F1: 0.76 | Loss: 0.700, Acc: 76.17%, F1: 0.719 | - |
| ResNet18 Elim 2 | 0.68M | 37 | Loss: 0.447, Acc: 83.73%, F1: 0.71 | Loss: 0.712, Acc: 75.42%, F1: 0.71 | - |

ELimNet vs Pretrained Models (Inference)

| Model | # of Parameters | # of Layers | CPU time (sec) | CUDA time (sec) | Test Inference Time (sec) |
| --- | --- | --- | --- | --- | --- |
| Pretrained EfficientNet B0 | 4.0M | 352 | 3.9 | 4.0 | 105.7 |
| EfficientNet B0 Elim 2 | 0.9M | 245 | 4.1 | 13.0 | 83.4 |
| EfficientNet B0 Elim 3 | 0.30M | 181 | 3.0 | 9.0 | 73.5 |
| ResNet18 | 11.17M | 69 | - | - | - |
| ResNet18 Elim 2 | 0.68M | 37 | - | - | - |
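
The repository does not document how these timings were collected; one plausible way to measure per-batch CPU and CUDA time is PyTorch's built-in profiler, sketched below (the batch size, 224×224 input, and choice of model are assumptions):

import torch
from torch.profiler import ProfilerActivity, profile
from torchvision import models

# Any of the models above works here; a stock EfficientNet B0 for illustration.
model = models.efficientnet_b0(weights="IMAGENET1K_V1").eval().cuda()
inputs = torch.randn(32, 3, 224, 224, device="cuda")

# Warm up so one-time CUDA initialization does not skew the measurement.
with torch.no_grad():
    for _ in range(5):
        model(inputs)

# Profile a single forward pass on both CPU and CUDA.
with torch.no_grad(), profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    model(inputs)

# Aggregate per-operator times; the totals correspond to the CPU/CUDA columns.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))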

ELimNet vs Empty Optuna NAS Models (Train)

| [100 epochs] | # of Parameters | # of Layers | Train | Validation | Test F1 |
| --- | --- | --- | --- | --- | --- |
| Empty MobileNet V3 | 4.2M | 227 | Loss: 0.925, Acc: 65.18%, F1: 0.58 | Loss: 0.993, Acc: 62.83%, F1: 0.56 | - |
| Empty EfficientNet B0 | 1.3M | 352 | Loss: 0.867, Acc: 67.28%, F1: 0.61 | Loss: 0.898, Acc: 66.80%, F1: 0.61 | 0.6337 |
| Empty DWConv & InvertedResidualv3 NAS | 0.08M | 66 | - | Loss: 0.766, Acc: 71.71%, F1: 0.68 | 0.6740 |
| Empty MBConv NAS | 0.33M | 141 | Loss: 0.786, Acc: 70.72%, F1: 0.66 | Loss: 0.866, Acc: 68.09%, F1: 0.62 | 0.6245 |
| ResNet18 Elim 2 | 0.68M | 37 | Loss: 0.447, Acc: 83.73%, F1: 0.71 | Loss: 0.712, Acc: 75.42%, F1: 0.71 | - |
| EfficientNet B0 Elim 3 | 0.30M | 181 | Loss: 0.602, Acc: 78.17%, F1: 0.74 | Loss: 0.661, Acc: 77.41%, F1: 0.74 | 0.7349 |

ELimNet vs Empty Optuna NAS Models (Inference)

| Model | # of Parameters | # of Layers | CPU time (sec) | CUDA time (sec) | Test Inference Time (sec) |
| --- | --- | --- | --- | --- | --- |
| Empty MobileNet V3 | 4.2M | 227 | 4 | 13 | - |
| Empty EfficientNet B0 | 1.3M | 352 | 3.780 | 3.782 | 68.4 |
| Empty DWConv & InvertedResidualv3 NAS | 0.08M | 66 | 1 | 3.5 | 61.1 |
| Empty MBConv NAS | 0.33M | 141 | 2.14 | 7.201 | 67.1 |
| ResNet18 Elim 2 | 0.68M | 37 | - | - | - |
| EfficientNet B0 Elim 3 | 0.30M | 181 | 3.0 | 9.0 | 73.5 |

Work in Progress

  • Will test the effect of replacing pretrained convolutional blocks with a single convolutional layer without pretrained weights.
  • Will add ResNet18 inference-time data and compare it against Optuna's NAS-constructed lightweight models.
  • Will test elimination-based lightweight architecture search on torchvision's pretrained MobileNetV3 and MnasNet.
  • Will apply the approach to other small datasets such as Fashion MNIST and PlantVillage.

Others

  • "Empty" stands for model with no pretrained weights.
  • "EfficientNet B0 Elim 2" means 2 convolutional blocks have been eliminated from pretrained EfficientNet B0. Number next to "Elim" annotates how many convolutional blocks have been removed.
  • Table's performance illustrates best performance out of 100 epochs of finetuning on TACO Dataset.

Authors

snoop2head