A library for researching neural networks compression and acceleration methods.

Overview

Model Compression Research Package

This package was developed to enable scalable, reusable and reproducible research of weight pruning, quantization and distillation methods.

Installation

To install the library, clone the repository and install it using pip:

git clone https://github.com/IntelLabs/Model-Compression-Research-Package
cd Model-Compression-Research-Package
pip install [-e] .

Add the -e flag to install an editable version of the library.

Quick Tour

This package contains implementations of several weight pruning methods, knowledge distillation and quantization-aware training. Here we show how to easily use these implementations with your existing model implementation and training loop. It is also possible to combine several methods in the same training process; please refer to the package's examples.

Weight Pruning

Weight pruning is a method to induce zeros in a model's weights during training. There are several methods to prune a model, and it is a widely explored research field.

To list the existing weight pruning implementations in the package, use model_compression_research.list_methods(). For example, applying unstructured magnitude pruning while training your model can be done with a few lines of code:

from model_compression_research import IterativePruningConfig, IterativePruningScheduler

training_args = get_training_args()
model = get_model()
dataloader = get_dataloader()
criterion = get_criterion()

# Initialize a pruning configuration and a scheduler and apply it on the model
pruning_config = IterativePruningConfig(
    pruning_fn="unstructured_magnitude",
    pruning_fn_default_kwargs={"target_sparsity": 0.9}
)
pruning_scheduler = IterativePruningScheduler(model, pruning_config)

# Initialize optimizer after initializing the pruning scheduler
optimizer = get_optimizer()

# Training loop
for e in range(training_args.epochs):
    for batch in dataloader:
        inputs, labels = batch
        model.train()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        # Call pruning scheduler step
        pruning_scheduler.step()
        optimizer.zero_grad()

# At the end of training, remove the pruning parts and get the resulting pruned model
pruning_scheduler.remove_pruning()

For using weight pruning with the dedicated HuggingFace/transformers Trainer, see the implementation of HFTrainerPruningCallback in api_utils.py.
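
The snippet below is a minimal sketch of how such a callback might be plugged into a HuggingFace Trainer. The constructor arguments of HFTrainerPruningCallback and the helpers get_model() and get_train_dataset() are assumptions for illustration; check api_utils.py and the package examples for the exact API.

from transformers import Trainer, TrainingArguments
from model_compression_research import IterativePruningConfig, HFTrainerPruningCallback

model = get_model()
train_dataset = get_train_dataset()

# Same pruning configuration as in the example above
pruning_config = IterativePruningConfig(
    pruning_fn="unstructured_magnitude",
    pruning_fn_default_kwargs={"target_sparsity": 0.9}
)
# Assumed usage: the callback applies and steps the pruning scheduler
# around the Trainer's optimization steps
pruning_callback = HFTrainerPruningCallback(pruning_config)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out"),
    train_dataset=train_dataset,
    callbacks=[pruning_callback],
)
trainer.train()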

Knowledge Distillation

Model distillation is a method to distill the knowledge learned by a teacher into a smaller student model. One way to do this is to compute the difference between the student's and teacher's output distributions using KL divergence. This package provides a simple implementation that does just that.
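
For intuition, here is a minimal sketch of such a temperature-scaled KL term in plain PyTorch (an illustration of the idea, not the package's internal code):

import torch.nn.functional as F

def kl_distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions with the temperature, then measure how far the
    # student is from the teacher with KL divergence. The T^2 factor keeps
    # gradient magnitudes comparable across temperatures.
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature ** 2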

Assuming that your teacher and student models' outputs are of the same dimension, you can use the implementation in this package as follows:

from model_compression_research import TeacherWrapper, DistillationModelWrapper

training_args = get_training_args()
teacher = get_teacher_trained_model()
student = get_student_model()
dataloader = get_dataloader()
criterion = get_criterion()

# Wrap teacher model with TeacherWrapper and set loss scaling factor and temperature
teacher = TeacherWrapper(teacher, ce_alpha=0.5, ce_temperature=2.0)
# Initialize the distillation model with the student and teacher
distillation_model = DistillationModelWrapper(student, teacher, alpha_student=0.5)

optimizer = get_optimizer()

# Training loop
for e in range(training_args.epochs):
    for batch in dataloader:
        inputs, labels = batch
        distillation_model.train()
        # Calculate student loss w.r.t labels as you usually do
        student_outputs = distillation_model(inputs)
        loss_wrt_labels = criterion(student_outputs, labels)
        # Add knowledge distillation term
        loss = distillation_model.compute_loss(loss_wrt_labels, student_outputs)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

For using knowledge distillation with HuggingFace/transformers, see the implementations of HFTeacherWrapper and hf_add_teacher_to_student in api_utils.py.
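
A hypothetical sketch of attaching a teacher to a HuggingFace student model follows; the exact signature and keyword arguments of hf_add_teacher_to_student are assumptions here, so check api_utils.py for the real API:

from transformers import AutoModelForSequenceClassification
from model_compression_research import hf_add_teacher_to_student

student = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
teacher = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

# Assumed behavior: the teacher is wrapped (e.g. with HFTeacherWrapper) and the
# distillation loss is added to the student's loss during Trainer training
student = hf_add_teacher_to_student(student, teacher)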

Quantization-Aware Training

Quantization-Aware Training is a method for training models that will later be quantized for inference, as opposed to post-training quantization methods, where models are trained without any adaptation to the error caused by quantization.
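
To illustrate the idea, here is a generic fake-quantization sketch in plain PyTorch: values are quantized and immediately dequantized during training so the network learns to tolerate the rounding error. This is only an illustration of the concept, not the package's implementation:

import torch

def fake_quantize(x, num_bits=8):
    # Simulate asymmetric uint8 quantization: map the tensor range to integers,
    # round, then map back so downstream layers see the quantization error.
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()).clamp(min=1e-8) / (qmax - qmin)
    zero_point = qmin - torch.round(x.min() / scale)
    q = torch.clamp(torch.round(x / scale + zero_point), qmin, qmax)
    return (q - zero_point) * scale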

A quantization-aware training method similar to the one introduced in Q8BERT: Quantized 8Bit BERT, generalized to custom models, is implemented in this package:

from model_compression_research import QuantizerConfig, convert_model_for_qat

training_args = get_training_args()
model = get_model()
dataloader = get_dataloader()
criterion = get_criterion()

# Initialize quantizer configuration
qat_config = QuantizerConfig()
# Convert model to quantization-aware training model
qat_model = convert_model_for_qat(model, qat_config)

optimizer = get_optimizer()

# Training loop
for e in range(training_args.epochs):
    for batch in dataloader:
        inputs, labels = batch
        qat_model.train()
        outputs = qat_model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

Papers Implemented in Model Compression Research Package

Methods from the following papers were implemented in this package and are ready for use:

Citation

If you want to cite our paper and library, you can use the following:

@article{zafrir2021prune,
  title={Prune Once for All: Sparse Pre-Trained Language Models},
  author={Zafrir, Ofir and Larey, Ariel and Boudoukh, Guy and Shen, Haihao and Wasserblat, Moshe},
  journal={arXiv preprint arXiv:2111.05754},
  year={2021}
}
@software{zafrir_ofir_2021_5721732,
  author       = {Zafrir, Ofir},
  title        = {Model-Compression-Research-Package by Intel Labs},
  month        = nov,
  year         = 2021,
  publisher    = {Zenodo},
  version      = {v0.1.0},
  doi          = {10.5281/zenodo.5721732},
  url          = {https://doi.org/10.5281/zenodo.5721732}
}
Comments
  • Uniform magnitude pruning implementation problem

    Hello, when the uniform magnitude pruning method is set to "pruning_fn_default_kwargs": { "block_size": 8, "target_sparsity": 0.85 }, the model ends up retaining the parameter 0.75. Why is that?

    opened by LYF915 13
  • Difference between end_pruning_step and policy_end_step

    Hi, Could you please clarify the difference between end_pruning_step and policy_end_step in the pruning config file (for example: https://github.com/IntelLabs/Model-Compression-Research-Package/blob/main/examples/transformers/language-modeling/config/iterative_unstructured_magnitude_90_config.json)?

    opened by eldarkurtic 6
  • Issue of max_seq_length in MLM pretraining data preprocessing

    Hi, I find that in the functions segment_pair_nsp_process and doc_sentences_process in examples/transformers/language-modeling/dataset_processing.py, the sequence length of the processed data is actually max_seq_length - tokenizer.num_special_tokens_to_add(pair=False), since max_seq_length is replaced by this value before being passed to tokenizer.prepare_for_model. For example, if the user sets max_seq_length=128, the processed data will have a sequence length of 125. Is this the standard way of preprocessing pretraining data?

    opened by XinyuYe-Intel 5
  • How to save QAT quantized model?

    Hi, thank you for your model compression package. I am a little confused about how to save QAT quantized model. Do you have an official website or documentation for this package?

    opened by OctoberKat 4
  • LR scheduler clarification

    Hi, running the Language Modelling example (https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/examples/transformers/language-modeling) ends with a slightly different LR schedule compared to the one presented in Figure 2.b of the "Prune Once For All" paper (in particular, the warmup phase seems to be a bit different).

    train/learning_rate logged by Weights&Biases: [screenshot]

    Learning rate in the paper, Figure 2.b: [screenshot]

    opened by eldarkurtic 4
  • Sparse models available for download?

    Hello :-)

    I found your Prune-Once-For-All paper very interesting and would like to play with the sparse models that it produced. Are you going to open-source them soon?

    I have noticed you have open-sourced the sparse-pretrained models, but I couldn't find the corresponding models finetuned on downstream tasks (SQuAD, MNLI, QQP, etc.).

    opened by eldarkurtic 2
  • How to interpret hyperparams?

    Hi, I have a few questions about hyperparams in the Table 6:

    1. Since there are three models: {BERT-Base, BERT-Large, DistilBERT}, how to interpret learning rate for SQuAD with only two values: {1.5e-4, 1.8e-4}?
    2. I assume that for GLUE {1e-4, 1.2e-4, 1.5e-5} are learning rate values for each model respectively. Is this correct?
    3. Since weight decay row has only two values {0, 0.01}, I assume 0 is for all models on SQuAD and 0.01 is for all models on GLUE?
    4. Since warmup ratio row has three values {0, 0.01, 0.1}, I assume these are for each model respectively, no matter which dataset is used?
    5. Does "Epochs {3, 6, 9}" for GLUE mean BERT-base tuned for 3 epochs, BERT-Large for 6 and DistilBERT for 9 epochs?
    opened by eldarkurtic 2
  • Upstream pruning

    Hi! First of all, thanks for open-sourcing your code for the "Prune Once for All" paper. I would like to ask a few questions:

    1. Are you planning to release your teacher model for upstream task? I have noticed that at https://huggingface.co/Intel , only the sparse checkpoints have been released. I would like to run some experiments with your compression package.
    2. From the published scripts, I have noticed that you have been using only the English Wikipedia dataset for pruning at upstream tasks (MLM and NSP), but the bert-base-uncased model you use as a starting point is pre-trained on BookCorpus and English Wikipedia. Is there any specific reason why you haven't included the BookCorpus dataset as well?
    opened by eldarkurtic 1
  • Code analysis identified several places where objects were either not declared or were declared as None, which could result in an unsupported operation error from Python.

    Change descriptions:

    • added forward declarations of 4 variables in both modeling_bert and modeling_roberta
    • removed assignment of all_hidden_states to None if output_hidden_states is None
    • removed assignment of all_attentions to None if output_attentions is None
    • removed assignment of all_self_attentions to None if output_attentions is None
    • removed assignment of all_cross_attentions to None if output_attentions is None
    opened by michaelbeale-IL 0
  • Fix distillation of different HF/transformers models

    Until now, if the teacher had a different signature than the student, transformers.Trainer would delete any input not matching the student's signature, so the teacher would not receive all the input it needs.

    For example, training a DistilBERT student with a BERT-Base teacher will not work properly, since BERT-Base requires token_type_ids, which DistilBERT does not. The trainer deletes token_type_ids from the input, so the BERT teacher receives all-zero token type ids, leading to wrong predictions.

    This PR fixes this issue.

    opened by ofirzaf 0
  • Small optimizations

    • Implement fast threshold computation: select the best threshold computation for the target hardware (CPU/CUDA) and add a fast histogram-based implementation
    • Refactor block pruning computation: move the computation to utils and reuse it in the rest of the pruning methods
    opened by ofirzaf 0
Releases(v0.1.0)
  • v0.1.0(Nov 23, 2021)

    First release of Intel Labs' Model Compression Research Package. The current version includes model compression methods from previously published papers as well as implementations from our own research:

    • Pruning, quantization and knowledge distillation methods and schedulers that may fit various PyTorch models out-of-the-box
    • Integration with the HuggingFace/transformers library for most of the available methods
    • Various examples showing how to use the library
    • Prune Once for All: Sparse Pre-Trained Language Models reproduction guide and scripts
Owner
Intel Labs