PyTorch source code for Distilling Knowledge by Mimicking Features

Overview

LSHFM.detection

This is the PyTorch source code for Distilling Knowledge by Mimicking Features. This repository contains the code for object detection with feature mimicking. For image classification, please visit LSHFM.classification.

Dependencies

  • Python
  • PyTorch 1.7.1
  • torchvision 0.8.2

Prepare the dataset

Please prepare the COCO and VOC datasets yourself. Then modify the get_data_path function in src/dataset/coco_utils.py and src/dataset/voc_utils.py so that it points to your local dataset paths.
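
For example, a minimal sketch of the edit, assuming get_data_path simply returns the dataset root as a string (the real signatures in src/dataset/coco_utils.py and src/dataset/voc_utils.py may differ):

# src/dataset/voc_utils.py -- sketch only; adapt to the actual function signature
def get_data_path():
    # Return the directory that contains VOCdevkit/VOC2007 and VOCdevkit/VOC2012.
    return "/path/to/VOCdevkit"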

Run

You can run the experiments with:

PORT=4444 bash experiments/[script name].sh 0,1,2,3 
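
For example, assuming a script named experiments/fasterrcnn_r50_lshl2.sh exists (the name is hypothetical; use any script under experiments/), the command below presumably runs it on GPUs 0,1,2,3 with 4444 as the master port for distributed training:

PORT=4444 bash experiments/fasterrcnn_r50_lshl2.sh 0,1,2,3   # hypothetical script name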

The training set consists of VOC2007 trainval and VOC2012 trainval, and the test set is VOC2007 test.

We train all models for 24 epochs, and the learning rate decays at the 18th and 22nd epochs.
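
In PyTorch this schedule corresponds to a step decay; a minimal sketch using MultiStepLR, assuming a decay factor of 0.1 (the optimizer settings and the exact factor are defined in the training scripts and may differ):

import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import MultiStepLR

model = torch.nn.Conv2d(3, 64, 3)                # stand-in module; the real model is the detector
optimizer = SGD(model.parameters(), lr=0.01)     # lr=0.01 is an assumption
scheduler = MultiStepLR(optimizer, milestones=[18, 22], gamma=0.1)  # gamma=0.1 is an assumption

for epoch in range(24):                          # 24 training epochs
    # train_one_epoch(model, optimizer, ...)     # placeholder for the actual training loop
    scheduler.step()                             # decays the learning rate after epochs 18 and 22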

Faster R-CNN

Before you run the KD experiments, please make sure the teacher model weights have been saved in the pretrained directory. You can first run the ResNet101 baseline and VGG16 baseline to train the teacher models, then move the weights to pretrained and edit --teacher-ckpt in the training shell scripts. You can also download voc0712_fasterrcnn_r101_83.6 and voc0712_fasterrcnn_vgg16fpn_79.0 directly and move them to pretrained.
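
For example, a hypothetical edit inside one of the KD shell scripts, assuming the downloaded ResNet101 teacher weight is saved as pretrained/voc0712_fasterrcnn_r101_83.6.pth (the actual filename and extension may differ):

--teacher-ckpt pretrained/voc0712_fasterrcnn_r101_83.6.pth   # hypothetical filename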

          mAP (R101 teacher)    mAP (VGG16-FPN teacher)
Teacher   83.6                  79.0
Student   82.0                  75.1
L2        83.0                  76.8
LSH       82.6                  76.7
LSHL2     83.0                  77.2

RetinaNet

As mentioned in the Faster R-CNN section, please make sure the teacher models are in the pretrained directory. You can download the teacher models from voc0712_retinanet_r101_83.0.ckpt and voc0712_retinanet_vgg16fpn_76.6.ckpt.

          mAP (R101 teacher)    mAP (VGG16-FPN teacher)
Teacher   83.0                  76.6
Student   82.5                  73.2
L2        82.6                  74.8
LSHL2     83.0                  75.2

We find that training with LSH KD easily produces NaN losses.

Visualize

Visualize the ground-truth labels:

python src/visual.py --dataset voc07 --idx 1 --gt

Visualize the model predictions:

python src/visual.py --dataset voc07 --idx 2 --model fasterrcnn_resnet50_fpn --checkpoint results/voc0712/fasterrcnn_resnet50_fpn/2020-12-11_20\:14\:09/model_13.pth

Citing this repository

If you find this code useful in your research, please consider citing us:

@article{LSHFM,
  title={Distilling knowledge by mimicking features},
  author={Wang, Guo-Hua and Ge, Yifan and Wu, Jianxin},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2021},
}

Acknowledgement

This project is based on https://github.com/pytorch/vision/tree/master/references/detection. Since this project focuses on object detection, the code for segmentation and keypoint detection has been removed.
