CV backbones including GhostNet, TinyNet and TNT, developed by Huawei Noah's Ark Lab.

Overview

CV Backbones

This repo includes GhostNet, TinyNet, and TNT (Transformer in Transformer), developed by Huawei Noah's Ark Lab.

News

2022/01/05 PyramidTNT: An improved TNT baseline is released.

2021/09/28 The paper of TNT (Transformer in Transformer) is accepted by NeurIPS 2021.

2021/09/18 The extended version of Versatile Filters is accepted by T-PAMI.

2021/08/30 The GhostNet paper is selected as one of the Most Influential CVPR 2020 Papers.

2021/08/26 The code of LegoNet and Versatile Filters has been merged into this repo.

2021/06/15 The code of TNT (Transformer in Transformer) has been released in this repo.

2020/10/31 GhostNet+TinyNet achieves better performance. See details in our NeurIPS 2020 paper on arXiv.

2020/06/10 GhostNet is included in PyTorch Hub.


GhostNet Code

This repo provides GhostNet pretrained models and inference code for TensorFlow and PyTorch.

For training, please refer to tinynet or timm.
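
As a quick start, GhostNet can also be loaded from PyTorch Hub (see the 2020/06/10 news item). A minimal sketch, assuming the hub entry point is named ghostnet_1x (verify the name on the hub page):

import torch

# Load pretrained GhostNet from PyTorch Hub (entry-point name assumed).
model = torch.hub.load('huawei-noah/ghostnet', 'ghostnet_1x', pretrained=True)
model.eval()

# Run inference on a dummy 224x224 RGB image.
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # expected: torch.Size([1, 1000]) for ImageNet classes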

TinyNet Code

This repo provides TinyNet pretrained models and inference code for PyTorch.
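
Since training is delegated to timm, inference can also go through timm's model registry. A minimal sketch, assuming TinyNet is registered there under names like tinynet_a through tinynet_e (check your timm version):

import timm
import torch

# Create a pretrained TinyNet from timm (registry name assumed).
model = timm.create_model('tinynet_a', pretrained=True)
model.eval()

# TinyNet variants run at reduced input resolutions; 192x192 is assumed
# here for tinynet_a -- consult the model's default config for the exact
# resolution of each variant.
with torch.no_grad():
    out = model(torch.randn(1, 3, 192, 192))
print(out.shape)  # expected: torch.Size([1, 1000])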

TNT Code

This repo provides training code and pretrained models of TNT (Transformer in Transformer) for PyTorch.

The code of PyramidTNT is also released.
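
TNT pretrained weights can likewise be pulled through timm. A minimal sketch, assuming the registry name tnt_s_patch16_224 (verify against your installed timm):

import timm
import torch

# Create a pretrained TNT-S from timm (registry name assumed).
model = timm.create_model('tnt_s_patch16_224', pretrained=True)
model.eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.argmax(dim=1))  # predicted ImageNet class index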

LegoNet Code

This repo provides the implementation of the paper LegoNet: Efficient Convolutional Neural Networks with Lego Filters (ICML 2019).

Versatile Filters Code

This repo provides the implementation of the paper Learning Versatile Filters for Efficient Convolutional Neural Networks (NeurIPS 2018).

Citation

@inproceedings{ghostnet,
  title={GhostNet: More Features from Cheap Operations},
  author={Han, Kai and Wang, Yunhe and Tian, Qi and Guo, Jianyuan and Xu, Chunjing and Xu, Chang},
  booktitle={CVPR},
  year={2020}
}
@inproceedings{tinynet,
  title={Model Rubik’s Cube: Twisting Resolution, Depth and Width for TinyNets},
  author={Han, Kai and Wang, Yunhe and Zhang, Qiulin and Zhang, Wei and Xu, Chunjing and Zhang, Tong},
  booktitle={NeurIPS},
  year={2020}
}
@inproceedings{tnt,
  title={Transformer in transformer},
  author={Han, Kai and Xiao, An and Wu, Enhua and Guo, Jianyuan and Xu, Chunjing and Wang, Yunhe},
  booktitle={NeurIPS},
  year={2021}
}
@inproceedings{legonet,
  title={LegoNet: Efficient Convolutional Neural Networks with Lego Filters},
  author={Yang, Zhaohui and Wang, Yunhe and Liu, Chuanjian and Chen, Hanting and Xu, Chunjing and Shi, Boxin and Xu, Chao and Xu, Chang},
  booktitle={ICML},
  year={2019}
}
@inproceedings{wang2018learning,
  title={Learning Versatile Filters for Efficient Convolutional Neural Networks},
  author={Wang, Yunhe and Xu, Chang and Xu, Chunjing and Xu, Chao and Tao, Dacheng},
  booktitle={NeurIPS},
  year={2018}
}

Other versions of GhostNet

This repo provides the TensorFlow/PyTorch code of GhostNet. Other versions and applications can be found in the following:

  1. timm: code with pretrained model
  2. Darknet: cfg file, and description
  3. Gluon/Keras/Chainer: code
  4. Paddle: code
  5. Bolt inference framework: benchmark
  6. Human pose estimation: code
  7. YOLO with GhostNet backbone: code
  8. Face recognition: cavaface, FaceX-Zoo, TFace
Comments
  • TypeError: __init__() got an unexpected keyword argument 'bn_tf'

    Hello, what causes the following error when running train.py? Thank you. "TypeError: __init__() got an unexpected keyword argument 'bn_tf'"

    opened by ModeSky 16
  • Counting ReLU vs HardSwish FLOPs

    Thank you very much for sharing the source code. I have a question about FLOPs counting for ReLU and HardSwish: the paper reports the same FLOPs for both. Can you explain this?

    opened by jahongir7174 10
  • kernel size in primary convolution of Ghost module

    Hi, your paper says the primary convolution in the Ghost module can have a customized kernel size, which is a major difference from existing efficient convolution schemes. However, in this code all kernel sizes of the primary convolution in the Ghost module seem to be set to [1, 1], and the kernels set in _CONV_DEFS_0 are only used in blocks with stride=2. Is this intentional? (A minimal Ghost module sketch appears after the comments list.)

    opened by YUHAN666 9
  • Replacing Conv2d with GhostModule: why does the loss decrease so slowly?

    I directly replaced the Conv2d inside EfficientNet's MBConvBlock with GhostModule: Conv2d(in_channels=inp, out_channels=oup, kernel_size=1, bias=False) was replaced with GhostModule(inp, oup), with all other parameters unchanged. Why does the loss now converge more slowly than before and never come down? Do I need to change any other parameters?

    opened by yc-cui 8
  • Training hyperparams on ImageNet

    Hi, thanks for sharing such wonderful work. I'd like to reproduce your results on ImageNet; could you please specify training parameters such as the initial learning rate, its decay schedule, batch size, etc.? It would be even better if you could share tricks for training GhostNet, such as label smoothing and data augmentation. Thanks!

    good first issue 
    opened by sean-zhuh 8
  • Why did you exclude EfficientNetB0 from Accuracy-Latency chart?

    @iamhankai Hi,

    Great work!

    1. Why did you exclude EfficientNetB0 (0.390 BFLOPs, 76.3% Top-1) from the Accuracy-Latency chart?

    2. Also, what mini_batch_size did you use for training GhostNet?

    opened by AlexeyAB 8
  • VIG pretrained weights

    @huawei-noah-admin Can you please share the ViG pretrained model on Google Drive or OneDrive, as Baidu is not accessible from our end?

    Thanks in advance.

    opened by abhigoku10 7
  • The implementation of Isotropic architecture

    Hi, thanks for sharing this impressive work. The paper mentions two architectures, an isotropic one and a pyramid one. I noticed that in the code there is a reduce_ratios list, used by an avg_pooling operation computed before building the graph. I am wondering whether all I need to do is set reduce_ratios to [1, 1, 1, 1] if I want the isotropic architecture. Thanks.

    self.n_blocks = sum(blocks)
    channels = opt.channels
    reduce_ratios = [4, 2, 1, 1]
    dpr = [x.item() for x in torch.linspace(0, drop_path, self.n_blocks)]
    num_knn = [int(x.item()) for x in torch.linspace(k, k, self.n_blocks)]

    opened by buptxiaofeng 6
  • Gradient overflow occurs while training tnt-ti model

    Train: 41 [   0/625 (  0%)]  Loss: 4.564162 (4.5642)  Time: 96.744s, 21.17/s (96.744s, 21.17/s)  LR: 8.284e-04  Data: 94.025 (94.025)
    Train: 41 [  50/625 (  8%)]  Loss: 4.395192 (4.4797)  Time: 2.742s, 746.96/s (7.383s, 277.38/s)  LR: 8.284e-04  Data: 0.057 (4.683)
    Train: 41 [ 100/625 ( 16%)]  Loss: 4.424296 (4.4612)  Time: 2.741s, 747.15/s (6.529s, 313.66/s)  LR: 8.284e-04  Data: 0.056 (3.831)
    Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 16384.0
    Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 16384.0
    Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 16384.0
    Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 16384.0

    And the top-1 acc is only 0.2 after 40 epochs.

    Any tips available here, @iamhankai @yitongh?

    opened by jimmyflycv 6
  • Bloated model

    Hi, I am using a GhostNet backbone to train a YOLOv3 model in TensorFlow, but I am getting a bloated model. The checkpoint data size is approx. 68 MB, but the checkpoint given here is approx. 20 MB: https://github.com/huawei-noah/ghostnet/blob/master/tensorflow/models/ghostnet_checkpoint.data-00000-of-00001

    I am also training an EfficientNet model with YOLOv3, and that works fine, without any size bloat.

    Could anyone, or the author, please confirm whether this is the correct architecture or whether anything looks off? I have attached the GhostNet architecture file produced by the code.

    Thanks. ghostnet_model_arch.txt

    opened by ghost 6
  • Replace Conv2d in my network, however it becomes slower, why?

    First of all, thanks for your great work! It really inspires me a lot. But now I have a question.

    I replaced all the Conv2d operations in my network except the final ones, and the number of parameters really does become much smaller. However, when testing, I found that the average forward speed drops considerably (from 428 FPS down to 354 FPS). Is this normal? Or is it because of the concat operation?

    opened by FunkyKoki 6
  • ViG for segmentation

    @iamhankai Thanks for open-sourcing the codebase. Can you please let me know how to use pvig for segmentation-related tasks? It would be really helpful.

    Thanks in advance.

    opened by abhigoku10 0
  • higher performance of ViG

    I trained ViG-S on ImageNet and got 80.54% top-1 accuracy, which is higher than the 80.4% reported in the paper. Is 80.4 the average of multiple training runs? If so, how many runs did you use?

    opened by tdzdog 9
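
Several comments above ask about the Ghost module's internals (the kernel size of the primary convolution, and swapping Conv2d for GhostModule). Below is a minimal sketch of the idea, not the repo's exact implementation: a primary convolution produces a few intrinsic feature maps, and a cheap depthwise convolution generates the remaining "ghost" maps; the two sets are concatenated. Hyperparameter names (ratio, dw_size) are assumptions.

import torch
import torch.nn as nn

class GhostModuleSketch(nn.Module):
    # Illustrative Ghost module; argument names are assumptions, not
    # necessarily the repo's exact API.
    def __init__(self, inp, oup, kernel_size=1, ratio=2, dw_size=3):
        super().__init__()
        self.oup = oup
        init_channels = (oup + ratio - 1) // ratio   # intrinsic maps
        new_channels = init_channels * (ratio - 1)   # ghost maps
        # Primary convolution: the kernel size is configurable
        # (1x1 by default, as discussed in the comments above).
        self.primary_conv = nn.Sequential(
            nn.Conv2d(inp, init_channels, kernel_size,
                      padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(init_channels),
            nn.ReLU(inplace=True),
        )
        # Cheap operation: a depthwise convolution over the intrinsic maps.
        self.cheap_operation = nn.Sequential(
            nn.Conv2d(init_channels, new_channels, dw_size,
                      padding=dw_size // 2, groups=init_channels, bias=False),
            nn.BatchNorm2d(new_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        x1 = self.primary_conv(x)
        x2 = self.cheap_operation(x1)
        # Concatenate intrinsic + ghost maps and trim to the requested width.
        return torch.cat([x1, x2], dim=1)[:, :self.oup]

Because the depthwise cheap operation replaces most of the dense channel mixing, both the parameter count and the FLOPs drop roughly by the ratio factor relative to a plain Conv2d of the same output width.
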
Releases: GhostNetV2
Owner
HUAWEI Noah's Ark Lab
Working with and contributing to the open source community in data mining, artificial intelligence, and related fields.