PyTorch implementations of a large number of classical backbone CNNs, data augmentation methods, losses, attention modules, visualization tools, and common algorithms.

Overview

Torch-template-for-deep-learning

PyTorch implementations of some **classical backbone CNNs, data augmentation methods, losses, attention modules, visualization tools, and common algorithms**.

Requirements

- torch, torchvision

- torchsummary

- other standard dependencies as needed

Usage

A training script is provided in `train_baseline.py`; the training arguments are defined in `args.py`.

autoaug: Data augmentation and CNN regularization

- Stochastic Depth (StochDepth)
- Label smoothing
- Cutout
- DropBlock
- Mixup
- Manifold Mixup
- ShakeDrop
- CutMix
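
Most of these techniques plug into an ordinary training loop with only a few extra lines. As a quick illustration, a generic Mixup step (a minimal sketch, not necessarily the exact implementation used in this repo) looks like this:

```python
import numpy as np
import torch

def mixup_batch(x, y, alpha=1.0):
    """Mix a batch of inputs; return the mixed inputs, both label sets, and the mixing weight."""
    lam = np.random.beta(alpha, alpha) if alpha > 0 else 1.0
    index = torch.randperm(x.size(0), device=x.device)   # random pairing within the batch
    mixed_x = lam * x + (1.0 - lam) * x[index]
    return mixed_x, y, y[index], lam

# In the training loop:
#   mixed_x, y_a, y_b, lam = mixup_batch(images, targets)
#   outputs = model(mixed_x)
#   loss = lam * criterion(outputs, y_a) + (1 - lam) * criterion(outputs, y_b)
```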

dataset_loader: Loaders for various datasets

# Custom dataset loaders shipped with this repo
from dataloder.scoliosis_dataloder import ScoliosisDataset
from dataloder.facial_attraction_dataloder import FacialAttractionDataset
from dataloder.fa_and_sco_dataloder import ScoandFaDataset
from dataloder.scofaNshot_dataloder import ScoandFaNshotDataset
from dataloder.age_dataloder import MegaAsiaAgeDataset
import torchvision

def load_dataset(data_config):
    # training_transforms/validation_transforms and the AutoAugment helpers
    # (Augmentation, fa_reduced_cifar10) are defined elsewhere in the repo.
    if data_config.dataset == 'cifar10':
        training_transform = training_transforms()
        if data_config.autoaug:
            print('Applying AutoAugment to the training data!')
            training_transform.transforms.insert(0, Augmentation(fa_reduced_cifar10()))
        train_dataset = torchvision.datasets.CIFAR10(root=data_config.data_path,
                                                     train=True,
                                                     transform=training_transform,
                                                     download=True)
        val_dataset = torchvision.datasets.CIFAR10(root=data_config.data_path,
                                                   train=False,
                                                   transform=validation_transforms(),
                                                   download=True)
        return train_dataset, val_dataset
    elif data_config.dataset == 'cifar100':
        train_dataset = torchvision.datasets.CIFAR100(root=data_config.data_path,
                                                      train=True,
                                                      transform=training_transforms(),
                                                      download=True)
        val_dataset = torchvision.datasets.CIFAR100(root=data_config.data_path,
                                                    train=False,
                                                    transform=validation_transforms(),
                                                    download=True)
        return train_dataset, val_dataset
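
The returned datasets can then be wrapped in standard `DataLoader`s. The config object below is a hypothetical stand-in for the one built in `args.py`:

```python
from types import SimpleNamespace
from torch.utils.data import DataLoader

# Hypothetical config; the real fields and defaults come from args.py.
data_config = SimpleNamespace(dataset='cifar10', data_path='./data', autoaug=False)

train_dataset, val_dataset = load_dataset(data_config)
train_loader = DataLoader(train_dataset, batch_size=128, shuffle=True, num_workers=4)
val_loader = DataLoader(val_dataset, batch_size=128, shuffle=False, num_workers=4)
```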

deployment: Deployment modes for PyTorch models

Two deployment modes are provided: web deployment and C++ deployment.

- Store the trained weight file in the `flash_deployment` folder.
- Modify the weight path in `server.py`.
- Run `client.py` to call the service.
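
For the web route, the service boils down to loading the weights once and exposing an inference endpoint. The snippet below is a minimal Flask-style sketch with hypothetical paths and preprocessing; the actual `server.py` and `client.py` in this repo may differ:

```python
import io

import torch
from flask import Flask, jsonify, request
from PIL import Image
from torchvision import transforms

app = Flask(__name__)
# Assumes the whole model object was saved with torch.save(model, ...); the path is hypothetical.
model = torch.load('flash_deployment/model.pth', map_location='cpu')
model.eval()

preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

@app.route('/predict', methods=['POST'])
def predict():
    image = Image.open(io.BytesIO(request.data)).convert('RGB')   # raw image bytes in the body
    with torch.no_grad():
        logits = model(preprocess(image).unsqueeze(0))
    return jsonify({'class_id': int(logits.argmax(dim=1))})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```

The client side then only needs something like `requests.post('http://localhost:5000/predict', data=open('test.jpg', 'rb').read())`.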

models: Various classical deep learning models

Classical network
- **AlexNet**
- **VGG**
- **ResNet** 
- **ResNext** 
- **InceptionV1**
- **InceptionV2 and InceptionV3**
- **InceptionV4 and Inception-ResNet**
- **GoogLeNet**
- **EfficientNet**
- **MNasNet**
- **DPN**
Attention network (a minimal SE block sketch is given after the full model list below)
- **SE Attention**
- **External Attention**
- **Self Attention**
- **SK Attention**
- **CBAM Attention**
- **BAM Attention**
- **ECA Attention**
- **DANet Attention**
- **Pyramid Split Attention(PSA)**
- **EMSA Attention**
- **A2Attention**
- **Non-Local Attention**
- **CoAtNet**
- **CoordAttention**
- **HaloAttention**
- **MobileViTAttention**
- **MUSEAttention**  
- **OutlookAttention**
- **ParNetAttention**
- **ParallelPolarizedSelfAttention**
- **residual_attention**
- **S2Attention**
- **SpatialGroupEnhance Attention**
- **ShuffleAttention**
- **GFNet Attention**
- **TripletAttention**
- **UFOAttention**
- **VIPAttention**
Lightweight network
- **MobileNets**
- **MobileNetV2**
- **MobileNetV3**
- **ShuffleNet**
- **ShuffleNet V2**
- **SqueezeNet**
- **Xception**
- **MixNet**
- **GhostNet**
GAN
- **Auxiliary Classifier GAN**
- **Adversarial Autoencoder**
- **BEGAN**
- **BicycleGAN**
- **Boundary-Seeking GAN**
- **Cluster GAN**
- **Conditional GAN**
- **Context-Conditional GAN**
- **Context Encoder**
- **Coupled GAN**
- **CycleGAN**
- **Deep Convolutional GAN**
- **DiscoGAN**
- **DRAGAN**
- **DualGAN**
- **Energy-Based GAN**
- **Enhanced Super-Resolution GAN**
- **Least Squares GAN**
- **GAN**
- **InfoGAN**
- **Pix2Pix**
- **Relativistic GAN**
- **Semi-Supervised GAN**
- **StarGAN**
- **Wasserstein GAN**
- **Wasserstein GAN GP**
- **Wasserstein GAN DIV**
ObjectDetection-network
- **SSD**
- **YOLO**
- **YOLOv2**
- **YOLOv3**
- **FCOS**
- **FPN**
- **RetinaNet**
- **Objects as Points**
- **FSAF**
- **CenterNet**
- **FoveaBox**
Semantic Segmentation
- **FCN**
- **Fast-SCNN**
- **LEDNet**
- **LRNNet**
- **FisheyeMODNet**
Instance Segmentation
- **PolarMask** 
FaceDetectorAndRecognition
- **FaceBoxes**
- **LFFD**
- **VarGFaceNet**
HumanPoseEstimation
- **Stacked Hourglass Networks**
- **Simple Baselines**
- **LPN**
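
To give a concrete feel for the attention modules listed above, here is a minimal Squeeze-and-Excitation (SE) block, written generically rather than copied from this repo's implementation:

```python
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: learn per-channel weights and rescale the feature map."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                  # squeeze: global average pooling
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),                                    # excitation: gates in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.size()
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                         # channel-wise re-weighting
```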

pytorch_loss: Losses for training

- label-smooth
- amsoftmax
- focal-loss
- dual-focal-loss 
- triplet-loss
- giou-loss
- affinity-loss
- pc_softmax_cross_entropy
- ohem-loss (softmax-based online hard example mining loss)
- large-margin-softmax (BMVC 2019)
- lovasz-softmax-loss
- dice-loss (both generalized soft dice loss and batch soft dice loss)
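
As an example of how these drop-in losses look, here is a minimal multi-class focal loss on raw logits (a generic sketch, not the repo's exact implementation):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Focal loss for class-index targets: down-weights well-classified examples."""
    ce = F.cross_entropy(logits, targets, reduction='none')  # per-sample -log p_t
    pt = torch.exp(-ce)                                       # p_t, probability of the true class
    return (alpha * (1.0 - pt) ** gamma * ce).mean()
```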

tf_to_pytorch: TensorFlow to PyTorch Conversion

This directory is used to convert TensorFlow weights to PyTorch. It was hacked together fairly quickly, so the code is not the most beautiful (just a warning!), but it does the job. I will be refactoring it soon.

TorchCAM: Class Activation Mapping

A simple way to leverage the class-specific activations of convolutional layers in PyTorch. A from-scratch Grad-CAM sketch is shown after the list below.

- CAM
- ScoreCAM
- SSCAM
- ISCAM
- GradCAM
- Grad-CAM++
- Smooth Grad-CAM++
- XGradCAM
- LayerCAM
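
For reference, a Grad-CAM heatmap can be built from scratch with forward and backward hooks on the last convolutional block. The sketch below uses a torchvision ResNet-18 and a random tensor as a stand-in for a preprocessed image; it is independent of any particular CAM package and assumes a recent torchvision:

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights='IMAGENET1K_V1').eval()
store = {}

model.layer4.register_forward_hook(lambda m, i, o: store.update(act=o.detach()))
model.layer4.register_full_backward_hook(lambda m, gi, go: store.update(grad=go[0].detach()))

x = torch.randn(1, 3, 224, 224)            # stand-in for a preprocessed input image
scores = model(x)
scores[0, scores.argmax()].backward()      # backprop the top-1 class score

weights = store['grad'].mean(dim=(2, 3), keepdim=True)        # GAP over the gradients
cam = F.relu((weights * store['act']).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode='bilinear', align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)      # normalize to [0, 1] for overlay
```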

Note

At present, the work organized in this project is not yet comprehensive; we will keep improving it as we read more. Stars are welcome as support. If there are incorrect statements or incorrect code implementations, you are welcome to point them out.

Owner
Li Shengyan