PyTorch implementations of a large number of classical backbone CNNs, data augmentation methods, loss functions, attention modules, visualization tools, and other common algorithms.

Overview

Torch-template-for-deep-learning

PyTorch implementations of **classical backbone CNNs, data augmentation, loss functions, attention modules, visualization, and other common algorithms**.

Requirements

· torch, torchvision

· torchsummary

· other necessary packages

Usage

A training script is provided in `train_baseline.py`; the training arguments are defined in `args.py`.

autoaug: Data augmentation and CNN regularization

- Stochastic Depth (StochDepth)
- Label Smoothing
- Cutout
- DropBlock
- Mixup (a minimal sketch follows this list)
- Manifold Mixup
- ShakeDrop
- CutMix
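
As a rough illustration of the idea (a minimal sketch, not the repository's exact implementation), a Mixup training step can be written as follows:

```python
import torch

def mixup_batch(images, labels, alpha=1.0):
    """Illustrative Mixup: blend each sample with a randomly permuted partner."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    index = torch.randperm(images.size(0))
    mixed = lam * images + (1.0 - lam) * images[index]
    return mixed, labels, labels[index], lam

# In the training loop (criterion is any classification loss):
# mixed, y_a, y_b, lam = mixup_batch(x, y)
# out = model(mixed)
# loss = lam * criterion(out, y_a) + (1 - lam) * criterion(out, y_b)
```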

dataset_loader: Loaders for various datasets

import torchvision

from dataloder.scoliosis_dataloder import ScoliosisDataset
from dataloder.facial_attraction_dataloder import FacialAttractionDataset
from dataloder.fa_and_sco_dataloder import ScoandFaDataset
from dataloder.scofaNshot_dataloder import ScoandFaNshotDataset
from dataloder.age_dataloder import MegaAsiaAgeDataset

# training_transforms(), validation_transforms(), Augmentation and
# fa_reduced_cifar10() are helpers defined elsewhere in this repository.
def load_dataset(data_config):
    if data_config.dataset == 'cifar10':
        training_transform = training_transforms()
        if data_config.autoaug:
            print('Auto-augmenting the training data!')
            training_transform.transforms.insert(0, Augmentation(fa_reduced_cifar10()))
        train_dataset = torchvision.datasets.CIFAR10(root=data_config.data_path,
                                                     train=True,
                                                     transform=training_transform,
                                                     download=True)
        val_dataset = torchvision.datasets.CIFAR10(root=data_config.data_path,
                                                   train=False,
                                                   transform=validation_transforms(),
                                                   download=True)
        return train_dataset, val_dataset
    elif data_config.dataset == 'cifar100':
        train_dataset = torchvision.datasets.CIFAR100(root=data_config.data_path,
                                                      train=True,
                                                      transform=training_transforms(),
                                                      download=True)
        val_dataset = torchvision.datasets.CIFAR100(root=data_config.data_path,
                                                    train=False,
                                                    transform=validation_transforms(),
                                                    download=True)
        return train_dataset, val_dataset
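
A hypothetical call, assuming the config object produced by `args.py` carries the `dataset`, `data_path`, and `autoaug` fields used above:

```python
import torch
from types import SimpleNamespace

# Stand-in for the parsed arguments from args.py; the field names are taken
# from the ones used in load_dataset above.
data_config = SimpleNamespace(dataset='cifar10', data_path='./data', autoaug=False)

train_dataset, val_dataset = load_dataset(data_config)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=128, shuffle=True, num_workers=4)
val_loader = torch.utils.data.DataLoader(val_dataset, batch_size=128, shuffle=False, num_workers=4)
```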

deployment: Deployment modes for the PyTorch model

Two deployment modes are provided: web deployment and C++ deployment.

For web deployment, store the trained weight file in the `flash_Deployment` folder, point `server.py` at that path, and then call the service through `client.py`.
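
As a rough illustration of the web-deployment pattern (a minimal sketch, not the repository's actual `server.py`), a Flask service wrapping a trained model might look like this:

```python
# Minimal sketch of the web-deployment pattern; the weight path, model
# loading, and endpoint name below are illustrative assumptions.
import io

import torch
import torchvision.transforms as T
from flask import Flask, jsonify, request
from PIL import Image

app = Flask(__name__)
model = torch.load('flash_Deployment/model.pth', map_location='cpu')  # assumed: a fully serialized model
model.eval()
preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])

@app.route('/predict', methods=['POST'])
def predict():
    # Expect the client to POST an image file under the "image" field.
    image = Image.open(io.BytesIO(request.files['image'].read())).convert('RGB')
    with torch.no_grad():
        logits = model(preprocess(image).unsqueeze(0))
    return jsonify({'class_id': int(logits.argmax(dim=1))})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```

A matching client would then POST an image to the endpoint, e.g. `requests.post('http://localhost:5000/predict', files={'image': open('test.jpg', 'rb')})`.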

models: Various classical deep learning models

Classical networks
- **AlexNet**
- **VGG**
- **ResNet**
- **ResNeXt**
- **InceptionV1**
- **InceptionV2 and InceptionV3**
- **InceptionV4 and Inception-ResNet**
- **GoogLeNet**
- **EfficientNet**
- **MnasNet**
- **DPN**
Attention networks
- **SE Attention** (a minimal sketch of this block follows the attention list)
- **External Attention**
- **Self Attention**
- **SK Attention**
- **CBAM Attention**
- **BAM Attention**
- **ECA Attention**
- **DANet Attention**
- **Pyramid Split Attention(PSA)**
- **EMSA Attention**
- **A2Attention**
- **Non-Local Attention**
- **CoAtNet**
- **CoordAttention**
- **HaloAttention**
- **MobileViTAttention**
- **MUSEAttention**  
- **OutlookAttention**
- **ParNetAttention**
- **ParallelPolarizedSelfAttention**
- **Residual Attention**
- **S2Attention**
- **SpatialGroupEnhance Attention**
- **ShuffleAttention**
- **GFNet Attention**
- **TripletAttention**
- **UFOAttention**
- **VIPAttention**
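
To make these entries concrete, here is a minimal sketch of the SE (squeeze-and-excitation) block, written from scratch for illustration rather than copied from the repository:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Illustrative squeeze-and-excitation block: reweight channels by global context."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        n, c, _, _ = x.shape
        weights = self.fc(x.mean(dim=(2, 3)))   # squeeze: global average pool -> (N, C)
        return x * weights.view(n, c, 1, 1)     # excite: per-channel rescaling

# se = SEBlock(64); out = se(torch.randn(2, 64, 32, 32))
```
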
Lightweight networks
- **MobileNets**
- **MobileNetV2**
- **MobileNetV3**
- **ShuffleNet**
- **ShuffleNet V2**
- **SqueezeNet**
- **Xception**
- **MixNet**
- **GhostNet**
GAN
- **Auxiliary Classifier GAN**
- **Adversarial Autoencoder**
- **BEGAN**
- **BicycleGAN**
- **Boundary-Seeking GAN**
- **Cluster GAN**
- **Conditional GAN**
- **Context-Conditional GAN**
- **Context Encoder**
- **Coupled GAN**
- **CycleGAN**
- **Deep Convolutional GAN**
- **DiscoGAN**
- **DRAGAN**
- **DualGAN**
- **Energy-Based GAN**
- **Enhanced Super-Resolution GAN**  
- **Least Squares GAN**
- **GAN**
- **InfoGAN**
- **Pix2Pix**
- **Relativistic GAN**
- **Semi-Supervised GAN**
- **StarGAN**
- **Wasserstein GAN**
- **Wasserstein GAN GP**
- **Wasserstein GAN DIV**
Object detection networks
- **SSD**
- **YOLO**
- **YOLOv2**
- **YOLOv3**
- **FCOS**
- **FPN**
- **RetinaNet**
- **Objects as Points**
- **FSAF**
- **CenterNet**
- **FoveaBox**
Semantic Segmentation
- **FCN**
- **Fast-SCNN**
- **LEDNet**
- **LRNNet**
- **FisheyeMODNet**
Instance Segmentation
- **PolarMask** 
Face detection and recognition
- **FaceBoxes**
- **LFFD**
- **VarGFaceNet**
Human pose estimation
- **Stacked Hourglass Networks**
- **Simple Baselines**
- **LPN**

pytorch_loss: Losses for training

- label-smooth
- amsoftmax
- focal-loss (see the sketch after this list)
- dual-focal-loss
- triplet-loss
- giou-loss
- affinity-loss
- pc_softmax_cross_entropy
- ohem-loss (softmax-based online hard example mining loss)
- large-margin-softmax (BMVC 2019)
- lovasz-softmax-loss
- dice-loss (both generalized soft dice loss and batch soft dice loss)
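
For orientation, a minimal multi-class focal-loss sketch (an illustration of the idea, not the repository's exact implementation) looks like this:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=1.0):
    """Illustrative multi-class focal loss: down-weight well-classified examples.

    logits: (N, C) raw scores; targets: (N,) class indices.
    """
    ce = F.cross_entropy(logits, targets, reduction='none')  # per-sample cross entropy
    pt = torch.exp(-ce)                                       # probability of the true class
    return (alpha * (1.0 - pt) ** gamma * ce).mean()

# loss = focal_loss(model(images), labels)
```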

tf_to_pytorch: TensorFlow to PyTorch Conversion

This directory is used to convert TensorFlow weights to PyTorch. 
It was hacked together fairly quickly, so the code is not the most 
beautiful (just a warning!), but it does the job. I will be refactoring it soon.
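
The general conversion pattern (a hedged sketch; the checkpoint path, variable mapping, and helper name below are illustrative, not the directory's actual script) is to read each TensorFlow variable and copy it into the matching PyTorch parameter, transposing convolution kernels from HWIO to OIHW:

```python
import tensorflow as tf
import torch

def convert_checkpoint(ckpt_path, name_map):
    """Copy TensorFlow checkpoint variables into a PyTorch state_dict.

    name_map maps TF variable names to PyTorch parameter names; the mapping
    itself is model-specific and assumed to be known.
    """
    reader = tf.train.load_checkpoint(ckpt_path)
    state_dict = {}
    for tf_name, torch_name in name_map.items():
        tensor = torch.from_numpy(reader.get_tensor(tf_name))
        if tensor.ndim == 4:                      # conv kernels: HWIO (TF) -> OIHW (PyTorch)
            tensor = tensor.permute(3, 2, 0, 1).contiguous()
        state_dict[torch_name] = tensor
    return state_dict

# model.load_state_dict(convert_checkpoint('tf_model.ckpt', name_map), strict=False)
```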

TorchCAM: Class Activation Mapping

A simple way to leverage the class-specific activations of convolutional layers in PyTorch; a from-scratch Grad-CAM sketch follows the list below.

- CAM
- ScoreCAM
- SSCAM
- ISCAM
- GradCAM
- Grad-CAM++
- Smooth Grad-CAM++
- XGradCAM
- LayerCAM
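
As a rough illustration of what these methods compute (a minimal from-scratch Grad-CAM sketch under assumed layer names, not the TorchCAM API):

```python
import torch
import torch.nn.functional as F
import torchvision

model = torchvision.models.resnet18(pretrained=True).eval()
activations, gradients = {}, {}

# Hook the last convolutional stage ("layer4" is a ResNet-specific assumption).
def save_activation(module, inputs, output):
    activations['value'] = output.detach()

def save_gradient(module, grad_input, grad_output):
    gradients['value'] = grad_output[0].detach()

model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

def grad_cam(image, class_idx=None):
    """image: (1, 3, H, W) normalized tensor; returns a (1, 1, H, W) heatmap in [0, 1]."""
    logits = model(image)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()
    weights = gradients['value'].mean(dim=(2, 3), keepdim=True)    # GAP of gradients per channel
    cam = F.relu((weights * activations['value']).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode='bilinear', align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```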

Note

Written at the end: the work organized in this project is not yet comprehensive. As our reading grows, we will keep improving it. Everyone is welcome to star the project in support. If you find incorrect statements or incorrect code implementations, please point them out.

Owner

Li Shengyan