SuperGradients
Introduction
Welcome to SuperGradients, a free, open-source training library for PyTorch-based deep learning models. SuperGradients lets you train models for any computer vision task or import pre-trained SOTA models for tasks such as object detection, image classification, and semantic segmentation on images and videos.
Whether you are a beginner or an expert, you likely already have your own training script, model, loss function implementation, and so on. You have therefore experienced how difficult it is to develop a production-ready deep learning model: the overhead of integrating with existing training tools that use very different and rigid formats and conventions, and the effort it takes to find a suitable architecture for your needs when every repository focuses on just one task.
With SuperGradients you can:
- Train models for any Computer Vision task or import production-ready pre-trained SOTA models (detection, segmentation, and classification - YOLOv5, DDRNet, EfficientNet, RegNet, ResNet, MobileNet, etc.)
- Shorten the training process using tested and proven recipes & code examples
- Easily configure your own training, dataset, and architecture parameters, or use our plug-and-play defaults.
- Save time and easily integrate it into your codebase; a short sketch follows this list.
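The models are standard PyTorch `nn.Module`s, so they drop into an existing training loop. Below is a minimal, hedged sketch of that integration; the `models.get` entry point and the `"resnet18"` / `"imagenet"` identifiers follow common SuperGradients usage and may differ between library versions.

```python
# Minimal sketch: plug a SuperGradients pretrained backbone into your own
# PyTorch training loop. models.get and the identifier strings are assumptions
# based on typical SuperGradients usage and may differ between versions.
import torch
import torch.nn as nn
from super_gradients.training import models

model = models.get("resnet18", pretrained_weights="imagenet", num_classes=10)  # fine-tune for 10 classes
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One dummy training step on random data, just to show the integration
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```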
Table of Contents:
- Getting Started
- Installation Methods
- Computer Vision Models' Pretrained Checkpoints
- Contributing
- Citation
- Community
- License
Getting Started
Quick Start Notebook
Get started with our quick start notebook on Google Colab for a quick and easy start using free GPU hardware.
SuperGradients Quick Start in Google Colab | Download notebook | View source on GitHub |
SuperGradients Walkthrough Notebook
Learn more about SuperGradients training components with our walkthrough notebook on Google Colab, an easy-to-use tutorial using free GPU hardware.
SuperGradients Walkthrough in Google Colab | Download notebook | View source on GitHub |
Installation Methods
Prerequisites
General requirements:
- Python 3.7, 3.8 or 3.9 installed.
- torch>=1.9.0
- The Python packages specified in requirements.txt.
To train on NVIDIA GPUs:
- NVIDIA CUDA Toolkit >= 11.2
- cuDNN >= 8.1.x
- NVIDIA Driver with CUDA >= 11.2 support (≥460.x)
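A quick way to check that your environment meets these requirements is to query PyTorch directly, as in the snippet below.

```python
# Quick environment check for the prerequisites listed above.
import torch

print("PyTorch version:", torch.__version__)           # should be >= 1.9.0
print("CUDA available:", torch.cuda.is_available())    # requires a working NVIDIA driver
print("CUDA version built with:", torch.version.cuda)  # should report 11.2 or newer
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```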
Quick Installation of stable version
Not yet available on PyPI
pip install super-gradients
That's it!
Installing from GitHub
pip install git+https://github.com/Deci-AI/[email protected]
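After installation, a simple sanity check is to import the package. The `__version__` attribute is an assumption and may not exist in every build, so the snippet below falls back gracefully; `pip show super-gradients` also reports the installed version.

```python
# Sanity check after installation: the package should import without errors.
import super_gradients

# __version__ is assumed here; if absent, use `pip show super-gradients` instead.
print(getattr(super_gradients, "__version__", "version attribute not found"))
```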
Computer Vision Models' Pretrained Checkpoints
Pretrained Classification PyTorch Checkpoints
Model | Dataset | Resolution | Top-1 | Top-5 | Latency (b1 on T4) | Throughput (b1 on T4) |
---|---|---|---|---|---|---|
EfficientNet B0 | ImageNet | 224x224 | 77.62 | 93.49 | 1.16ms | 862fps |
RegNetY200 | ImageNet | 224x224 | 70.88 | 89.35 | 1.07ms | 928.3fps |
RegNetY400 | ImageNet | 224x224 | 74.74 | 91.46 | 1.22ms | 816.5fps |
RegNetY600 | ImageNet | 224x224 | 76.18 | 92.34 | 1.19ms | 838.5fps |
RegNetY800 | ImageNet | 224x224 | 77.07 | 93.26 | 1.18ms | 841.4fps |
ResNet18 | ImageNet | 224x224 | 70.6 | 89.64 | 0.599ms | 1669fps |
ResNet34 | ImageNet | 224x224 | 74.13 | 91.7 | 0.89ms | 1123fps |
ResNet50 | ImageNet | 224x224 | 76.3 | 93.0 | 0.94ms | 1063fps |
MobileNetV3_large-150 epochs | ImageNet | 224x224 | 73.79 | 91.54 | 0.87ms | 1149fps |
MobileNetV3_large-300 epochs | ImageNet | 224x224 | 74.52 | 91.92 | 0.87ms | 1149fps |
MobileNetV3_small | ImageNet | 224x224 | 67.45 | 87.47 | 0.75ms | 1333fps |
MobileNetV2_w1 | ImageNet | 224x224 | 73.08 | 91.1 | 0.58ms | 1724fps |
NOTE: Performance measured on T4 GPU with TensorRT, using FP16 precision and batch size 1
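A hedged sketch of using one of these pretrained ImageNet checkpoints for inference and reading off top-1 / top-5 predictions is shown below; the `"resnet50"` model name and the `"imagenet"` pretrained_weights tag are assumptions and may differ between library versions.

```python
# Hedged sketch: load a pretrained ImageNet checkpoint from the table above and
# read off top-1 / top-5 predictions. The model-name string and the
# pretrained_weights tag are assumptions and may vary by version.
import torch
from super_gradients.training import models

model = models.get("resnet50", pretrained_weights="imagenet")
model.eval()

image = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed 224x224 image
with torch.no_grad():
    probs = torch.softmax(model(image), dim=1)

top5 = torch.topk(probs, k=5, dim=1)
print("Top-1 class:", top5.indices[0, 0].item())
print("Top-5 classes:", top5.indices[0].tolist())
```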
Pretrained Object Detection PyTorch Checkpoints
Model | Dataset | Resolution | mAPval 0.5:0.95 | Latency (b1 on T4) | Throughput (b64 on T4) |
---|---|---|---|---|---|
YOLOv5 nano | COCO | 640x640 | 27.7 | 6.55ms | 177.62fps |
YOLOv5 small | COCO | 640x640 | 37.3 | 7.13ms | 159.44fps |
YOLOv5 medium | COCO | 640x640 | 45.2 | 8.95ms | 121.78fps |
YOLOv5 large | COCO | 640x640 | 48.0 | 11.49ms | 95.99fps |
NOTE: Performance measured on T4 GPU with TensorRT, using FP16 precision and batch size 1 (latency) and batch size 64 (throughput)
Pretrained Semantic Segmentation PyTorch Checkpoints
Model | Dataset | Resolution | mIoU | Latency (b1 on T4) | Throughput (b64 on T4) |
---|---|---|---|---|---|
DDRNet23 | Cityscapes | 1024x2048 | 78.65 | 25.48ms | 37.4fps |
DDRNet23 slim | Cityscapes | 1024x2048 | 76.6 | 22.24ms | 45.7fps |
ShelfNet_LW_34 | COCO Segmentation (21 classes from PASCAL including background) | 512x512 | 65.1 | - | - |
NOTE: Performance measured on T4 GPU with TensorRT, using FP16 precision and batch size 1 (latency) and batch size 64 (throughput)
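For the segmentation checkpoints, the model output is a per-pixel class map rather than a single label. The sketch below shows how that output can be turned into a mask; the `"ddrnet_23"` name and the `"cityscapes"` pretrained_weights tag are assumptions and may vary by version.

```python
# Hedged sketch: run one of the pretrained segmentation checkpoints above and
# turn the raw logits into a per-pixel class map. The "ddrnet_23" name and the
# "cityscapes" pretrained_weights tag are assumptions and may vary by version.
import torch
from super_gradients.training import models

model = models.get("ddrnet_23", pretrained_weights="cityscapes")
model.eval()

# Stand-in for a preprocessed Cityscapes frame (1024x2048, as in the table)
frame = torch.randn(1, 3, 1024, 2048)
with torch.no_grad():
    out = model(frame)
    # Some segmentation models return (main, aux) outputs; keep the main head.
    logits = out[0] if isinstance(out, (tuple, list)) else out

mask = logits.argmax(dim=1)  # per-pixel predicted class, shape [1, H, W]
print(mask.shape, mask.unique()[:5])
```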
Contributing
To learn about making a contribution to SuperGradients, please see our Contribution page.
Our awesome contributors:
Made with contrib.rocks.
Citation
If you use the SuperGradients library or benchmarks in your research, please cite the SuperGradients deep learning training library.
Community
If you want to be part of the growing SuperGradients community, hear about exciting news and updates, get help, request advanced features, or file a bug or issue report, we would love to welcome you aboard!
- Slack is the place to ask questions about SuperGradients and get support. Click here to join our Slack.
- To report a bug, file an issue on GitHub.
- You can also join the community mailing list to ask questions about the project and receive announcements.
License
This project is released under the Apache 2.0 license.