FasterAI: A library to make smaller and faster models with FastAI.

Overview

fasterai is a library created to make neural networks smaller and faster. It essentially relies on common network compression techniques such as pruning, knowledge distillation, ...

Project Documentation

Visit the Read The Docs project page or read the following README to learn more about using fasterai.

Available Methods

1. Pruning

Make your model sparse (i.e. prune it) according to the following parameters (a concrete illustration follows the list):

  • Sparsity: the percentage of weights that will be replaced by 0
  • Granularity: the level at which pruning operates (individual weights, vectors, kernels, filters)
  • Method: prune either each layer independently (local pruning) or the whole model at once (global pruning)
  • Criteria: the criteria used to select the weights to remove (magnitude, movement, ...)
  • Schedule: the schedule used for pruning (one shot, iterative, gradual, ...)
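To make these parameters concrete, here is a minimal, framework-agnostic sketch of unstructured, magnitude-based pruning of a single layer (i.e. local pruning at weight granularity with a magnitude criteria). It is plain PyTorch written for illustration only, not fasterai code:

import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    # Zero out the `sparsity` fraction of weights with the smallest magnitude.
    k = int(sparsity * weight.numel())
    if k == 0:
        return weight
    threshold = weight.abs().flatten().kthvalue(k).values  # k-th smallest magnitude
    mask = (weight.abs() > threshold).float()               # keep only larger weights
    return weight * mask

w = torch.randn(16, 3, 3, 3)                 # e.g. a conv filter bank
w_sparse = magnitude_prune(w, sparsity=0.5)  # 50% of the weights become 0
print((w_sparse == 0).float().mean())        # ~0.5

A global method would instead pool the magnitudes of every layer before choosing the threshold, and a gradual schedule would grow the sparsity over training instead of pruning in one shot.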

2. Knowledge Distillation

Distill the knowledge acquired by a big model into a smaller one.
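For intuition, below is a hedged sketch of a standard distillation loss (Hinton et al., 2015): the student is trained to match the teacher's temperature-softened outputs in addition to the true labels. This illustrates the technique only; it is not fasterai's callback API, and the temperature T and weight alpha are illustrative values.

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.9):
    # Soft part: KL divergence between temperature-scaled teacher and student distributions
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction='batchmean',
    ) * (T * T)
    # Hard part: the usual cross-entropy on the ground-truth labels
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard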

3. Lottery Ticket Hypothesis

Find the winning ticket in your network, i.e. the initial subnetwork able to reach performance at least comparable to that of the whole network.
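The procedure below is a hedged sketch of iterative magnitude pruning with weight rewinding, as described by Frankle & Carbin (2019); it is not fasterai's implementation, and `train_fn` is a hypothetical helper that trains the model in place. A real implementation would also keep the mask enforced during training.

import copy
import torch

def find_winning_ticket(model, train_fn, prune_step=0.2, rounds=3):
    init_state = copy.deepcopy(model.state_dict())        # remember the initialization
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}
    for _ in range(rounds):
        train_fn(model)                                    # 1. train the current subnetwork
        for n, p in model.named_parameters():              # 2. prune the smallest surviving weights
            alive = p.abs()[masks[n].bool()]
            k = int(prune_step * alive.numel())
            if k > 0:
                threshold = alive.kthvalue(k).values
                masks[n] *= (p.abs() > threshold).float()
        model.load_state_dict(init_state)                  # 3. rewind weights to their initial values
        with torch.no_grad():
            for n, p in model.named_parameters():          # 4. apply the mask: the candidate ticket
                p.mul_(masks[n])
    return model, masks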

Quick Start

1. Create your model with fastai

learn = cnn_learner(dls, model)

2. Get your fasterai callback

sp_cb = SparsifyCallback(end_sparsity, granularity, method, criteria, sched_func)

3. Train your model to make it sparse!

learn.fit_one_cycle(n_epochs, cbs=sp_cb)
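Putting the three steps together, a minimal end-to-end sketch might look like the following. The dataset, the fasterai.sparse.all import path, and the criteria/schedule identifiers (large_final, sched_onecycle) are assumptions that may differ between fasterai versions; the parameter names follow the callback signature above.

from fastai.vision.all import *       # cnn_learner, resnet18, ImageDataLoaders, ...
from fasterai.sparse.all import *     # assumed import path for SparsifyCallback

path = untar_data(URLs.PETS)          # any fastai DataLoaders works here
dls = ImageDataLoaders.from_name_re(
    path, get_image_files(path/'images'), pat=r'(.+)_\d+.jpg', item_tfms=Resize(224))

learn = cnn_learner(dls, resnet18, metrics=accuracy)

# Illustrative values; see the pruning parameters described above
sp_cb = SparsifyCallback(end_sparsity=50, granularity='weight', method='local',
                         criteria=large_final, sched_func=sched_onecycle)

learn.fit_one_cycle(5, cbs=sp_cb)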

More about the other methods can be found in the tutorials section.

Installation

pip install git+https://github.com/nathanhubens/fasterai.git

or

pip install fasterai

Citing

@misc{Hubens:2020,
  Author = {Nathan Hubens},
  Title = {Fasterai},
  Year = {2020},
  Publisher = {GitHub},
  Journal = {GitHub repository},
  Howpublished = {\url{https://github.com/nathanhubens/fasterai}}
}
