I will implement fastai in each project present in this repository.

Overview

DEEP LEARNING FOR CODERS WITH FASTAI AND PYTORCH

The repository contains the projects I have worked on while reading the book Deep Learning for Coders with fastai and PyTorch.

📚 NOTEBOOKS:

1. INTRODUCTION

  • The Introduction notebook is a comprehensive overview, covering several projects: Cat and Dog Classification, Semantic Segmentation, Sentiment Classification, Tabular Classification, and a Recommendation System.
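
A minimal sketch of the cat-vs-dog classifier from this notebook, assuming the Oxford-IIIT Pet dataset and a recent fastai version (the book uses cnn_learner, which newer fastai releases expose as vision_learner):

```python
from fastai.vision.all import *

# download the Oxford-IIIT Pet dataset; cat images have filenames starting with an uppercase letter
path = untar_data(URLs.PETS)/'images'

def is_cat(x): return x[0].isupper()

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

# fine-tune a pretrained ResNet for one epoch
learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
```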

2. MODEL PRODUCTION

  • The BearDetector notebook contains all the dependencies for a complete image classification project and for putting the trained model into production.
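
A minimal sketch of the production step, assuming `learn` is the trained bear classifier and the image path is hypothetical:

```python
from fastai.vision.all import *

# export the trained Learner (assumed to exist as `learn`) to a self-contained pickle file
learn.export('export.pkl')

# later, e.g. inside the inference app, load it back and classify one image
# ('images/grizzly.jpg' is a hypothetical path standing in for an uploaded picture)
learn_inf = load_learner('export.pkl')
pred_class, pred_idx, probs = learn_inf.predict('images/grizzly.jpg')
print(pred_class, probs[pred_idx])
```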

3. TRAINING A CLASSIFIER

  • The DigitClassifier notebook contains all the dependencies required to build an image classification project from scratch.
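
A small sketch of the from-scratch building blocks used in this style of notebook: a linear model over flattened 28x28 images and a simple loss (the random tensors below stand in for real MNIST batches):

```python
import torch

# parameters of a linear model over flattened 28x28 images
weights = torch.randn(28 * 28, 1, requires_grad=True)
bias = torch.randn(1, requires_grad=True)

def linear1(xb):
    "Predict one score per image in the batch."
    return xb @ weights + bias

def mnist_loss(preds, targets):
    "Distance of the sigmoid-squashed predictions from the 0/1 targets."
    preds = preds.sigmoid()
    return torch.where(targets == 1, 1 - preds, preds).mean()

xb = torch.randn(64, 28 * 28)              # fake batch standing in for MNIST digits
yb = torch.randint(0, 2, (64, 1)).float()  # fake binary labels
loss = mnist_loss(linear1(xb), yb)
loss.backward()                            # gradients now live in weights.grad and bias.grad
```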

4. IMAGE CLASSIFICATION

  • The Image Classification notebook contains all the dependencies for image classification: getting image data ready for modeling (presizing and the data block summary) and fitting the model (the learning rate finder, unfreezing, discriminative learning rates, setting the number of epochs, and using deeper architectures). It also explains the cross-entropy loss function.
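
A sketch of that workflow on the Oxford-IIIT Pet dataset, combining presizing, the data block summary, the learning rate finder, unfreezing, and discriminative learning rates (the exact learning rates are illustrative):

```python
from fastai.vision.all import *

path = untar_data(URLs.PETS)/'images'
pets = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    splitter=RandomSplitter(seed=42),
    get_y=using_attr(RegexLabeller(r'(.+)_\d+.jpg$'), 'name'),
    item_tfms=Resize(460),                                # presize large on the CPU...
    batch_tfms=aug_transforms(size=224, min_scale=0.75))  # ...then crop/augment on the GPU
pets.summary(path)                                        # data block summary
dls = pets.dataloaders(path)

learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.lr_find()                                   # pick a learning rate from the plot
learn.fit_one_cycle(3, 3e-3)                      # train the new head first
learn.unfreeze()                                  # then make the whole body trainable
learn.fit_one_cycle(6, lr_max=slice(1e-6, 1e-4))  # discriminative learning rates
```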

5. MULTILABEL CLASSIFICATION AND REGRESSION

  • The Multilabel Classification notebook contains all the dependencies required to understand multilabel classification, including how to set up the DataBlock and its DataLoaders. The Regression notebook contains all the dependencies required to understand image regression.
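
A minimal sketch of a multilabel DataBlock on the PASCAL 2007 sample, where each image can carry several labels and accuracy therefore needs a threshold:

```python
from fastai.vision.all import *

path = untar_data(URLs.PASCAL_2007)
df = pd.read_csv(path/'train.csv')

def get_x(r): return path/'train'/r['fname']     # image file for one row
def get_y(r): return r['labels'].split(' ')      # space-separated list of labels

dblock = DataBlock(
    blocks=(ImageBlock, MultiCategoryBlock),     # one-hot encoded multilabel targets
    splitter=ColSplitter('is_valid'),
    get_x=get_x, get_y=get_y,
    item_tfms=RandomResizedCrop(128, min_scale=0.35))
dls = dblock.dataloaders(df)

learn = vision_learner(dls, resnet18, metrics=partial(accuracy_multi, thresh=0.2))
learn.fine_tune(3, base_lr=3e-3)
```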

6. ADVANCED CLASSIFICATION

  • The Imagenette Classification notebook contains all the dependencies required to train a state-of-the-art computer vision model, whether from scratch or with transfer learning. It explains and implements Normalization, Progressive Resizing, Test-Time Augmentation, Mixup Augmentation, and Label Smoothing.
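
A sketch of how those pieces can fit together on Imagenette; combining Mixup with label smoothing in one Learner is an illustrative setup rather than the notebook's exact recipe, and progressive resizing simply means rebuilding the DataLoaders at a larger image size before continuing training:

```python
from fastai.vision.all import *

path = untar_data(URLs.IMAGENETTE)
dblock = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    get_y=parent_label,
    item_tfms=Resize(460),
    batch_tfms=[*aug_transforms(size=224, min_scale=0.75),
                Normalize.from_stats(*imagenet_stats)])   # normalization
dls = dblock.dataloaders(path, bs=64)

learn = Learner(dls, xresnet50(n_out=dls.c),
                loss_func=LabelSmoothingCrossEntropy(),   # label smoothing
                cbs=MixUp(),                              # mixup augmentation
                metrics=accuracy)
learn.fit_one_cycle(5, 3e-3)

preds, targs = learn.tta()                                # test-time augmentation
```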

7. COLLABORATIVE FILTERING

  • The Collaborative Filtering notebook contains all the dependencies required to build a recommendation system. It shows how gradient descent can learn latent factors and biases for items from a history of ratings, which in turn reveals structure in the data.
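
A minimal sketch on the MovieLens 100k ratings, where collab_learner learns one latent-factor vector and one bias per user and per movie by gradient descent:

```python
from fastai.collab import *
from fastai.tabular.all import *

path = untar_data(URLs.ML_100k)
ratings = pd.read_csv(path/'u.data', delimiter='\t', header=None,
                      names=['user', 'movie', 'rating', 'timestamp'])

dls = CollabDataLoaders.from_df(ratings, item_name='movie', bs=64)

# 50 latent factors per user/movie; predictions squashed into the rating range
learn = collab_learner(dls, n_factors=50, y_range=(0, 5.5))
learn.fit_one_cycle(5, 5e-3, wd=0.1)
```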

8. TABULAR MODELING

  • The Tabular Model notebook contains all the dependencies required for tabular modeling. It presents detailed explanations of two approaches to tabular modeling: decision tree ensembles and neural networks.
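
A minimal sketch of the neural-network approach on the Adult Census sample (the decision-tree-ensemble approach uses scikit-learn's random forests instead):

```python
from fastai.tabular.all import *

path = untar_data(URLs.ADULT_SAMPLE)

# Categorify/FillMissing/Normalize preprocess the raw table before modeling
dls = TabularDataLoaders.from_csv(
    path/'adult.csv', path=path, y_names='salary',
    cat_names=['workclass', 'education', 'marital-status',
               'occupation', 'relationship', 'race'],
    cont_names=['age', 'fnlwgt', 'education-num'],
    procs=[Categorify, FillMissing, Normalize])

learn = tabular_learner(dls, metrics=accuracy)
learn.fit_one_cycle(3)
```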

9. NATURAL LANGUAGE PROCESSING

  • The NLP notebook contains all the dependencies required to build a language model that can generate text and a classifier that determines whether a review is positive or negative. It presents a state-of-the-art classifier built by fine-tuning a pretrained language model on the task's corpus; the fine-tuned encoder is then reused for classification.
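
A sketch of that two-stage recipe on IMDB: fine-tune a pretrained AWD-LSTM language model on the review corpus, save its encoder, then reuse the encoder in a sentiment classifier (epoch counts here are illustrative):

```python
from fastai.text.all import *

path = untar_data(URLs.IMDB)

# 1) fine-tune a pretrained AWD-LSTM language model on the IMDB corpus
get_imdb = partial(get_text_files, folders=['train', 'test', 'unsup'])
dls_lm = DataBlock(
    blocks=TextBlock.from_folder(path, is_lm=True),
    get_items=get_imdb, splitter=RandomSplitter(0.1)
).dataloaders(path, path=path, bs=128, seq_len=80)

learn = language_model_learner(dls_lm, AWD_LSTM, drop_mult=0.3,
                               metrics=[accuracy, Perplexity()])
learn.fit_one_cycle(1, 2e-2)
learn.save_encoder('finetuned')

# 2) reuse the fine-tuned encoder in a review-sentiment classifier
dls_clas = DataBlock(
    blocks=(TextBlock.from_folder(path, vocab=dls_lm.vocab), CategoryBlock),
    get_y=parent_label,
    get_items=partial(get_text_files, folders=['train', 'test']),
    splitter=GrandparentSplitter(valid_name='test')
).dataloaders(path, path=path, bs=128, seq_len=72)

learn = text_classifier_learner(dls_clas, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
learn.load_encoder('finetuned')
learn.fit_one_cycle(1, 2e-2)
```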

10. DATA MUNGING

  • The DataMunging notebook contains all the dependencies required to use the mid-level API of fastai in natural language processing and computer vision, which provides greater flexibility for applying transformations to data items.
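
A minimal sketch of the mid-level API on IMDB text, where Datasets applies one Transform pipeline per output (tokenize and numericalize for x, folder label for y) instead of a high-level factory method:

```python
from fastai.text.all import *

path = untar_data(URLs.IMDB)
files = get_text_files(path, folders=['train', 'test'])
splits = RandomSplitter(valid_pct=0.1, seed=42)(files)

# one pipeline of Transforms per output: x is tokenized then numericalized,
# y is labelled from the parent folder and categorized
dsets = Datasets(files,
                 [[Tokenizer.from_folder(path), Numericalize],
                  [parent_label, Categorize]],
                 splits=splits)

# SortedDL batches texts of similar length; pad_input pads each batch
dls = dsets.dataloaders(dl_type=SortedDL, before_batch=pad_input)
```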

11. LANGUAGE MODEL FROM SCRATCH

  • The LanguageModel notebook contains all the dependencies behind the AWD-LSTM architecture for text classification. It builds a language model step by step: a simple linear model, a recurrent neural network, an LSTM, and finally dropout regularization and activation regularization.
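
A small sketch of the kind of LSTM language model the notebook builds up to, with an embedding, stacked LSTM layers, and dropout (the activation-regularization terms and weight tying are left out):

```python
import torch
import torch.nn as nn

class LMModel(nn.Module):
    "Embedding -> stacked LSTM -> dropout -> linear decoder over the vocabulary."
    def __init__(self, vocab_sz, n_hidden, n_layers=2, p=0.4):
        super().__init__()
        self.emb = nn.Embedding(vocab_sz, n_hidden)
        self.rnn = nn.LSTM(n_hidden, n_hidden, n_layers, batch_first=True)
        self.drop = nn.Dropout(p)
        self.out = nn.Linear(n_hidden, vocab_sz)

    def forward(self, x, h=None):
        res, h = self.rnn(self.emb(x), h)      # h carries the hidden state across calls
        return self.out(self.drop(res)), h

model = LMModel(vocab_sz=1000, n_hidden=64)
tokens = torch.randint(0, 1000, (8, 16))       # fake batch: 8 sequences of 16 token ids
logits, hidden = model(tokens)                 # logits shape: (8, 16, 1000)
```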

12. CONVOLUTIONAL NEURAL NETWORK

  • The CNN notebook contains all the dependencies required to understand Convolutional Neural Networks. Convolutions are just a type of matrix multiplication with two constraints on the weight matrix: some elements are always zero and some elements are tied or forced to always have the same value.
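
A tiny PyTorch illustration of that point: the same 3x3 kernel weights are reused (tied) at every spatial location, and everything outside each window is implicitly zero:

```python
import torch
import torch.nn.functional as F

img = torch.randn(1, 1, 28, 28)             # (batch, channels, height, width)
kernel = torch.tensor([[-1., -1., -1.],
                       [ 0.,  0.,  0.],
                       [ 1.,  1.,  1.]]).view(1, 1, 3, 3)   # a horizontal edge detector

out = F.conv2d(img, kernel, padding=1)      # same spatial size thanks to padding
print(out.shape)                            # torch.Size([1, 1, 28, 28])
```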

13. RESIDUAL NETWORKS

  • The ResNets notebook contains all the dependencies required to understand skip connections, which allow deeper models to be trained. ResNet is also the pretrained architecture used for transfer learning in the earlier notebooks.
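
A minimal residual block sketch (stride 1 only), where the output is the activation of the convolutional path added to an identity path:

```python
import torch.nn as nn

class ResBlock(nn.Module):
    "y = relu(convs(x) + idconv(x)): the skip connection lets signal bypass the convs."
    def __init__(self, ni, nf):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(ni, nf, 3, padding=1), nn.BatchNorm2d(nf), nn.ReLU(),
            nn.Conv2d(nf, nf, 3, padding=1), nn.BatchNorm2d(nf))
        # 1x1 conv only when the channel count changes, otherwise a pure identity skip
        self.idconv = nn.Identity() if ni == nf else nn.Conv2d(ni, nf, 1)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.convs(x) + self.idconv(x))
```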

14. ARCHITECTURE DETAILS

  • The Architecture Details notebook contains all the dependencies required to create complete state-of-the-art computer vision models. It presents some aspects of natural language processing as well.

15. TRAINING PROCESS

  • The Training notebook contains all the dependencies required to create a training loop and explores variants of Stochastic Gradient Descent.
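
A sketch of the plain-SGD version of that loop; the SGD variants (momentum, Adam, and friends) replace the simple step below:

```python
class BasicOptim:
    "Plain SGD: each step subtracts lr * grad from every parameter."
    def __init__(self, params, lr):
        self.params, self.lr = list(params), lr
    def step(self):
        for p in self.params:
            p.data -= p.grad.data * self.lr
    def zero_grad(self):
        for p in self.params:
            p.grad = None

def train_epoch(model, dl, loss_func, opt):
    "One epoch of the canonical loop: forward pass, loss, backward pass, update."
    for xb, yb in dl:
        loss = loss_func(model(xb), yb)
        loss.backward()
        opt.step()
        opt.zero_grad()
```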

16. NEURAL NETWORK FOUNDATIONS

  • The Neural Foundations notebook contains all the dependencies required to understand the foundations of deep learning, beginning with matrix multiplication and moving on to implementing the forward and backward passes of a neural net from scratch.
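
A minimal sketch of the starting point, a matrix multiplication written with explicit loops before it gets vectorized away:

```python
import torch

def matmul(a, b):
    "Naive matrix multiplication: c[i, j] is the dot product of row i of a and column j of b."
    ar, ac = a.shape
    br, bc = b.shape
    assert ac == br, "inner dimensions must match"
    c = torch.zeros(ar, bc)
    for i in range(ar):
        for j in range(bc):
            c[i, j] = (a[i, :] * b[:, j]).sum()
    return c

a, b = torch.randn(3, 4), torch.randn(4, 5)
assert torch.allclose(matmul(a, b), a @ b, atol=1e-5)
```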

17. CNN INTERPRETATION WITH CAM

  • The CNN Interpretation notebook presents the implementation of Class Activation Maps in model interpretation. Class activation maps give insights into why a model predicted a certain result by showing the areas of images that were most responsible for a given prediction.
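
A minimal sketch of CAM via a forward hook, assuming `learn` is a trained fastai CNN learner (body at learn.model[0], head at learn.model[1]) and `x` is a single preprocessed image batch:

```python
import torch

class Hook:
    "Stores the output of the module it is registered on."
    def hook_func(self, m, i, o): self.stored = o.detach().clone()

hook_output = Hook()
hook = learn.model[0].register_forward_hook(hook_output.hook_func)  # `learn` assumed trained
try:
    with torch.no_grad():
        output = learn.model.eval()(x)                              # `x` assumed: one image batch
finally:
    hook.remove()                                                   # always clean up the hook

act = hook_output.stored[0]                      # final feature maps, shape (nf, h, w)
w = learn.model[1][-1].weight                    # last linear layer weights, shape (n_classes, nf)
cam_map = torch.einsum('ck,kij->cij', w, act)    # one heat map per class
```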

18. FASTAI LEARNER FROM SCRATCH

  • The Fastai Learner notebook contains all the dependencies required to understand the key concepts of fastai by rebuilding its Learner class from scratch.

19. CHEST X-RAYS CLASSIFICATION

20. TRANSFORMERS MODEL

Owner

Thinam Tamang (Machine Learning and Deep Learning)