Image Segmentation with U-Net Algorithm on Carvana Dataset using AWS SageMaker

Overview


This is a complete image segmentation project, built as Udacity's Machine Learning Engineer Nanodegree capstone: a model based on the U-Net architecture, trained on the Carvana competition dataset from Kaggle using AWS SageMaker.

Image Segmentation with U-Net Algorithm

We use AWS SageMaker to train a model built with the U-Net architecture that performs image segmentation on the Carvana dataset from the Kaggle competition.

Project Set Up and Installation

Enter AWS through the gateway and create a SageMaker notebook instance of your choice. An ml.t2.medium is a good fit for this project, since we will not use a GPU in the notebook itself; training runs in a SageMaker container. Wait for the instance to launch, then create a Jupyter notebook with the conda_pytorch_latest_p36 kernel, which comes preinstalled with the PyTorch modules used throughout the project. Finally, set up your SageMaker role and region.
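Setting this up in the notebook can be as simple as the following cell (a minimal sketch, assuming the notebook instance has a SageMaker execution role attached):

```python
# Minimal notebook setup: resolve the execution role, region, and a default S3 bucket.
import boto3
import sagemaker

session = sagemaker.Session()
role = sagemaker.get_execution_role()    # IAM role attached to the notebook instance
region = boto3.Session().region_name     # region the notebook is running in
bucket = session.default_bucket()        # S3 bucket for training data and model artifacts
print(role, region, bucket)
```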

Dataset

We use the Carvana dataset from the Kaggle competition as the training data for the model. To get the dataset, register or log in to your Kaggle account, create a new API token in the account settings, and place the API key file in the root of your SageMaker environment. After that, !kaggle competitions download carvana-image-masking-challenge -f train.zip and !kaggle competitions download carvana-image-masking-challenge -f train_masks.zip will download the necessary files to your notebook environment. We then unzip the data and upload it to an S3 bucket with the !aws s3 sync command.
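Put together, the notebook cells might look like this (bucket name and local paths are placeholders, and the exact archive layout may differ):

```python
# Jupyter notebook cell: the ! prefix runs shell commands from the Python kernel.
!kaggle competitions download carvana-image-masking-challenge -f train.zip
!kaggle competitions download carvana-image-masking-challenge -f train_masks.zip
!unzip -q train.zip -d data/train
!unzip -q train_masks.zip -d data/train_masks
!aws s3 sync data/ s3://<your-bucket>/carvana/
```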

Script Files used

  1. hpo.py is used for the hyperparameter tuning jobs, where we train the model multiple times with different hyperparameters and search for the best combination based on the loss metric.
  2. training.py is used for the final training of the model with the best hyperparameters found by the tuning job; it also registers debugger and profiler hooks for debugging purposes and for capturing the tensors emitted during training.
  3. inference.py is used to serve the trained model for inference; it handles pre-processing and serialization of the data before it is passed to the model for segmentation, and it can also be used locally.
  4. Note: at the time of writing, the SageMaker endpoint returns an error and cannot make predictions, so I created a new SageMaker instance (ml.g4dn.xlarge, to utilize the GPU) and used the endpoint_local.ipynb notebook to get the inference results.
  5. requirements.txt is used to install the dependencies in the training container; these include Albumentations and a newer version of the torch packages used by the training scripts. The sketch after this list illustrates how these files plug into a SageMaker estimator.
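As a rough illustration (not the exact code used in the notebooks), the scripts above are wired into a SageMaker PyTorch estimator along these lines; the framework version, instance type, and hyperparameter names are assumptions:

```python
# A hedged sketch of wiring the script files into a SageMaker PyTorch estimator.
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="hpo.py",          # swap in training.py for the final training job
    source_dir=".",                # requirements.txt placed here is installed in the container
    role=role,
    framework_version="1.8.1",     # illustrative; use a version compatible with the scripts
    py_version="py36",
    instance_count=1,
    instance_type="ml.g4dn.xlarge",
    hyperparameters={"epochs": 10, "batch-size": 32, "learning-rate": 1e-3},
)
```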

Hyperparameter Tuning

I used the U-Net architecture to create an image segmentation model. The hyperparameter search spaces are the learning rate, the number of epochs, and the batch size. Note that batch sizes of 128 or larger cannot be used, as the GPU may run out of memory during training. Deploy a hyperparameter tuning job on SageMaker and wait for the combination of hyperparameters that produces the best metric; a sketch of such a tuning job is shown below the screenshot.

hyperparameter tuning job
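A minimal sketch of the tuning job, assuming the estimator above and that hpo.py prints a line the objective regex can match (both the metric name and regex are assumptions):

```python
from sagemaker.tuner import (
    HyperparameterTuner,
    ContinuousParameter,
    IntegerParameter,
    CategoricalParameter,
)

# Search spaces described above; batch sizes stay below 128 to avoid GPU out-of-memory errors.
hyperparameter_ranges = {
    "learning-rate": ContinuousParameter(1e-4, 1e-1),
    "epochs": IntegerParameter(2, 10),
    "batch-size": CategoricalParameter([16, 32, 64]),
}

tuner = HyperparameterTuner(
    estimator,
    objective_metric_name="average test loss",
    objective_type="Minimize",
    metric_definitions=[{"Name": "average test loss",
                         "Regex": "Test set: Average loss: ([0-9\\.]+)"}],
    hyperparameter_ranges=hyperparameter_ranges,
    max_jobs=4,
    max_parallel_jobs=2,
)

tuner.fit({"training": "s3://<your-bucket>/carvana/"})
```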

We pick the hyperparameters from the best training job to train the final model.

best job's hyperparameters
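Once the tuning job finishes, the winning combination can also be retrieved programmatically, for example:

```python
import boto3

# Name of the best training job found by the tuner, then its hyperparameters.
best_job_name = tuner.best_training_job()
description = boto3.client("sagemaker").describe_training_job(TrainingJobName=best_job_name)
best_hyperparameters = description["HyperParameters"]
print(best_hyperparameters)
```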

Debugging and Profiling

The debugger hook is set to record the loss criterion during both the training and validation/testing phases; a sketch of the configuration is shown below, followed by the plot of the Dice coefficient.
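A hedged sketch of the hook and profiler configuration (collection names, save intervals, and rules are illustrative):

```python
from sagemaker.debugger import (
    DebuggerHookConfig,
    CollectionConfig,
    ProfilerConfig,
    FrameworkProfile,
    Rule,
    rule_configs,
)

# Record the loss tensors during training and evaluation.
hook_config = DebuggerHookConfig(
    hook_parameters={"train.save_interval": "100", "eval.save_interval": "10"},
    collection_configs=[CollectionConfig(name="losses")],
)

# Sample system and framework metrics while the job runs.
profiler_config = ProfilerConfig(
    system_monitor_interval_millis=500,
    framework_profile_params=FrameworkProfile(num_steps=10),
)

rules = [
    Rule.sagemaker(rule_configs.overfit()),
    Rule.sagemaker(rule_configs.loss_not_decreasing()),
]
# Passed to the final training estimator via debugger_hook_config=hook_config,
# profiler_config=profiler_config, and rules=rules.
```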

Dice Coefficient

We can see that the validation curve is high, which means our model has entered a state of overfitting. We can reduce this by adding dropout or L1/L2 regularization, by adding more varied training data, or by stopping the training early before the model overfits. By adding metric definitions, I was also able to get the average accuracy and loss during the validation phase into AWS CloudWatch (a powerful tool for monitoring metrics of any kind); an illustrative snippet follows the screenshot below.

Metrics
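The metric definitions here are hypothetical; the regexes have to match whatever the training script actually prints during validation:

```python
# Illustrative metric definitions, passed to the estimator as
# metric_definitions=metric_definitions so the values appear as CloudWatch metrics.
metric_definitions = [
    {"Name": "val:accuracy", "Regex": "Validation accuracy: ([0-9.]+)"},
    {"Name": "val:loss", "Regex": "Validation loss: ([0-9.]+)"},
]
```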

Results

The results are pretty good. As I was using an ml.g4dn.xlarge instance to utilize its GPU, neither the hyperparameter tuning jobs nor the final training job took too much time.

Inferencing your data

The SageMaker endpoint returned a 500 status code error, so I used another SageMaker instance with a GPU (ml.g4dn.xlarge); running endpoint_local.ipynb there produces the desired output. A local-inference sketch follows the screenshot below.

Result
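A minimal local-inference sketch in the spirit of endpoint_local.ipynb; the UNet import, the model path, and the input size are assumptions, not the exact code in the notebook:

```python
import numpy as np
import torch
from PIL import Image

from training import UNet  # assumption: the model class is defined in training.py

device = "cuda" if torch.cuda.is_available() else "cpu"
model = UNet().to(device)
model.load_state_dict(torch.load("model.pth", map_location=device))  # placeholder path
model.eval()

# Load and normalize one test image (resize to the resolution used during training).
image = Image.open("sample_car.jpg").convert("RGB").resize((240, 160))
x = torch.from_numpy(np.array(image)).permute(2, 0, 1).float().unsqueeze(0) / 255.0

with torch.no_grad():
    # Binary car/background mask from the sigmoid of the network output.
    mask = (torch.sigmoid(model(x.to(device))) > 0.5).squeeze().cpu().numpy()
```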

Thank You So Much For Your Time! Please don't hesitate to contribute.

Ref: Github repo of neirinzaralwin

Owner
Htin Aung Lu
I am a Machine Learning engineer. I like to work on various machine learning projects. I have more experience with the @AWS @Sagemaker platform than with others.