Kaggle Ultrasound Nerve Segmentation competition [Keras]

Overview

Ultrasound nerve segmentation using Keras (1.0.7)

#Install (Ubuntu {14,16}, GPU)

cuDNN required.

###Theano

In ~/.theanorc

[global]
device = gpu0
[dnn]
enabled = True
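
To confirm that Theano picked up the GPU and cuDNN settings, a quick check (attribute names assume a Theano 0.8/0.9-era release, as used by this project; newer versions differ):

```python
# Sanity check for the ~/.theanorc settings above.
# Attribute names assume a Theano 0.8/0.9-era release; newer versions differ.
import theano

print(theano.config.device)       # expected: gpu0
print(theano.config.dnn.enabled)  # expected: True
```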

###Keras

  • sudo apt-get install libhdf5-dev
  • sudo pip install h5py
  • sudo pip install pydot
  • sudo pip install nose_parameterized
  • sudo pip install keras

In ~/.keras/keras.json (important: the project was developed on the Theano backend, so issues are possible with TensorFlow):

{
    "image_dim_ordering": "th",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "theano"
}
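
A quick way to verify that Keras actually picked up these settings (Keras 1.x backend API; these function names changed in Keras 2):

```python
# Verifies the ~/.keras/keras.json settings above (Keras 1.x API).
from keras import backend as K

print(K.backend())             # expected: theano
print(K.image_dim_ordering())  # expected: th
print(K.floatx())              # expected: float32
```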

###Python deps

  • sudo apt-get install python-opencv
  • sudo apt-get install python-sklearn

#Prepare

Place the train and test data into the '../train' and '../test' folders, respectively.

mkdir np_data
python data.py
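
data.py caches the raw images as NumPy arrays under np_data/. The sketch below only illustrates that kind of preprocessing step; it is not the actual script, and the file names and array layout are assumptions:

```python
# Illustrative preprocessing sketch, NOT the actual data.py: reads the .tif
# images with OpenCV and caches them as a single .npy array per folder.
import glob
import os
import cv2
import numpy as np

def cache_images(src_dir, out_path):
    """Load non-mask .tif files as grayscale and save one stacked uint8 array."""
    paths = sorted(p for p in glob.glob(os.path.join(src_dir, '*.tif'))
                   if 'mask' not in p)
    imgs = np.array([cv2.imread(p, 0) for p in paths], dtype=np.uint8)  # 0 = grayscale
    np.save(out_path, imgs)
    return imgs.shape

if __name__ == '__main__':
    print(cache_images('../train', 'np_data/imgs_train.npy'))
    print(cache_images('../test', 'np_data/imgs_test.npy'))
```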

#Training

Single model training.

python train.py

Results will be generated in the "res/" folder; res/unet.hdf5 is the best model.

Generate submission:

python submission.py

Generate a prediction with the model in res/unet.hdf5:

python current.py

#Model

The motivation is explained in my internal presentation (slides: http://www.slideshare.net/Eduardyantov/ultrasound-segmentation-kaggle-review).

I used a U-Net-like architecture (http://arxiv.org/abs/1505.04597). Main differences:

  • inception blocks instead of VGG-like blocks
  • Conv with stride instead of MaxPooling
  • Dropout, p=0.5
  • skip connections from encoder to decoder layers with residual blocks
  • BatchNorm everywhere
  • two-head training: an auxiliary branch (in the middle of the network) for scoring nerve presence, plus the main segmentation branch
  • ELU activation
  • sigmoid activation in output
  • Adam optimizer, without weight regularization in layers
  • Dice coefficient loss, averaged per batch, without smoothing (see the loss sketch after this list)
  • batch_size=64,128 (for GeForce 1080 and Titan X respectively)
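
A minimal Keras 1.x-style sketch of the Dice loss described in the list above (Dice coefficient computed per sample, then averaged over the batch, without a smoothing term); the exact implementation in train.py may differ:

```python
# Per-batch-averaged Dice loss without a smoothing term (Keras 1.x backend ops).
# The epsilon only guards against empty masks; details in train.py may differ.
from keras import backend as K

def dice_coef(y_true, y_pred):
    # Flatten each sample's mask, compute Dice per sample, then average over the batch.
    y_true_f = K.batch_flatten(y_true)
    y_pred_f = K.batch_flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f, axis=1)
    union = K.sum(y_true_f, axis=1) + K.sum(y_pred_f, axis=1)
    return K.mean(2.0 * intersection / (union + K.epsilon()))

def dice_coef_loss(y_true, y_pred):
    # Negated so that maximizing Dice corresponds to minimizing the loss.
    return -dice_coef(y_true, y_pred)
```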

Augmentation:

  • flip x,y
  • random zoom
  • random channel shift
  • elastic transformation didn't help in this configuration

An augmentation generator (generating augmented data on the fly for each epoch) didn't improve the score. For prediction, augmented images were used.
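
For reference, a minimal NumPy/OpenCV sketch of the flip / zoom / channel-shift augmentations listed above (parameter ranges are illustrative assumptions, not the values used for the final models):

```python
# Illustrative flip / zoom / channel-shift augmentation for one grayscale image
# and its mask (both H x W float32 arrays in [0, 1]).
# Parameter ranges are assumptions, not the project's actual settings.
import numpy as np
import cv2

def random_zoom(img, scale):
    """Zoom about the image centre via an affine warp, keeping the original size."""
    img = np.ascontiguousarray(img)  # flips below can leave negative strides
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), 0, scale)  # angle=0, pure scaling
    return cv2.warpAffine(img, M, (w, h))

def augment(img, mask, zoom_range=0.1, shift_range=0.1, rng=np.random):
    if rng.rand() < 0.5:                              # horizontal flip
        img, mask = img[:, ::-1], mask[:, ::-1]
    if rng.rand() < 0.5:                              # vertical flip
        img, mask = img[::-1, :], mask[::-1, :]
    scale = 1.0 + rng.uniform(-zoom_range, zoom_range)
    img, mask = random_zoom(img, scale), random_zoom(mask, scale)
    img = np.clip(img + rng.uniform(-shift_range, shift_range), 0.0, 1.0)  # channel shift
    return img, mask
```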

Validation:

For some reason, a validation split by patient (which is the proper approach in this competition) didn't work for me, probably due to a bug in the code, so I used a random split.
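
For completeness, a patient-wise split can be expressed with grouped K-fold cross-validation. The snippet below uses the modern sklearn GroupKFold API (older sklearn versions ship the same idea as cross_validation.LabelKFold) and assumes the usual `<patient>_<image>.tif` naming of the training files:

```python
# Sketch of a patient-wise split using grouped K-fold (modern sklearn API).
import glob
import os
import numpy as np
from sklearn.model_selection import GroupKFold

paths = sorted(p for p in glob.glob('../train/*.tif') if 'mask' not in p)
# Train filenames look like "<patient>_<image>.tif", so the patient id is the
# part before the first underscore.
patients = np.array([os.path.basename(p).split('_')[0] for p in paths])

gkf = GroupKFold(n_splits=5)
for train_idx, val_idx in gkf.split(paths, groups=patients):
    print(len(train_idx), len(val_idx))  # no patient appears in both sets
    break
```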

The final prediction uses the probability of nerve presence: p_nerve = (p_score + p_segment)/2, where p_segment is based on the number of output pixels in the mask.
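
A minimal sketch of that combination, assuming a simple pixel-count mapping for p_segment (the threshold and area scaling below are illustrative; the actual logic lives in submission.py):

```python
# Illustrative combination of the two heads for one image.
# pixel_thresh and min_area are assumptions, not the submitted values.
import numpy as np

def nerve_probability(p_score, mask_pred, pixel_thresh=0.5, min_area=200.0):
    """p_score: scalar output of the auxiliary presence head.
    mask_pred: H x W sigmoid output of the segmentation head."""
    area = float((mask_pred > pixel_thresh).sum())
    p_segment = min(area / min_area, 1.0)   # mask size mapped to a [0, 1] score
    return (p_score + p_segment) / 2.0
```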

#Results and technical aspects

  • On a Titan X GPU an epoch took about 6 minutes; training early-stops at 15-30 epochs.
  • For batch_size=64, 6 GB of GPU memory is required.
  • The best single model achieved a 0.694 LB score.
  • An ensemble of 6 different k-fold ensembles (k=5,6,8) scored 0.70399.

#Credits

This code was originally based on https://github.com/jocicmarko/ultrasound-nerve-segmentation/
