U-Net Brain Tumor Segmentation

Overview


🚀 Feb 2019: the data-processing implementation in this repo is not the fastest (the code needs an update; contributions are welcome). You can use the TensorFlow Dataset API instead.

This repo shows you how to train a U-Net for brain tumor segmentation. By default, you need to download the training set of the BraTS 2017 dataset, which has 210 HGG and 75 LGG volumes, and put the data folder alongside all the scripts:

data
  -- Brats17TrainingData
  -- train_dev_all
model.py
train.py
...

About the data

Note that, according to the license, users have to apply for the dataset from BraTS themselves; please do NOT contact me for the dataset. Many thanks.


Fig 1: Brain Image
  • Each volume has 4 scan modalities: FLAIR, T1, T1c and T2.
  • Each volume has 4 segmentation labels (a sketch mapping these labels to the training tasks follows this list):
      Label 0: background
      Label 1: necrotic and non-enhancing tumor
      Label 2: edema
      Label 4: enhancing tumor
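
The mapping from these labels to the binary mask used for each training task can be written as a small helper. This is an illustrative sketch only (the function name and dtype are assumptions, not code from this repo), for a label volume y containing the values {0, 1, 2, 4}:

import numpy as np

def task_mask(y, task='all'):
    # Collapse the BraTS label map into a binary mask for one task.
    if task == 'all':          # any tumor tissue (labels 1, 2 and 4)
        return (y > 0).astype(np.float32)
    if task == 'necrotic':     # label 1
        return (y == 1).astype(np.float32)
    if task == 'edema':        # label 2
        return (y == 2).astype(np.float32)
    if task == 'enhance':      # label 4
        return (y == 4).astype(np.float32)
    raise ValueError('unknown task: %s' % task)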

prepare_data_with_valid.py splits the training set into 2 folds, one for training and one for validation. By default it uses only half of the data for the sake of training speed; if you want to use all of the data, change DATA_SIZE = 'half' to 'all'. A rough sketch of this kind of split follows.
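
For intuition only; the shuffling, fold sizes and function name below are assumptions, not the actual logic of prepare_data_with_valid.py:

import random

def split_volumes(volume_ids, data_size='half', dev_fraction=0.2, seed=42):
    # Illustrative train/dev split (NOT the repo's exact implementation).
    ids = list(volume_ids)
    random.Random(seed).shuffle(ids)
    if data_size == 'half':                    # use only half of the volumes for speed
        ids = ids[:len(ids) // 2]
    n_dev = max(1, int(len(ids) * dev_fraction))
    return ids[n_dev:], ids[:n_dev]            # (train_ids, dev_ids)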

About the method


Fig 2: Data augmentation
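
Each training step applies the same random transform to the four modalities and the label so that they stay aligned. The sketch below uses the TensorLayer 1.x prepro multi-image API; only the zoom_multi call is taken from the repo's train.py (it appears verbatim in the tracebacks in the comments below), while the other ops and their parameters are illustrative. Note that later TensorLayer releases removed the is_random keyword, which is the cause of the zoom_multi errors reported in the comments.

import tensorlayer as tl

def distort_imgs(data):
    # Apply one random transform jointly to FLAIR, T1, T1c, T2 and the label.
    x1, x2, x3, x4, y = data
    x1, x2, x3, x4, y = tl.prepro.flip_axis_multi(
        [x1, x2, x3, x4, y], axis=1, is_random=True)              # random left/right flip (illustrative)
    x1, x2, x3, x4, y = tl.prepro.rotation_multi(
        [x1, x2, x3, x4, y], rg=20, is_random=True,
        fill_mode='constant')                                     # random rotation (illustrative)
    x1, x2, x3, x4, y = tl.prepro.zoom_multi(
        [x1, x2, x3, x4, y], zoom_range=[0.9, 1.1],
        is_random=True, fill_mode='constant')                     # as in train.py
    return x1, x2, x3, x4, y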

Start training

We train HGG and LGG together. Since one network handles only one task, set the task to all, necrotic, edema or enhance; "all" means learning to segment all tumor classes at once.

python train.py --task=all

Note that if the loss sticks at 1 at the beginning, the network is not converging; please try restarting the training. (The sketch below shows why a dice-style loss of 1 means the prediction has no overlap with the ground truth.)
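
A soft dice loss is 1 minus the overlap between prediction and ground truth, so a value stuck at 1 means the prediction has essentially no overlap with the target. The numpy version below is for intuition only and is not necessarily the exact loss used in train.py.

import numpy as np

def soft_dice_loss(pred, target, smooth=1e-5):
    # 1 - 2*|P ∩ T| / (|P| + |T|): 0 for a perfect match, close to 1 for no overlap.
    inse = np.sum(pred * target)
    return 1.0 - (2.0 * inse + smooth) / (np.sum(pred) + np.sum(target) + smooth)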

Citation

If you find this project useful, we would be grateful if you cite the TensorLayer paper:

@article{tensorlayer2017,
author = {Dong, Hao and Supratak, Akara and Mai, Luo and Liu, Fangde and Oehmichen, Axel and Yu, Simiao and Guo, Yike},
journal = {ACM Multimedia},
title = {{TensorLayer: A Versatile Library for Efficient Deep Learning Development}},
url = {http://tensorlayer.org},
year = {2017}
}
Comments
  • TypeError: zoom_multi() got an unexpected keyword argument 'is_random'

    Lossy conversion from float64 to uint8. Range [-0.18539370596408844, 2.158207416534424]. Convert image to uint8 prior to saving to suppress this warning.

    Traceback (most recent call last):
      File "train.py", line 250, in <module>
        main(args.task)
      File "train.py", line 106, in main
        X[:,:,2,np.newaxis], X[:,:,3,np.newaxis], y])#[:,:,np.newaxis]])
      File "train.py", line 26, in distort_imgs
        fill_mode='constant')
    TypeError: zoom_multi() got an unexpected keyword argument 'is_random'

    opened by shenzeqi 8
  • MemoryError

    @zsdonghao I am getting the memory error like this, What is the solution for this error?

    Traceback (most recent call last):
      File "train.py", line 279, in <module>
        main(args.task)
      File "train.py", line 78, in main
        y_test = (y_test > 0).astype(int)
    MemoryError

    opened by PoonamZ 4
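
    A possible workaround (an assumption, not the maintainer's fix): (y_test > 0).astype(int) materializes a 64-bit integer array, so casting to uint8 instead needs roughly one eighth of the memory.

    import numpy as np

    def binarize_labels(y):
        # Same binary mask as (y > 0).astype(int), but stored as uint8 (~1/8 the memory).
        return (y > 0).astype(np.uint8)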
  • Error: Your CPU supports instructions that TensorFlow binary not compiled to use: AVX2

    I am running run.py but it gives an error:

    (base) G:>cd BraTS_2018_U-Net-master

    (base) G:\BraTS_2018_U-Net-master>run.py
    [*] creates checkpoint ...
    [*] creates samples/all ...
    finished Brats18_2013_24_1
    2019-06-15 22:05:45.959220: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
    Traceback (most recent call last):
      File "G:\BraTS_2018_U-Net-master\run.py", line 154, in <module>
      File "G:\BraTS_2018_U-Net-master\run.py", line 117, in main
        t_seg = tf.placeholder('float32', [1, nw, nh, 1], name='target_segment')
    NameError: name 'model' is not defined

    opened by sapnii2 2
  • TypeError: __init__() got an unexpected keyword argument 'out_size'

    • After conv: Tensor("u_net/conv8/leaky_relu:0", shape=(5, 1, 1, 512), dtype=float32, device=/device:CPU:0)

      [screenshot from 2019-02-19 18-02-42]

      Traceback (most recent call last):
        File "train.py", line 250, in <module>
          main(args.task)
        File "train.py", line 121, in main
          net = model.u_net_bn(t_image, is_train=True, reuse=False, n_out=1)
        File "/home/achi/project/u-net-brain-tumor-master/model.py", line 179, in u_net_bn
          padding=pad, act=None, batch_size=batch_size, W_init=w_init, b_init=b_init, name='deconv7')
        File "/home/achi/anaconda3/lib/python3.6/site-packages/tensorlayer/decorators/deprecated_alias.py", line 24, in wrapper
          return f(*args, **kwargs)
      TypeError: __init__() got an unexpected keyword argument 'out_size'
    opened by achintacsgit 1
  • Pre-trained model

    I was wondering if you would share a pre-trained model. I would need to run inference-only, and training the model is taking longer than expected.

    Thanks for sharing this project!

    opened by luisremis 1
  • TypeError: zoom_multi() got an unexpected keyword argument 'is_random'

    [TL] [!] checkpoint exists ...
    [TL] [!] samples/all exists ...
    Lossy conversion from float64 to uint8. Range [-0.19753389060497284, 2.826017379760742]. Convert image to uint8 prior to saving to suppress this warning.

    TypeError                                 Traceback (most recent call last)
    in
        239     tl.files.save_npz(net.all_params, name=save_dir+'/u_net_{}.npz'.format(task), sess=sess)
        240
    --> 241 main(task='all')
        242
        243 ##if __name__ == "__main__":

    in main(task)
        103     for i in range(10):
        104         x_flair, x_t1, x_t1ce, x_t2, label = distort_imgs([X[:,:,0,np.newaxis], X[:,:,1,np.newaxis],
    --> 105             X[:,:,2,np.newaxis], X[:,:,3,np.newaxis], y])#[:,:,np.newaxis]])
        106     # print(x_flair.shape, x_t1.shape, x_t1ce.shape, x_t2.shape, label.shape) # (240, 240, 1) (240, 240, 1) (240, 240, 1) (240, 240, 1) (240, 240, 1)
        107     X_dis = np.concatenate((x_flair, x_t1, x_t1ce, x_t2), axis=2)

    in distort_imgs(data)
         23     x1, x2, x3, x4, y = tl.prepro.zoom_multi([x1, x2, x3, x4, y],
         24         zoom_range=[0.9, 1.1], is_random=True,
    ---> 25         fill_mode='constant')
         26     return x1, x2, x3, x4, y
         27

    TypeError: zoom_multi() got an unexpected keyword argument 'is_random'

    opened by BTapan 0
  • TensorFlow Implementation

    Do you have an implementation of the brain tumor segmentation code directly in TensorFlow, without using TensorLayer? If yes, could you share it? Thank you.

    opened by rupalkapdi 0
  • What is checkpoint?

    When I run "python train.py", a checkpoint folder is created. What is the purpose of the checkpoint folder? Thank you.

    And I also have another question: when we get the picture shown below, is that the end result? I mean, can we submit it to the BraTS 2018 challenge? [image]

    Thank you very much.

    opened by tphankr 0
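
    For what it's worth: the checkpoint folder holds the network weights that train.py saves with tl.files.save_npz (visible in the traceback of the zoom_multi issue above). A rough restore sketch under the TensorLayer 1.x / TensorFlow 1.x API follows; the model constructor and 240x240 input size are taken from the tracebacks in these comments, while the input channel count and checkpoint filename are assumptions.

    import tensorflow as tf
    import tensorlayer as tl
    import model  # the repo's model.py

    nw, nh = 240, 240
    t_image = tf.placeholder('float32', [1, nw, nh, 4], name='input_image')  # 4 modalities (assumed)
    net = model.u_net_bn(t_image, is_train=False, reuse=False, n_out=1)      # constructor as seen in the tracebacks

    sess = tf.InteractiveSession()
    tl.layers.initialize_global_variables(sess)
    # Filename is an assumption; train.py saves u_net_{task}.npz into the checkpoint folder.
    tl.files.load_and_assign_npz(sess=sess, name='checkpoint/u_net_all.npz', network=net)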
  • Making sense

    Novice here. I noticed the shape of the X_train arrays ends with 4, i.e. (240, 240, 4). Does each of those channels represent a scan type (T1, T2, FLAIR, T1ce)?

    opened by guido-niku 1
  • Classification Layer - Activation & Shape?

    Hi!

    I went through this repository after reading your paper. The architecture on page 6 shows the final classification layer producing feature maps of shape (240, 240, 2), which may indicate the use of a softmax activation (not specified in the paper). In contrast, the model used in the code has a classification layer of shape (240, 240, 1) with a sigmoid activation.

    Kindly clarify this ambiguity.

    opened by stalhabukhari 2
Releases (0.1)
Owner
Hao
Assistant Professor @ Peking University