U-Net Brain Tumor Segmentation

Overview

🚀 Feb 2019: the data-processing implementation in this repo is not the fastest (the code needs an update; contributions are welcome). You can use the TensorFlow Dataset API instead.

This repo shows you how to train a U-Net for brain tumor segmentation. By default, you need to download the training set of the BRATS 2017 dataset, which has 210 HGG and 75 LGG volumes, and place the data folder alongside all scripts.

data
  -- Brats17TrainingData
  -- train_dev_all
model.py
train.py
...

About the data

Note that, according to the license, users have to apply for the dataset from BraTS themselves; please do NOT contact me for the dataset. Many thanks.


Fig 1: Brain Image
  • Each volume has 4 scanning modalities: FLAIR, T1, T1c and T2.
  • Each volume has 4 segmentation labels (a sketch of how these labels map to the training tasks follows below):
Label 0: background
Label 1: necrotic and non-enhancing tumor
Label 2: edema
Label 4: enhancing tumor
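
The --task flag used in "Start training" collapses these labels into a single binary target per network. A minimal sketch of that mapping, assuming NumPy label arrays; the helper name build_target and the exact grouping are illustrative, not the repo's actual code:

import numpy as np

def build_target(label, task='all'):
    # Collapse the BraTS label map into one binary mask per task.
    # 'all' covers every tumor label (1, 2 and 4); the others pick a single sub-region.
    if task == 'all':
        mask = label > 0
    elif task == 'necrotic':
        mask = label == 1   # necrotic and non-enhancing tumor
    elif task == 'edema':
        mask = label == 2
    elif task == 'enhance':
        mask = label == 4   # enhancing tumor
    else:
        raise ValueError('unknown task: %s' % task)
    return mask.astype(np.float32)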

prepare_data_with_valid.py splits the training set into 2 folds, one for training and one for validation. By default, it uses only half of the data for the sake of training speed; if you want to use all data, just change DATA_SIZE = 'half' to 'all'.
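
A rough sketch of the DATA_SIZE switch described above; the variable name comes from the text, while the surrounding loading code is only an assumption about how the volume lists might be built:

import glob

DATA_SIZE = 'half'  # change to 'all' to use every volume

hgg_paths = sorted(glob.glob('data/Brats17TrainingData/HGG/*'))
lgg_paths = sorted(glob.glob('data/Brats17TrainingData/LGG/*'))

if DATA_SIZE == 'half':
    # keep only the first half of each cohort to speed up training
    hgg_paths = hgg_paths[:len(hgg_paths) // 2]
    lgg_paths = lgg_paths[:len(lgg_paths) // 2]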

About the method


Fig 2: Data augmentation
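
The augmentation is applied jointly to the four modalities and the label so that images and mask stay aligned. A simplified sketch in the spirit of the repo's distort_imgs in train.py, written against the TensorLayer 1.x prepro API (newer TensorLayer versions changed these signatures, see the zoom_multi issues in the comments below); the exact set of transforms and parameters is illustrative:

import tensorlayer as tl

def distort_imgs_sketch(data):
    # data = [flair, t1, t1ce, t2, label], each of shape (240, 240, 1)
    x1, x2, x3, x4, y = data
    x1, x2, x3, x4, y = tl.prepro.flip_axis_multi([x1, x2, x3, x4, y], axis=1, is_random=True)
    x1, x2, x3, x4, y = tl.prepro.rotation_multi([x1, x2, x3, x4, y], rg=20, is_random=True, fill_mode='constant')
    x1, x2, x3, x4, y = tl.prepro.shift_multi([x1, x2, x3, x4, y], wrg=0.10, hrg=0.10, is_random=True, fill_mode='constant')
    x1, x2, x3, x4, y = tl.prepro.zoom_multi([x1, x2, x3, x4, y], zoom_range=[0.9, 1.1], is_random=True, fill_mode='constant')
    return x1, x2, x3, x4, y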

Start training

We train HGG and LGG together. Since one network has only one task, set the task to all, necrotic, edema or enhance; "all" means learning to segment all tumor regions at once.

python train.py --task=all
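
To train a network for a single tumor sub-region instead, pass one of the other task names, for example:

python train.py --task=necrotic
python train.py --task=edema
python train.py --task=enhance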

Note that if the loss sticks at 1 at the beginning, the network has failed to start converging; please restart the training.
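
A loss that sits at 1 is consistent with a soft dice loss, where 1 means the prediction does not overlap the ground truth at all. A minimal NumPy illustration of that quantity, assuming a dice-style objective (the repo's actual cost function may differ in detail):

import numpy as np

def soft_dice_loss(pred, target, eps=1e-5):
    # 0 for a perfect match, close to 1 when prediction and target do not overlap
    inter = np.sum(pred * target)
    union = np.sum(pred * pred) + np.sum(target * target)
    return 1.0 - (2.0 * inter + eps) / (union + eps)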

Citation

If you find this project useful, we would be grateful if you cite the TensorLayer paper:

@article{tensorlayer2017,
author = {Dong, Hao and Supratak, Akara and Mai, Luo and Liu, Fangde and Oehmichen, Axel and Yu, Simiao and Guo, Yike},
journal = {ACM Multimedia},
title = {{TensorLayer: A Versatile Library for Efficient Deep Learning Development}},
url = {http://tensorlayer.org},
year = {2017}
}
Comments
  • TypeError: zoom_multi() got an unexpected keyword argument 'is_random'

    Lossy conversion from float64 to uint8. Range [-0.18539370596408844, 2.158207416534424]. Convert image to uint8 prior to saving to suppress this warning.

    Traceback (most recent call last):
      File "train.py", line 250, in <module>
        main(args.task)
      File "train.py", line 106, in main
        X[:,:,2,np.newaxis], X[:,:,3,np.newaxis], y])#[:,:,np.newaxis]])
      File "train.py", line 26, in distort_imgs
        fill_mode='constant')
    TypeError: zoom_multi() got an unexpected keyword argument 'is_random'

    opened by shenzeqi 8
  • MemoryError

    @zsdonghao I am getting a memory error like this. What is the solution for this error?

    Traceback (most recent call last):
      File "train.py", line 279, in <module>
        main(args.task)
      File "train.py", line 78, in main
        y_test = (y_test > 0).astype(int)
    MemoryError

    opened by PoonamZ 4
  • Error: Your CPU supports instructions that TensorFlow binary not compiled to use: AVX2

    I am running run.py but it gives an error:

    (base) G:\>cd BraTS_2018_U-Net-master

    (base) G:\BraTS_2018_U-Net-master>run.py
    [*] creates checkpoint ...
    [*] creates samples/all ...
    finished Brats18_2013_24_1
    2019-06-15 22:05:45.959220: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
    Traceback (most recent call last):
      File "G:\BraTS_2018_U-Net-master\run.py", line 154, in <module>
      File "G:\BraTS_2018_U-Net-master\run.py", line 117, in main
        t_seg = tf.placeholder('float32', [1, nw, nh, 1], name='target_segment')
    NameError: name 'model' is not defined

    opened by sapnii2 2
  • TypeError: __init__() got an unexpected keyword argument 'out_size'

    After conv: Tensor("u_net/conv8/leaky_relu:0", shape=(5, 1, 1, 512), dtype=float32, device=/device:CPU:0)
    Traceback (most recent call last):
      File "train.py", line 250, in <module>
        main(args.task)
      File "train.py", line 121, in main
        net = model.u_net_bn(t_image, is_train=True, reuse=False, n_out=1)
      File "/home/achi/project/u-net-brain-tumor-master/model.py", line 179, in u_net_bn
        padding=pad, act=None, batch_size=batch_size, W_init=w_init, b_init=b_init, name='deconv7')
      File "/home/achi/anaconda3/lib/python3.6/site-packages/tensorlayer/decorators/deprecated_alias.py", line 24, in wrapper
        return f(*args, **kwargs)
    TypeError: __init__() got an unexpected keyword argument 'out_size'
    opened by achintacsgit 1
  • Pre-trained model

    I was wondering if you would share a pre-trained model. I would need to run inference-only, and training the model is taking longer than expected.

    Thanks for sharing this project!

    opened by luisremis 1
  • TypeError: zoom_multi() got an unexpected keyword argument 'is_random'

    [TL] [!] checkpoint exists ...
    [TL] [!] samples/all exists ...
    Lossy conversion from float64 to uint8. Range [-0.19753389060497284, 2.826017379760742]. Convert image to uint8 prior to saving to suppress this warning.

    TypeError                                 Traceback (most recent call last)
    <ipython-input> in <module>
        239     tl.files.save_npz(net.all_params, name=save_dir+'/u_net_{}.npz'.format(task), sess=sess)
        240
    --> 241 main(task='all')
        242
        243 ##if __name__ == "__main__":

    <ipython-input> in main(task)
        103     for i in range(10):
        104         x_flair, x_t1, x_t1ce, x_t2, label = distort_imgs([X[:,:,0,np.newaxis], X[:,:,1,np.newaxis],
    --> 105             X[:,:,2,np.newaxis], X[:,:,3,np.newaxis], y])#[:,:,np.newaxis]])
        106     # print(x_flair.shape, x_t1.shape, x_t1ce.shape, x_t2.shape, label.shape) # (240, 240, 1) (240, 240, 1) (240, 240, 1) (240, 240, 1) (240, 240, 1)
        107     X_dis = np.concatenate((x_flair, x_t1, x_t1ce, x_t2), axis=2)

    <ipython-input> in distort_imgs(data)
         23     x1, x2, x3, x4, y = tl.prepro.zoom_multi([x1, x2, x3, x4, y],
         24         zoom_range=[0.9, 1.1], is_random=True,
    ---> 25         fill_mode='constant')
         26     return x1, x2, x3, x4, y
         27

    TypeError: zoom_multi() got an unexpected keyword argument 'is_random'

    opened by BTapan 0
  • TensorFlow Implementation

    Do you have an implementation of the brain tumor segmentation code directly in TensorFlow, without using TensorLayer? If yes, can you share it? Thank you.

    opened by rupalkapdi 0
  • What is checkpoint?

    When I run "python train.py", a checkpoint folder is created. What is the function of the checkpoint folder? Thank you.

    I also have another question. When we get the picture (attached), is that the end result? I mean, can we submit it to the BraTS 2018 challenge?

    Thank you very much.

    opened by tphankr 0
  • Making sense

    Novice here. I noticed the shape of the X_train arrays ends with 4, i.e. (240, 240, 4). Does each of those channels represent the type of scan (T1, T2, FLAIR, T1ce)?

    opened by guido-niku 1
  • Classification Layer - Activation & Shape?

    Hi!

    I went through this repository after reading your paper. The architecture on page 6 shows the final classification layer producing feature maps of shape (240, 240, 2), which may indicate the use of a softmax activation (not specified in the paper). In contrast, the model used in the code has a classification layer of shape (240, 240, 1) with a sigmoid activation.

    Kindly clarify this ambiguity.

    opened by stalhabukhari 2