LinkNet - This repository contains the Torch7 implementation of LinkNet, the network developed at e-Lab.

Overview

LinkNet

This repository contains our Torch7 implementation of LinkNet, the network we developed at e-Lab. You can go to our blog post or read the paper LinkNet: Exploiting Encoder Representations for Efficient Semantic Segmentation for further details.

Dependencies:

  • Torch7 : you can follow our installation steps specified here
  • VideoDecoder : video decoder for Torch that utilizes the avcodec library.
  • Profiler : use it to calculate the # of parameters, operations, and forward-pass time of any network trained using Torch (a minimal parameter-count sketch follows this list).
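
The profiler's parameter count can also be sanity-checked with plain torch/nn calls. The following is only a minimal sketch, not the e-Lab profiler itself, and the layer sizes are arbitrary placeholders:

    -- count the learnable parameters of a small example network
    require 'nn'

    local net = nn.Sequential()
       :add(nn.SpatialConvolution(3, 64, 7, 7, 2, 2, 3, 3))
       :add(nn.ReLU(true))

    local params = net:getParameters()
    print(('# of parameters: %d'):format(params:nElement()))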

Currently the network can be trained on two datasets:

Datasets         Input Resolution   # of classes
CamVid (cv)      768x576            11
Cityscapes (cs)  1024x512           19

To download both datasets, follow the links provided above. Both datasets are first resized by the training script; if you want, you can cache this resized data using the --cachepath option. In the case of the CamVid dataset, the available video data is first split into train/validation/test sets. This is done by prepCamVid.lua, and dataDistributionCV.txt contains the details of the split. These steps are run automatically before the network is trained.

LinkNet performance on both of the above datasets:

Datasets     Best IoU   Best iIoU
Cityscapes   76.44      60.78
CamVid       69.10      55.83

Pretrained models and confusion matrices for both datasets can be found in the latest release.

Files/folders and their usage:

  • main.lua : main file
  • opts.lua : contains all the input options used by the training script
  • data : data loaders for loading datasets
  • models : all the model architectures are defined here
  • train.lua : loading of models and error calculation
  • test.lua : calculate testing error and save confusion matrices

There are three model files present in the models folder:

  • model.lua : our LinkNet architecture
  • model-res-dec.lua : LinkNet with a residual connection in each of the decoder blocks. This slightly improves the results, but it requires bilinear interpolation in the residual connection, which prevented us from running the trained model on the TX1.
  • nobypass.lua : this architecture does not use any link between the encoder and decoder. You can use this model to verify whether connecting the encoder and decoder modules actually improves performance (a minimal sketch of such a link is given after this list).
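
To illustrate what such a link looks like, here is a minimal, hedged sketch of one encoder-decoder pair joined by an additive skip connection, in the spirit of LinkNet's links; it is not the exact wiring of model.lua, and the layer sizes are illustrative assumptions:

    -- one strided encoder block and one matching decoder block; the block input
    -- is kept on a side branch and added onto the decoder output
    require 'nn'

    local function linkedPair(nIn, nOut)
       local encoder = nn.Sequential()
          :add(nn.SpatialConvolution(nIn, nOut, 3, 3, 2, 2, 1, 1))
          :add(nn.ReLU(true))
       local decoder = nn.Sequential()
          :add(nn.SpatialFullConvolution(nOut, nIn, 3, 3, 2, 2, 1, 1, 1, 1))
          :add(nn.ReLU(true))

       return nn.Sequential()
          :add(nn.ConcatTable()
                 :add(nn.Sequential():add(encoder):add(decoder))
                 :add(nn.Identity()))
          :add(nn.CAddTable())
    end

    local block = linkedPair(64, 128)
    print(block:forward(torch.randn(1, 64, 32, 32)):size())  -- 1x64x32x32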

A sample command to train the network is given below:

th main.lua --datapath /Datasets/Cityscapes/ --cachepath /dataCache/cityscapes/ --dataset cs --model models/model.lua --save /Models/cityscapes/ --saveTrainConf --saveAll --plot

License

This software is released under a Creative Commons license that allows personal and research use only. For a commercial license, please contact the authors. You can view a license summary here: http://creativecommons.org/licenses/by-nc/4.0/

Comments
  • memory consuming

    The model reads the entire dataset into memory, which consumes too much memory. Maybe it would be better to read a list of the dataset files and iterate over it during training.

    opened by mingminzhen 7
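
    A minimal Lua sketch of the list-based loading suggested in this issue (this is not code from the repository; the file name 'trainList.txt' and its one-path-per-line format are assumptions):

    -- keep only a list of image paths in memory and decode images on demand
    require 'image'

    local files = {}
    for line in io.lines('trainList.txt') do
       table.insert(files, line)
    end

    for epoch = 1, 10 do
       for i = 1, #files do
          local img = image.load(files[i], 3, 'float')  -- loaded when needed
          -- the forward/backward pass on img would go here
       end
    end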
  • Training on camvid dataset

    Hi. I can't reproduce your result on the CamVid dataset. What learning rate and number of training epochs did you use, and is your published result on the validation or test set?

    opened by vietdoan 4
  • Torch: not enough memory (17GB)

    Hi, all

    When I run : th main.lua --datapath /data2/cityscapes_dataset/leftImg8bit/all_train_images/ --cachepath /data2/cityscapes_dataset/leftImg8bit/dataCache/ --dataset cs --model models/model.lua --save save_models/cityscapes/ --saveTrainConf --saveAll --plot

    I got "Torch: not enough memory: you tried to allocate 17GB" error (details)

    It's strange because the paper mentions training on a Titan X, which has 12GB of memory. Why does the network consume 17GB when running?

    Any suggestion to fix this issue?

    Thanks!

    opened by amiltonwong 3
  • Fine Tuning

    Hi,

    Is there any possibility of fine-tuning this model on a custom dataset with a different number of classes? As far as I know, the pre-trained weights must also exist.

    opened by MyVanitar 3
  • Model input/output details?

    Hi,

    I'm having a hell of a time trying to understand what the model is expecting in terms of input and output. I'm trying to use this model in an iOS project, so I need to convert the model to Apple's CoreML format.

    Image input questions:

    • For image pixel values: 0-255, 0-1, -1-1?
    • RGB or BGR?
    • Color bias?

    Prediction output:

    • Looks like the shape is # of classes, width, height?
    • Predictions are positive floats from 0-100?

    So far I'm having the best luck with these specifications:

    import torch
    from torch2coreml import convert
    from torch.utils.serialization import load_lua
    
    model = load_lua("model-cs-IoU-cpu.net")
    
    input_shape = (3, 512, 1024)
    coreml_model = convert(
            model,
            [input_shape],
            input_names=['inputImage'],
            output_names=['outputImage'],
            image_input_names=['inputImage'],
            preprocessing_args={
                'image_scale': 2/255.0
            }
        )
    coreml_model.save("/home/sean/Downloads/Final/model-cs-IoU.mlmodel")
    
    opened by seantempesta 2
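
    Before fixing the CoreML conversion settings, the model's expected input range and output shape can be checked directly in Torch7. A hedged sketch, using the release file name mentioned above; the 0-1 input range here is an assumption to verify, not a confirmed spec:

    -- load the published CPU model and inspect its output for a Cityscapes-sized input
    require 'nn'

    local net = torch.load('model-cs-IoU-cpu.net')
    net:float()       -- make sure weights and input use the same (float) type
    net:evaluate()

    local input = torch.FloatTensor(1, 3, 512, 1024):uniform(0, 1)
    local output = net:forward(input)
    print(output:size())               -- batch x #classes x height x width
    print(output:min(), output:max())  -- inspect the score range instead of assuming it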
  • About IoU

    Hi, @codeAC29
    I cannot obtain the high IoU in my training. I looked into your code and found that the IoU is computed via averageValid, but this actually computes the mean class accuracy. The IoU should be the value of averageUnionValid. Did you notice the difference, and did you obtain the 76% IoU with averageUnionValid?

    Sorry for the trouble. For convenience, I have referenced the definitions of averageValid and averageUnionValid here.

    opened by qqning 2
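
    For reference, a hedged Lua sketch of the metric difference described above, using the ConfusionMatrix from the optim package; the class names and toy predictions are made up:

    -- averageValid is mean per-class accuracy; averageUnionValid is mean IoU
    require 'optim'

    local conf = optim.ConfusionMatrix({'road', 'building', 'car'})
    conf:add(1, 1)  -- add(prediction, target): 'road' correct
    conf:add(2, 2)  -- 'building' correct
    conf:add(1, 3)  -- 'car' misclassified as 'road'
    conf:updateValids()

    print('mean class accuracy:', conf.averageValid)      -- 0.667 here
    print('mean IoU:           ', conf.averageUnionValid) -- 0.500 here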
  • Error while running linknet main file

    Hi, I am getting this error while running main.py: RuntimeError: Expected object of type torch.cuda.LongTensor but found type torch.cuda.FloatTensor for argument 2 'target'. Please help me out. Also, when I try to run the trained models I run into an error. I am using PyTorch to load the .net files, but I am not able to load them because of the error: name cs is not defined. It is a model; why does it have a variable named cs in it (here cs represents Cityscapes)?

    opened by Tharun98 0
  • Model fails for input sizes other than multiples of 32 (for depth of 4)

    Hi, if we give an input image size that is not a multiple of 32, there is a size-mismatch error when adding the output from encoder3 and decoder4. For example, for an input image size of 1000x2000, the output of encoder3 is 63x125 and the decoder4 output size is 64x126. The adjust parameters for the SpatialFullConvolution layers are fixed values that only work if the input image size is a multiple of 2^(n+1), where n is the encoder depth; for other image sizes the adjust parameter depends on the image size. In this example the network works if the adjust parameter is zero in decoders 3 and 4. Please clarify whether this network works only for 2^(n+1) sizes. Thanks.

    opened by Tharun98 1
  • How about the image resolution?

    Hi, I am reproducing LinkNet. I have a doubt about the input and output image resolutions you used when computing the FLOPS. I find that my FLOPS and running speed differ from the results reported in your paper.

    opened by ycszen 5
  • LinkNet architecture

    I am trying to build LinkNet in Caffe. Could you please help me with the questions below: 1) I found that there are 5 downsamplings and 6 upsamplings by a factor of 2; if the numbers of upsampling and downsampling stages differ (6 vs. 5), how can we get the same output shape as the input? (Reference: https://arxiv.org/pdf/1707.03718.pdf) 2) How many iterations did you run to get proper results? 3) To match the encoder and decoder output shapes, I used a crop layer before Eltwise instead of adding an extra row or column. Will this make any difference?

    opened by vishnureghu007 7
  • Error while training

    I got the CamVid dataset as specified in the readme file and installed video-decoder.

    I entered the following command to start training: th main.lua --datapath ./data/CamVid/ --cachepath ./dataCache/CamV/ --dataset cv --model ./models/model.lua --save ./Models/CamV/ --saveTrainConf --saveAll --plot

    And I got the following error,

    Preparing CamVid dataset for data loader
    Filenames and their role found in: ./misc/dataDistributionCV.txt

    Getting input images and labels for: 01TP_extract.avi
    /home/jayp/torch/install/bin/luajit: /home/jayp/torch/install/share/lua/5.1/trepl/init.lua:389: /home/jayp/torch/install/share/lua/5.1/trepl/init.lua:389: error loading module 'libvideo_decoder' from file '/home/jayp/torch/install/lib/lua/5.1/libvideo_decoder.so':
    /home/jayp/torch/install/lib/lua/5.1/libvideo_decoder.so: undefined symbol: avcodec_get_frame_defaults
    stack traceback:
        [C]: in function 'error'
        /home/jayp/torch/install/share/lua/5.1/trepl/init.lua:389: in function 'require'
        main.lua:34: in main chunk
        [C]: in function 'dofile'
        ...jayp/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk

    I would really appreciate if anyone would help me with this.

    Thank You!

    opened by jay98 4
Releases: v1.0