FastFCN: Rethinking Dilated Convolution in the Backbone for Semantic Segmentation.


[Project] [Paper] [arXiv] [Home]

Official implementation of FastFCN: Rethinking Dilated Convolution in the Backbone for Semantic Segmentation.
A Faster, Stronger and Lighter framework for semantic segmentation, achieving state-of-the-art performance with more than 3x acceleration.

@inproceedings{wu2019fastfcn,
  title     = {FastFCN: Rethinking Dilated Convolution in the Backbone for Semantic Segmentation},
  author    = {Wu, Huikai and Zhang, Junge and Huang, Kaiqi and Liang, Kongming and Yu, Yizhou},
  booktitle = {arXiv preprint arXiv:1903.11816},
  year = {2019}
}

Contact: Hui-Kai Wu ([email protected])

Update

2020-04-15: Now supports inference on a single image!

CUDA_VISIBLE_DEVICES=0,1,2,3 python -m experiments.segmentation.test_single_image --dataset [pcontext|ade20k] \
    --model [encnet|deeplab|psp] --jpu [JPU|JPU_X] \
    --backbone [resnet50|resnet101] [--ms] --resume {MODEL} --input-path {INPUT} --save-path {OUTPUT}

2020-04-15: A new joint upsampling module is now available!

  • --jpu [JPU|JPU_X]: JPU is the original module in the arXiv paper; JPU_X is a pyramid version of JPU.

2020-02-20: FastFCN now runs on any OS with PyTorch >= 1.1.0 and Python 3.

  • Replaced all C/C++ extensions with pure Python implementations.

Version

  1. Original code, producing the results reported in the arXiv paper. [branch:v1.0.0]
  2. Pure PyTorch code, with torch.nn.DistributedDataParallel and torch.nn.SyncBatchNorm. [branch:latest]
  3. Pure Python code. [branch:master]

Overview

Framework

Joint Pyramid Upsampling (JPU)
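
JPU fuses the last three feature maps of the backbone (output strides 8, 16 and 32): each map is projected to a common width, the coarser ones are upsampled to the finest resolution, and parallel dilated convolutions run over the concatenated result to approximate the high-resolution features that a dilated backbone would produce. The snippet below is a minimal sketch of this idea, not the official module (which uses separable convolutions and differs in details); the class name, channel widths and dilation rates are illustrative.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SimpleJPU(nn.Module):
        """Sketch of joint pyramid upsampling over the last three backbone stages."""
        def __init__(self, in_channels=(512, 1024, 2048), width=512):
            super().__init__()
            # Project each backbone feature map to a common width.
            self.convs = nn.ModuleList([
                nn.Sequential(nn.Conv2d(c, width, 3, padding=1, bias=False),
                              nn.BatchNorm2d(width), nn.ReLU(inplace=True))
                for c in in_channels])
            # Parallel dilated convolutions over the fused features.
            self.dilated = nn.ModuleList([
                nn.Sequential(nn.Conv2d(3 * width, width, 3, padding=d, dilation=d, bias=False),
                              nn.BatchNorm2d(width), nn.ReLU(inplace=True))
                for d in (1, 2, 4, 8)])

        def forward(self, c3, c4, c5):
            feats = [conv(c) for conv, c in zip(self.convs, (c3, c4, c5))]
            h, w = feats[0].shape[2:]
            # Upsample the coarser maps (strides 16 and 32) to the stride-8 resolution.
            feats[1:] = [F.interpolate(f, size=(h, w), mode='bilinear', align_corners=True)
                         for f in feats[1:]]
            x = torch.cat(feats, dim=1)
            # Concatenate the outputs of the parallel dilated branches.
            return torch.cat([branch(x) for branch in self.dilated], dim=1)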

Install

  1. PyTorch >= 1.1.0 (Note: the code is tested in an environment with python=3.6 and cuda=9.0)
  2. Download FastFCN
    git clone https://github.com/wuhuikai/FastFCN.git
    cd FastFCN
    
  3. Install Requirements
    nose
    tqdm
    scipy
    cython
    requests
    

Train and Test

PContext

python -m scripts.prepare_pcontext
| Method | Backbone | mIoU | FPS | Model | Scripts |
|---|---|---|---|---|---|
| EncNet | ResNet-50 | 49.91 | 18.77 | | |
| EncNet+JPU (ours) | ResNet-50 | 51.05 | 37.56 | GoogleDrive | bash |
| PSP | ResNet-50 | 50.58 | 18.08 | | |
| PSP+JPU (ours) | ResNet-50 | 50.89 | 28.48 | GoogleDrive | bash |
| DeepLabV3 | ResNet-50 | 49.19 | 15.99 | | |
| DeepLabV3+JPU (ours) | ResNet-50 | 50.07 | 20.67 | GoogleDrive | bash |
| EncNet | ResNet-101 | 52.60 (MS) | 10.51 | | |
| EncNet+JPU (ours) | ResNet-101 | 54.03 (MS) | 32.02 | GoogleDrive | bash |
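
Entries marked (MS) here and in the tables below are multi-scale results: the image is resized to several scales (optionally flipped), each prediction is resized back to the original resolution, and the class probabilities are averaged, which is presumably what the --ms flag in the test command enables. Below is a minimal sketch of such an evaluation loop, assuming a model that returns a single logits tensor; the scale set is illustrative, not necessarily the one used in the paper.

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def multi_scale_predict(model, image, scales=(0.5, 0.75, 1.0, 1.25, 1.5), flip=True):
        """image: 1 x 3 x H x W normalized tensor; returns 1 x C x H x W averaged probabilities."""
        _, _, h, w = image.shape
        prob_sum = None
        for s in scales:
            scaled = F.interpolate(image, scale_factor=s, mode='bilinear', align_corners=True)
            logits = model(scaled)
            if flip:
                # Average with the prediction on the horizontally flipped input.
                flipped = model(torch.flip(scaled, dims=[3]))
                logits = (logits + torch.flip(flipped, dims=[3])) / 2
            prob = F.interpolate(logits, size=(h, w), mode='bilinear',
                                 align_corners=True).softmax(dim=1)
            prob_sum = prob if prob_sum is None else prob_sum + prob
        return prob_sum / len(scales)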

ADE20K

python -m scripts.prepare_ade20k

Training Set

| Method | Backbone | mIoU (MS) | Model | Scripts |
|---|---|---|---|---|
| EncNet | ResNet-50 | 41.11 | | |
| EncNet+JPU (ours) | ResNet-50 | 42.75 | GoogleDrive | bash |
| EncNet | ResNet-101 | 44.65 | | |
| EncNet+JPU (ours) | ResNet-101 | 44.34 | GoogleDrive | bash |

Training Set + Val Set

| Method | Backbone | FinalScore (MS) | Model | Scripts |
|---|---|---|---|---|
| EncNet+JPU (ours) | ResNet-50 | | GoogleDrive | bash |
| EncNet | ResNet-101 | 55.67 | | |
| EncNet+JPU (ours) | ResNet-101 | 55.84 | GoogleDrive | bash |

Note: EncNet (ResNet-101) is trained with crop_size=576, while EncNet+JPU (ResNet-101) is trained with crop_size=480 so that 4 images fit into a 12GB GPU.

Visual Results

Qualitative comparisons (Input / GT / EncNet / Ours) on PContext and ADE20K.

More Visual Results

Acknowledgement

Code borrows heavily from PyTorch-Encoding.

Comments
  • Some problem when running test.py and train.py

    Hi, I am a beginner in deep learning. Some problems occurred when I was running the code. First, I used the command "tar -xvf encnet_jpu_res50_pcontext.pth.tar" to extract the tar file, but it failed. Second, if I successfully extract the file and get the checkpoint, which folder should I put the checkpoint in, and where should I extract it to? Thank you!

    opened by pp00704831 18
  • Why can I still train the model after removing JPU?

    Why does the code still execute without error when I delete the JPU module (/FastFCN/encoding/nn/customize.py)? I can still train the model. This is my command (I did load the JPU module): CUDA_VISIBLE_DEVICES=4,5,6,7 python train.py --dataset pcontext --model encnet --jpu --aux --se-loss --backbone resnet101 --checkname encnet_res101_pcontext

    opened by E18301194 17
  • Segmentation fault

    I think this problem is caused by my earlier PyTorch problem, so maybe I have to fix PyTorch first. Could you give me some help? gcc: 4.8, pytorch: 1.1.0, python: 3.5. Also, how can I change the PyTorch version to 1.0.0? pip install torch==1.0?

    opened by Anikily 12
  • Performance Issue

    Thanks for your work. I have tried this script: https://github.com/wuhuikai/FastFCN/blob/master/experiments/segmentation/scripts/encnet_res50_pcontext.sh with the following hardware and software: 4x Titan Xp, Ubuntu 16.04, CUDA 9.0, PyTorch 1.0.

    But I can't reproduce the performance reported in your paper. I got pixAcc: 0.7747, mIoU: 0.4785 for single-scale, and pixAcc: 0.7833, mIoU: 0.4898 for multi-scale.

    I would appreciate your help. Thanks for your consideration.

    bug 
    opened by tonysy 12
  • FastFCN has been supported by MMSegmentation.

    Hi, FastFCN is now supported by MMSegmentation. We do find that using JPU with smaller feature maps from the backbone can reach similar or higher performance than the original models with larger feature maps.

    There is still some work left for us; for example, we do not see an obvious improvement in FPS in our implementation, so we will try to figure that out in the future.

    Anyway, thanks for your work, and we hope more people from the community will use FastFCN.

    Best,

    opened by MengzhangLI 9
  • RuntimeError: Failed downloading

    Hi, thanks for your work. I tried to run your code to train a model on the PASCAL Context dataset, but I got the following error: RuntimeError: Failed downloading url https://hangzh.s3.amazonaws.com/encoding/models/resnet50-ebb6acbb.zip. The problem is that I cannot download the pretrained model; the author no longer provides the pretrained ResNet models (https://github.com/zhanghang1989/PyTorch-Encoding/issues/273).

    How can I solve this problem? Thanks for your consideration.

    opened by bufferXia 9
  • How could I set "resume" while running test_single_image?

    Hello!

    When I ran test_single_image.py, I set --resume to the path of resnet101-2a57e44d.pth and encountered an error:

    File "G:/gitfolder/FastFCN/experiments/segmentation/test_single_image.py", line 43, in test
        model.load_state_dict(checkpoint['state_dict'], strict=False)
    KeyError: 'state_dict'

    I suspect there is a problem with how I set "resume". Waiting for your reply.

    Thank you!

    opened by CN-HaoJiang 8
  • Questions about the SE-loss and Aux-loss

    Hi, first, thank you for the great work. I have checked the code and run some of the scripts. I am confused about the final loss, which is a composite of three individual losses. Could you tell me what the SE-loss and the Aux-loss are used for?

    opened by meanmee 7
  • Backbone weights download links not working anymore

    Download links for the backbone do not seem to work anymore.

    I've tested with Resnet50 (https://hangzh.s3.amazonaws.com/encoding/models/resnet50-ebb6acbb.zip) and Resnet 101 (https://hangzh.s3.amazonaws.com/encoding/models/resnet101-2a57e44d.zip) too.

    I also tried to use torchvision weights instead, but I got matching errors when trying to load them.

    Could you consider reuploading the weights? That would be very helpful!

    opened by Khroto 6
  • Segmentation Fault

    I ran the following command to train the model, but a segmentation fault occurred. Has anyone else run into this problem? Thanks for the help!

    run : CUDA_VISIBLE_DEVICES=0,1,2,3 python train.py --dataset pcontext --model encnet --jpu --aux --se-loss --backbone resnet101 --checkname encnet_res101_pcontext

    crashed : Using poly LR Scheduler! Starting Epoch: 0 Total Epoches: 80 0%| | 0/312 [00:00<?, ?it/s] =>Epoches 0, learning rate = 0.0010, previous best = 0.0000 Segmentation fault

    Hardware: Nvidia Tesla P100-PCIE 16G x 4; CPU: GenuineIntel x 18; memory: 140G in total.

    opened by SimonTsungHanKuo 6
  • Need your suggestions

    Hi, I have designed this SPP module for my network, but I am also interested in your work and would like to replace this module with JPU. Would you like to give me any suggestions? Here is my implementation:

    import torch.nn as nn
    import torch.nn.functional as F

    class SPP(nn.Module):
        def __init__(self, pool_sizes):
            super(SPP, self).__init__()
            self.pool_sizes = pool_sizes

        def forward(self, x):
            h, w = x.shape[2:]
            k_sizes = []
            strides = []
            for pool_size in self.pool_sizes:
                k_sizes.append((int(h / pool_size), int(w / pool_size)))
                strides.append((int(h / pool_size), int(w / pool_size)))

            spp_sum = x

            for i in range(len(self.pool_sizes)):
                out = F.avg_pool2d(x, k_sizes[i], stride=strides[i], padding=0)
                out = F.upsample(out, size=(h, w), mode="bilinear")
                spp_sum = spp_sum + out

            return spp_sum
    
    opened by haideralimughal 5
  • add resnest and xception65

    Copied ResNeSt and Xception65 from PyTorch-Encoding; Xception65 can only be used without pretrained models.

    Please be careful, as there are many changes!

    I tested it on my own server and everything seems OK. As a precaution, maybe you could test it yourself first: My FastFCN.

    I didn't change the Readme.md or the *.sh scripts. Maybe you can update them if you accept this request.

    If server resources are not tight, I will run encnet+jpu+resnest101+pcontext and encnet+jpu_x+resnest101+pcontext and share the results in the issues, or open another pull request for Readme.md together with my pth.tar.

    Thanks for your work again.

    opened by tjj1998 1
Releases (v1.0.0)