A PyTorch-based Semi-Supervised Learning (SSL) Codebase for Pixel-wise (Pixel) Vision Tasks

Overview

PixelSSL is a PyTorch-based semi-supervised learning (SSL) codebase for pixel-wise (Pixel) vision tasks.

The purpose of this project is to promote the research and application of semi-supervised learning on pixel-wise vision tasks. PixelSSL provides two major features:

  • Interface for implementing new semi-supervised algorithms
  • Template for encapsulating diverse computer vision tasks

As a result, the SSL algorithms integrated in PixelSSL are compatible with all task codes inherited from the given template.
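
To make this design concrete, the sketch below shows how an SSL algorithm can stay task-agnostic by only calling into a task template. The class and method names here are hypothetical illustrations, not PixelSSL's actual API; see the API document and tutorials for the real interfaces.

    # Hypothetical sketch of the template/algorithm split described above.
    # These names are illustrative only and do NOT match PixelSSL's real API.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class TaskTemplate:
        """Encapsulates a pixel-wise task: its model and supervised loss."""

        def __init__(self, model: nn.Module, criterion: nn.Module):
            self.model = model
            self.criterion = criterion

        def forward(self, images: torch.Tensor) -> torch.Tensor:
            return self.model(images)

        def supervised_loss(self, preds, labels) -> torch.Tensor:
            return self.criterion(preds, labels)


    class ConsistencySSL:
        """A task-agnostic SSL wrapper: it only calls the template's methods,
        so any task built on the template can be trained with it."""

        def __init__(self, task: TaskTemplate, weight: float = 1.0):
            self.task = task
            self.weight = weight

        def unsupervised_loss(self, student_preds, teacher_preds) -> torch.Tensor:
            # Pixel-wise consistency between two predictions of the same image.
            return F.mse_loss(torch.softmax(student_preds, dim=1),
                              torch.softmax(teacher_preds, dim=1))

        def total_loss(self, sup_loss, unsup_loss) -> torch.Tensor:
            return sup_loss + self.weight * unsup_loss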

In addition, PixelSSL provides benchmarks for validating semi-supervised learning algorithms on several pixel-level tasks, which currently include semantic segmentation.

News

  • [Dec 25 2020] PixelSSL v0.1.4 is Released!
    🎄 Merry Christmas! 🎄
    v0.1.4 supports the CutMix semi-supervised learning algorithm for pixel-wise classification.

  • [Nov 06 2020] PixelSSL v0.1.3 is Released!
    v0.1.3 supports the CCT semi-supervised learning algorithm for pixel-wise classification.

  • [Oct 28 2020] PixelSSL v0.1.2 is Released!
    v0.1.2 supports PSPNet and its SSL results for the semantic segmentation task (check here).

    [More]

Supported Algorithms and Tasks

We are actively updating this project.
The SSL algorithms and demo tasks supported by PixelSSL are summarized in the following table:

Algorithms / Tasks    Segmentation    Other Tasks
SupOnly               v0.1.0          Coming Soon
MT [1]                v0.1.0          Coming Soon
AdvSSL [2]            v0.1.0          Coming Soon
S4L [3]               v0.1.1          Coming Soon
CCT [4]               v0.1.3          Coming Soon
GCT [5]               v0.1.0          Coming Soon
CutMix [6]            v0.1.4          Coming Soon

[1] Mean Teachers are Better Role Models: Weight-Averaged Consistency Targets Improve Semi-Supervised Deep Learning Results
      Antti Tarvainen, and Harri Valpola. NeurIPS 2017.

[2] Adversarial Learning for Semi-Supervised Semantic Segmentation
      Wei-Chih Hung, Yi-Hsuan Tsai, Yan-Ting Liou, Yen-Yu Lin, and Ming-Hsuan Yang. BMVC 2018.

[3] S4L: Self-Supervised Semi-Supervised Learning
      Xiaohua Zhai, Avital Oliver, Alexander Kolesnikov, and Lucas Beyer. ICCV 2019.

[4] Semi-Supervised Semantic Segmentation with Cross-Consistency Training
      Yassine Ouali, Céline Hudelot, and Myriam Tami. CVPR 2020.

[5] Guided Collaborative Training for Pixel-wise Semi-Supervised Learning
      Zhanghan Ke, Di Qiu, Kaican Li, Qiong Yan, and Rynson W.H. Lau. ECCV 2020.

[6] Semi-Supervised Semantic Segmentation Needs Strong, Varied Perturbations
      Geoff French, Samuli Laine, Timo Aila, Michal Mackiewicz, and Graham Finlayson. BMVC 2020.

Installation

Please refer to the Installation document.

Getting Started

Please follow the Getting Started document to run the provided demo tasks.

Tutorials

We provide the API document and some tutorials for using PixelSSL.

License

This project is released under the Apache 2.0 license.

Acknowledgement

We thank City University of Hong Kong and SenseTime for their support to this project.

Citation

This project is extended from our ECCV 2020 paper Guided Collaborative Training for Pixel-wise Semi-Supervised Learning (GCT). If this codebase or our method helps your research, please cite:

@InProceedings{ke2020gct,
  author = {Ke, Zhanghan and Qiu, Di and Li, Kaican and Yan, Qiong and Lau, Rynson W.H.},
  title = {Guided Collaborative Training for Pixel-wise Semi-Supervised Learning},
  booktitle = {European Conference on Computer Vision (ECCV)},
  month = {August},
  year = {2020},
}

Contact

This project is currently maintained by Zhanghan Ke (@ZHKKKe).
If you have any questions, please feel free to contact [email protected].

Comments
  • Question about the input size of images during inference time.

    Dear author: I have a question about the inference setting. In this section: https://github.com/ZHKKKe/PixelSSL/blob/2e85e12c1db5b24206bfbbf2d7f6348ae82b2105/task/sseg/data.py#L102

        def _val_prehandle(self, image, label):
            sample = {self.IMAGE: image, self.LABEL: label}
            composed_transforms = transforms.Compose([
                FixScaleCrop(crop_size=self.args.im_size),
                Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
                ToTensor()])
    
            transformed_sample = composed_transforms(sample)
    
            return transformed_sample[self.IMAGE], transformed_sample[self.LABEL]
    

    I see that you crop the image as the input and calculate the metrics on the cropped image. However, I think we should use the whole image to calculate the metrics. With this setting, the supervised full baseline is 2~3% mIoU lower than the raw performance. Could you explain this?
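
    For reference, a minimal sketch of the whole-image evaluation described above (not PixelSSL code; the ignore index 255 and the output stride are assumptions):

        # Sketch of whole-image validation: pad to a stride multiple, run the
        # model, crop the prediction back, and accumulate a confusion matrix.
        # Not PixelSSL code; ignore index 255 and stride=8 are assumptions.
        import torch
        import torch.nn.functional as F

        @torch.no_grad()
        def full_image_confusion(model, image, label, num_classes, stride=8):
            model.eval()
            _, _, h, w = image.shape
            pad_h = (stride - h % stride) % stride
            pad_w = (stride - w % stride) % stride
            padded = F.pad(image, (0, pad_w, 0, pad_h), mode='reflect')

            logits = model(padded)[:, :, :h, :w]   # crop back to the original size
            pred = logits.argmax(dim=1)

            valid = label != 255                   # skip ignored pixels
            idx = num_classes * label[valid].long() + pred[valid].long()
            return torch.bincount(idx, minlength=num_classes ** 2).reshape(
                num_classes, num_classes)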

    opened by charlesCXK 16
  • some questions about the paper "Guided Collaborative Training"

    Great work, and thanks for your amazing codebase. I have some questions about the paper "Guided Collaborative Training for Pixel-wise Semi-Supervised Learning":

    1. I'm wondering whether I can just use the max score of a pixel as an evaluation criterion, without the Flaw Detector, in the semantic segmentation task? If so, how would it work if I used the score directly, and have you ever run such an experiment?

    2. Is the Flaw Correction Constraint forcing the error to 0 in order to correct the semantic segmentation result? I don't quite understand what this loss means.
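
    For the first question, the confidence-based proxy being asked about could look like the sketch below: per-pixel uncertainty taken as 1 minus the max softmax probability. This is only an illustration of that alternative, not the GCT method or code from this repository.

        # Per-pixel confidence as a "flaw" proxy (illustrative only, not GCT):
        # high uncertainty = 1 - max softmax probability over the classes.
        import torch

        def confidence_flaw_map(logits: torch.Tensor) -> torch.Tensor:
            """logits: (N, C, H, W) -> uncertainty map of shape (N, 1, H, W)."""
            probs = torch.softmax(logits, dim=1)
            max_prob, _ = probs.max(dim=1, keepdim=True)
            return 1.0 - max_prob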
    opened by czy341181 8
  • Add implementation for Semi-supervised Semantic Segmentation via Strong-weak Dual-branch Network

    Thanks for sharing; the repo has been quite helpful for me to understand the work on SSL segmentation. If possible, could you add an implementation of Semi-supervised Semantic Segmentation via Strong-weak Dual-branch Network (ECCV 2020), which is a simple dual-branch network? It is a quite easy and intuitive idea, but I could not reproduce the results with DeepLab-v2. It would be great if you could add it to the repo.

    opened by syorami 5
  • CUDA out of memory

    Hi ZHKKKe,

    First of all, thank you for your work. I am currently retraining GCT with PSPNet (ResNet-101 backbone) on Pascal VOC, using im_size=513 and batch_size=4 on 4 GPUs, but I am getting an out-of-memory error. I retrained the other methods you offer with the same settings (im_size=513, batch_size=4, 4 GPUs) and can reproduce the accuracy reported in README.md.

    I would like to know how you trained GCT on 4 GPUs. Did you save memory by changing im_size=513 to im_size=321, or is there another way?

    Thank you and regards
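
    One generic way to fit the same crop size into less GPU memory is mixed-precision training with torch.cuda.amp; a minimal sketch (not PixelSSL code, and not necessarily how the authors trained GCT):

        # Generic mixed-precision training step with torch.cuda.amp, one common
        # way to reduce memory at a fixed crop size (not PixelSSL code).
        import torch

        scaler = torch.cuda.amp.GradScaler()

        def train_step(model, optimizer, criterion, images, labels):
            optimizer.zero_grad()
            with torch.cuda.amp.autocast():
                preds = model(images)
                loss = criterion(preds, labels)
            scaler.scale(loss).backward()
            scaler.step(optimizer)
            scaler.update()
            return loss.item()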

    opened by Rainfor1 4
  • A question about ASPP

    Thanks for your great work on tackling pixel-wise semi-supervised tasks. I am currently following it, and I have the following question.

    Should the returned value 'out' at https://github.com/ZHKKKe/PixelSSL/blob/master/task/sseg/module/deeplab_v2.py#L85 be outside the for loop? Otherwise, the ASPP only adds the outputs of dilation rates 6 and 12.

    Thanks in advance : )
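
    For reference, in the usual DeepLab-v2 ASPP head the outputs of all dilated branches are summed and the sum is returned after the loop; a minimal sketch (illustrative, not the code from deeplab_v2.py):

        # Minimal ASPP head: the accumulated sum over ALL dilation branches is
        # returned only after the loop (illustrative, not the repository code).
        import torch.nn as nn

        class ASPP(nn.Module):
            def __init__(self, in_channels, num_classes, dilations=(6, 12, 18, 24)):
                super().__init__()
                self.branches = nn.ModuleList([
                    nn.Conv2d(in_channels, num_classes, kernel_size=3,
                              padding=d, dilation=d)
                    for d in dilations])

            def forward(self, x):
                out = self.branches[0](x)
                for branch in self.branches[1:]:
                    out = out + branch(x)
                return out   # after all branches have been added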

    opened by tianzhuotao 3
  • More data splits of VOC

    Dear author: Thank you for sharing! Could you share more of the data splits from your ECCV paper, such as the 1/16, 1/4, and 1/2 splits of VOC? We want to run experiments on more splits and compare with the numbers reported in the paper. Thank you!

    opened by charlesCXK 2
  • FlawDetector In 3D version

    Hi there, thanks for your work, it's very inspiring!

    I now want to use this work in my own project, but in 3D. I found that the 2D FlawDetector is a stack of conv layers with kernel size 4 and stride 1 or 2.

    But with an input size of 256×256, self.conv3_1 causes errors, so I had to change the kernel size from 4 to 3. Now, before the feature map is interpolated, x has shape (1, 1, 8, 8, 8), but it needs to be interpolated to (1, 1, 16, 256, 256); the gap between x and task_pred seems too large.

    In 2D mode, with an input of (3, 256, 256) and num_classes = 14, x is interpolated from (1, 1, 8, 8) to (1, 1, 256, 256). Is this reasonable?

    Thanks a lot!

    opened by DISAPPEARED13 0
  • About the performance of PSPNet.

    Hello, thanks for your excellent work. I have a question about the performance of PSPNet: when I use PSPNet alone with my own dataset and my own code, training with 1/2 of the samples, the mIoU can reach about 68%. But when I switch to your code and train with suponly, the mIoU is only 60%. Could you please tell me what the reason for this might be?

    opened by liyanping0317 1
  • Is there a bug in task/sseg/func.py metrics?

    Hi, ZHKKKe, Thank you for your excellent code.

    I found a suspected bug in task/sseg/func.py.

    In the function metrics, you reset all the meters named acc_str/acc_class_str/mIoU_str/fwIoU_str:

        if meters.has_key(acc_str): meters.reset(acc_str)
        if meters.has_key(acc_class_str): meters.reset(acc_class_str)
        if meters.has_key(mIoU_str): meters.reset(mIoU_str)
        if meters.has_key(fwIoU_str): meters.reset(fwIoU_str)

    When I test your pre-trained model deeplabv2_pascalvoc_1-8_suponly.ckpt, I find that the validation metrics are logged over the whole confusion matrix. Shouldn't we count each single image's acc/mIoU independently?

    I'm not sure whether my speculation is right, could you help me?

    opened by HHuiwen 1
  • Splits of Cityscapes ...

    Hi, thanks for your nice work!

    I have noticed that you only provide the data split for VOC2012; will you offer the splits for the Cityscapes dataset?

    Also, from your scripts, the labeled data used in your experiments is simply sampled in the order of the file names in the txt file (https://github.com/ZHKKKe/PixelSSL/blob/ce192034355ae6a77e47d2983d9c9242df60802a/task/sseg/dataset/PascalVOC/tool/random_sublabeled_samples.py#L21):

        labeled_num = int(len(samples) * labeled_ratio + 1)
        labeled_list = samples[:labeled_num]
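
    If a random subset were wanted instead of the first labeled_num names, the script could be changed along these lines (a sketch, not the repository code):

        # Draw a random labeled subset instead of taking the first `labeled_num`
        # entries of the sorted list (illustrative, not the repository script).
        import random

        def split_labeled(samples, labeled_ratio, seed=0):
            labeled_num = int(len(samples) * labeled_ratio + 1)
            labeled_list = random.Random(seed).sample(samples, labeled_num)
            labeled_set = set(labeled_list)
            unlabeled_list = [s for s in samples if s not in labeled_set]
            return labeled_list, unlabeled_list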

    opened by ghost 3