PyTorch Implementation for Dilated Continuous Random Field

Overview

DilatedCRF

PyTorch implementation of the fully-learnable DilatedCRF.


If you find my work helpful, please consider citing our paper:

@inproceedings{Mo2022dilatedcrf,
    title={Dilated Continuous Random Field for Semantic Segmentation},
    author={Mo, Xi and Chen, Xiangyu and Zhong, Cuncong and Li, Rui and Li, Kaidong and Usman, Sajid},
    booktitle={IEEE International Conference on Robotics and Automation},
    year={2022}
}

Easy Setup

Please install the required packages below by following their official installation guides (a quick version-check snippet follows the list):

python >= 3.6
pytorch >= 1.0.0
torchvision
pillow
numpy
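
As a quick sanity check, the short Python snippet below (an illustration, not part of the repository) prints the installed versions so you can confirm they satisfy the requirements above:

    # Environment check for the requirements listed above (illustration only).
    import sys
    import numpy
    import PIL
    import torch
    import torchvision

    print("python      :", sys.version.split()[0])   # needs >= 3.6
    print("pytorch     :", torch.__version__)        # needs >= 1.0.0
    print("torchvision :", torchvision.__version__)
    print("pillow      :", PIL.__version__)
    print("numpy       :", numpy.__version__)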

How to Use

1. Prepare dataset

  • Download suction-based-grasping-dataset.zip (1.6GB) [link]. Please cite the relevant paper:
@inproceedings{zeng2018robotic,
    title={Robotic Pick-and-Place of Novel Objects in Clutter with Multi-Affordance Grasping and Cross-Domain Image Matching},
    author={Zeng, Andy and Song, Shuran and Yu, Kuan-Ting and Donlon, Elliott and Hogan, Francois Robert and Bauza, Maria and Ma, Daolin and Taylor, Orion and Liu, Melody and Romo, Eudald and Fazeli, Nima and Alet, Ferran and Dafle, Nikhil Chavan and Holladay, Rachel and Morona, Isabella and Nair, Prem Qu and Green, Druck and Taylor, Ian and Liu, Weber and Funkhouser, Thomas and Rodriguez, Alberto},
    booktitle={Proceedings of the IEEE International Conference on Robotics and Automation},
    year={2018}
}
  • Train your own semantic segmentation classifiers on the suction dataset, then generate training samples and test samples for DilatedCRF. You can also download my training set and test set (872MB) [link] and extract the default folder dataset to the main directory (see the extraction sketch after this list).
    NOTE: Customized training and test samples must be organized in the same format as the default dataset.
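
If you downloaded the prepared samples as a zip archive, a minimal extraction sketch in Python looks like the following. The archive filename here is a placeholder; only the target folder name dataset comes from the instructions above.

    # Extract the downloaded archive so the default "dataset" folder ends up
    # in the repository root. The archive filename below is a placeholder.
    import zipfile

    with zipfile.ZipFile("dilatedcrf_samples.zip") as zf:  # placeholder name
        zf.extractall(".")  # should produce ./dataset/... in the main directory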

2. Train network

  • If you want to customize the training process, modify the parameters in utils/configuration.py according to the instructions in that file.

  • Train DilatedCRF using the default dataset folder, or specify a customized dataset path with the -d argument.
    NOTE: checkpoints will be written to the default folder checkpoint.

    python DialatedCRF.py -train
    

    or resume training using the latest .pt file stored in the default folder checkpoint:

    python DialatedCRF.py -train -r
    

    or you may want to use a specific checkpoint:

    python DialatedCRF.py -train -r -c path/to/your/ckpt
    

    Note that the checkpoint file must match the parameter "SCALE" specified in utils/configuration.py (the command-line flags are summarized in a sketch after this list). To specify a customized dataset folder, use:

    python DialatedCRF.py -train -d your/dataset/path
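
The flags used above and in the validation step (-train, -r, -c, -d, -v) suggest a command-line interface roughly like the argparse sketch below. This is only an illustration of the documented flags; the actual parser in DialatedCRF.py may differ.

    # Illustrative sketch of a parser for the documented flags; the real
    # DialatedCRF.py may define its interface differently.
    import argparse

    parser = argparse.ArgumentParser(description="DilatedCRF training/validation")
    parser.add_argument("-train", action="store_true", help="train the network")
    parser.add_argument("-v", action="store_true", help="run validation")
    parser.add_argument("-r", action="store_true",
                        help="resume from the latest checkpoint in ./checkpoint")
    parser.add_argument("-c", metavar="CKPT", default=None,
                        help="path to a specific checkpoint file")
    parser.add_argument("-d", metavar="DIR", default="dataset",  # assumed default folder
                        help="path to the dataset folder")
    args = parser.parse_args()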
    

3. Validation

  • The complete dataset folder mentioned above and a valid checkpoint are required. You can download my checkpoint for "SCALE" = 0.25 (42.4MB) [link]; be sure to adjust the corresponding configuration beforehand. Then run:

    python DialatedCRF.py -v
    

    or you may specify the dataset folder with -d:

    python DialatedCRF.py -v -d your/path/to/dataset/folder
    
  • Final results will be written to the folder results. Metrics including Jaccard, F1-score, accuracy, etc., will be gathered in evaluation.txt in the folder results/evaluation (the metric definitions are sketched below).
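
For reference, Jaccard (IoU), F1-score, and pixel accuracy for binary masks can be computed as in the generic sketch below; this is not the repository's evaluation code, just a reminder of the metric definitions.

    # Generic per-image metrics for binary masks (illustration only; the
    # repository's evaluation in results/evaluation may differ).
    import numpy as np

    def binary_metrics(pred, gt):
        """pred, gt: boolean NumPy arrays of the same shape."""
        tp = np.logical_and(pred, gt).sum()
        fp = np.logical_and(pred, ~gt).sum()
        fn = np.logical_and(~pred, gt).sum()
        tn = np.logical_and(~pred, ~gt).sum()
        jaccard  = tp / max(tp + fp + fn, 1)
        f1       = 2 * tp / max(2 * tp + fp + fn, 1)
        accuracy = (tp + tn) / pred.size
        return jaccard, f1, accuracy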


Contributed by Xi Mo,
License: Apache 2.0
