A semantic segmentation toolbox based on PyTorch

Overview

Introduction

vedaseg is an open source semantic segmentation toolbox based on PyTorch.

Features

  • Modular Design

    We decompose the semantic segmentation framework into different components. The flexible and extensible design makes it easy to build a customized semantic segmentation project by combining different modules, like building with Lego blocks.

  • Support of several popular frameworks

    The toolbox supports several popular semantic segmentation frameworks out of the box, e.g. DeepLabv3+, DeepLabv3, U-Net, PSPNet, FPN, etc.

  • High efficiency

    Multi-GPU data parallelism & distributed training.

  • Multi-Class/Multi-Label segmentation

    We implement multi-class and multi-label segmentation (where a pixel can belong to multiple classes).

  • Acceleration and deployment

    Models can be accelerated and deployed with TensorRT.

License

This project is released under the Apache 2.0 license.

Benchmark and model zoo

Note: All models are trained only on the PASCAL VOC 2012 trainaug dataset and evaluated on the PASCAL VOC 2012 val dataset.

Architecture  | Backbone   | OS | MS & Flip | mIoU
DeepLabv3plus | ResNet-101 | 16 | True      | 79.46%
DeepLabv3plus | ResNet-101 | 16 | False     | 77.90%
DeepLabv3     | ResNet-101 | 16 | True      | 79.22%
DeepLabv3     | ResNet-101 | 16 | False     | 77.08%
FPN           | ResNet-101 | 4  | True      | 77.05%
FPN           | ResNet-101 | 4  | False     | 75.64%
PSPNet        | ResNet-101 | 8  | True      | 74.83%
PSPNet        | ResNet-101 | 8  | False     | 73.28%
U-Net         | ResNet-101 | 1  | True      | 74.58%
U-Net         | ResNet-101 | 1  | False     | 72.59%

OS: Output stride used during evaluation
MS: Multi-scale inputs during evaluation
Flip: Adding left-right flipped inputs during evaluation
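
For readers unfamiliar with MS & Flip evaluation, the sketch below shows the general idea: average predictions over several input scales and a horizontal flip. It is a simplified illustration, not vedaseg's test-time augmentation code, and the scale set is an assumption.

# Minimal sketch (not vedaseg's implementation) of multi-scale + flip test-time
# augmentation: average softmax predictions over several scales and a horizontal flip.
import torch
import torch.nn.functional as F

def ms_flip_inference(model, image, scales=(0.75, 1.0, 1.25)):
    # image: (1, 3, H, W) tensor; returns averaged class probabilities at the original size
    _, _, h, w = image.shape
    prob_sum = 0.0
    for s in scales:
        x = F.interpolate(image, scale_factor=s, mode='bilinear', align_corners=False)
        for flip in (False, True):
            inp = torch.flip(x, dims=[3]) if flip else x
            with torch.no_grad():
                logits = model(inp)                  # (1, C, h', w')
            if flip:
                logits = torch.flip(logits, dims=[3])
            logits = F.interpolate(logits, size=(h, w), mode='bilinear', align_corners=False)
            prob_sum = prob_sum + logits.softmax(dim=1)
    return prob_sum / (len(scales) * 2)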

The models above are available on Google Drive.

Installation

Requirements

  • Linux
  • Python 3.6+
  • PyTorch 1.4.0 or higher
  • CUDA 9.0 or higher

We have tested the following versions of OS and software:

  • OS: Ubuntu 16.04.6 LTS
  • CUDA: 10.2
  • PyTorch 1.4.0
  • Python 3.6.9

Install vedaseg

  1. Create a conda virtual environment and activate it.
conda create -n vedaseg python=3.6.9 -y
conda activate vedaseg
  2. Install PyTorch and torchvision following the official instructions, e.g.,
conda install pytorch torchvision -c pytorch
  3. Clone the vedaseg repository.
git clone https://github.com/Media-Smart/vedaseg.git
cd vedaseg
vedaseg_root=${PWD}
  4. Install dependencies.
pip install -r requirements.txt

Prepare data

VOC data

Download Pascal VOC 2012 and Pascal VOC 2012 augmented (you can get details at Semantic Boundaries Dataset and Benchmark), resulting in 10,582 training images (trainaug) and 1,449 validation images.

cd ${vedaseg_root}
mkdir ${vedaseg_root}/data
cd ${vedaseg_root}/data

wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
wget http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/semantic_contours/benchmark.tgz

tar xf VOCtrainval_11-May-2012.tar
tar xf benchmark.tgz

python ../tools/encode_voc12_aug.py
python ../tools/encode_voc12.py

mkdir VOCdevkit/VOC2012/EncodeSegmentationClass
#cp benchmark_RELEASE/dataset/encode_cls/* VOCdevkit/VOC2012/EncodeSegmentationClass
(cd benchmark_RELEASE/dataset/encode_cls; cp * ${vedaseg_root}/data/VOCdevkit/VOC2012/EncodeSegmentationClass)
#cp VOCdevkit/VOC2012/EncodeSegmentationClassPart/* VOCdevkit/VOC2012/EncodeSegmentationClass
(cd VOCdevkit/VOC2012/EncodeSegmentationClassPart; cp * ${vedaseg_root}/data/VOCdevkit/VOC2012/EncodeSegmentationClass)

comm -23 <(cat benchmark_RELEASE/dataset/{train,val}.txt VOCdevkit/VOC2012/ImageSets/Segmentation/train.txt | sort -u) <(cat VOCdevkit/VOC2012/ImageSets/Segmentation/val.txt | sort -u) > VOCdevkit/VOC2012/ImageSets/Segmentation/trainaug.txt

To avoid tedious manual operations, you can save the above Linux commands as a shell script and execute it.
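
If you prefer Python, the following hypothetical sketch reproduces the comm pipeline above: trainaug is the union of the SBD train/val lists and the VOC train list, minus the VOC val list. It assumes you run it from ${vedaseg_root}/data after the downloads above.

# Build trainaug.txt = (SBD train + SBD val + VOC train) - VOC val
from pathlib import Path

voc_sets = Path('VOCdevkit/VOC2012/ImageSets/Segmentation')
sbd = Path('benchmark_RELEASE/dataset')

def read_ids(path):
    # one image id per line, ignoring empty lines
    return {line.strip() for line in path.read_text().splitlines() if line.strip()}

train_ids = read_ids(sbd / 'train.txt') | read_ids(sbd / 'val.txt') | read_ids(voc_sets / 'train.txt')
trainaug = sorted(train_ids - read_ids(voc_sets / 'val.txt'))
(voc_sets / 'trainaug.txt').write_text('\n'.join(trainaug) + '\n')
print(f'{len(trainaug)} image ids written to trainaug.txt')  # should be 10,582 per the note above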

COCO data

Download the COCO-2017 dataset.

cd ${vedaseg_root}
mkdir ${vedaseg_root}/data
cd ${vedaseg_root}/data
mkdir COCO2017 && cd COCO2017
wget -c http://images.cocodataset.org/zips/train2017.zip
unzip train2017.zip && rm train2017.zip
wget -c http://images.cocodataset.org/zips/val2017.zip
unzip val2017.zip &&  rm val2017.zip
wget -c http://images.cocodataset.org/annotations/annotations_trainval2017.zip
unzip annotations_trainval2017.zip && rm annotations_trainval2017.zip

Folder structure

The folder structure should be similar to the following:

data
├── COCO2017
│   ├── annotations
│   │   ├── instances_train2017.json
│   │   └── instances_val2017.json
│   ├── train2017
│   └── val2017
└── VOCdevkit
    └── VOC2012
        ├── JPEGImages
        ├── SegmentationClass
        └── ImageSets
            └── Segmentation
                ├── trainaug.txt
                └── val.txt

Train

  1. Config

Modify the configuration as needed in the config file, e.g. configs/voc_unet.py.

  • For multi-label training, use the config file configs/coco_multilabel_unet.py and modify the configuration accordingly. The differences between single-label and multi-label training lie mainly in the following parameters of the config file: nclasses, multi_label, metrics and criterion (a conceptual sketch follows at the end of this section). Currently, multi-label training is only supported with the COCO data format.
  2. Distributed training
./tools/dist_train.sh configs/voc_unet.py gpu_num
  3. Non-distributed training
python tools/train.py configs/voc_unet.py

Snapshots and logs will be generated at ${vedaseg_root}/workdir.
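
To make the single-label versus multi-label distinction concrete, here is a small self-contained PyTorch sketch (plain PyTorch, not vedaseg's code) of what the two kinds of criteria compute; the shapes and class counts are illustrative assumptions.

# Conceptual sketch of the difference controlled by nclasses / multi_label / criterion.
import torch
import torch.nn.functional as F

logits = torch.randn(2, 21, 64, 64)                    # (N, nclasses, H, W) network output

# single-label: each pixel has exactly one class index -> softmax cross-entropy
single_target = torch.randint(0, 21, (2, 64, 64))
single_loss = F.cross_entropy(logits, single_target)

# multi-label: each pixel has a binary indicator per class -> per-class BCE with logits
multi_target = torch.randint(0, 2, (2, 21, 64, 64)).float()
multi_loss = F.binary_cross_entropy_with_logits(logits, multi_target)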

Test

  1. Config

Modify the configuration as needed in the config file, e.g. configs/voc_unet.py.

  2. Distributed testing
./tools/dist_test.sh configs/voc_unet.py checkpoint_path gpu_num
  3. Non-distributed testing
python tools/test.py configs/voc_unet.py checkpoint_path
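
Both commands report mIoU, the metric used in the benchmark table above. As a reference for how that number is computed in general, here is a simplified sketch (not vedaseg's metric implementation); ignore_index=255 follows the usual VOC convention.

# Generic mIoU from a confusion matrix accumulated over the validation set.
import numpy as np

def confusion_matrix(pred, gt, num_classes, ignore_index=255):
    # pred, gt: integer label maps of the same shape
    valid = gt != ignore_index
    idx = num_classes * gt[valid].astype(np.int64) + pred[valid].astype(np.int64)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def mean_iou(cm):
    inter = np.diag(cm).astype(np.float64)
    union = cm.sum(axis=0) + cm.sum(axis=1) - inter
    iou = inter / np.maximum(union, 1)   # simplification: classes absent everywhere count as 0
    return iou.mean()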

Inference

  1. Config

Modify the configuration as needed in the config file, e.g. configs/voc_unet.py.

  2. Run
# visualize the results in a new window
python tools/inference.py configs/voc_unet.py checkpoint_path image_file_path --show

# save the visualization results to a folder named after the image prefix, by default under './result/'
python tools/inference.py configs/voc_unet.py checkpoint_path image_file_path --out folder_name
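
For reference, here is a hedged sketch of the kind of overlay such visualizations typically contain; it is not the code in tools/inference.py, and the random per-class palette is an arbitrary choice.

# Overlay a predicted class mask on the original image.
import numpy as np
from PIL import Image

def overlay(image_path, mask, alpha=0.5, seed=0):
    # mask: (H, W) integer array of predicted class indices, 0 meaning background
    img = np.asarray(Image.open(image_path).convert('RGB')).astype(np.float32)
    rng = np.random.default_rng(seed)
    palette = rng.integers(0, 256, size=(int(mask.max()) + 1, 3)).astype(np.float32)
    color = palette[mask]                                   # (H, W, 3) per-pixel class color
    blend = np.where(mask[..., None] > 0, (1 - alpha) * img + alpha * color, img)
    return Image.fromarray(blend.astype(np.uint8))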

Deploy

  1. Convert to ONNX

Firstly, install volksdep following the official instructions.

Then, run the following command to convert the PyTorch model to ONNX. The input shape format is CxHxW. If you need an ONNX model with dynamic input shape, append --dynamic_shape to the command. (An optional sanity check of the exported model with onnxruntime is sketched at the end of this section.)

python tools/torch2onnx.py configs/voc_unet.py weight_path out_path --dummy_input_shape 3,513,513 --opset_version 11

Here are some known issues:

  • Currently, the PSPNet model is not supported because of the unsupported operation AdaptiveAvgPool2d.
  • The default ONNX opset version is 9, in which the PyTorch Upsample operation is only supported with a specified size, nearest mode, and align_corners being None. If bilinear mode and align_corners are wanted, please add --opset_version 11 when using torch2onnx.py.
  2. Inference SDK

Firstly, install flexinfer and see the example for details.
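
After step 1, the exported model can optionally be sanity-checked with onnxruntime (a separate package, not part of vedaseg); the file name and input shape below are assumptions matching the example command above.

# Load the exported ONNX model and run a dummy forward pass.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession('model.onnx')                   # the out_path passed to torch2onnx.py
input_name = sess.get_inputs()[0].name
dummy = np.random.rand(1, 3, 513, 513).astype(np.float32)   # NCHW, matches --dummy_input_shape 3,513,513
outputs = sess.run(None, {input_name: dummy})
print([o.shape for o in outputs])                           # expect something like (1, nclasses, 513, 513)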

Contact

This repository is currently maintained by Yuxin Zou (@YuxinZou), Tianhe Wang (@DarthThomas), Hongxiang Cai (@hxcai) and Yichao Xiong (@mileistone).

Comments
  • Update inference.py

    Bugfix: inconsistent naming & h/w mismatch (note that this inverse_resize does not always work due to the use of int(): a one-pixel difference between the inversed prediction and the original image can still emerge).

    opened by DarthThomas 4
  • AttributeError: 'SyncBatchNorm' object has no attribute '_specify_ddp_gpu_num'

    How can I solve this problem?

    (vedaseg) E:\00_Public_Project\vedaseg>python tools/train.py configs/voc_deeplabv3plus.py
    2021-09-13 14:58:24,709 - INFO - Set cudnn deterministic False
    2021-09-13 14:58:24,710 - INFO - Set cudnn benchmark True
    2021-09-13 14:58:24,710 - INFO - Set seed 0
    2021-09-13 14:58:24,711 - INFO - Build model
    Traceback (most recent call last):
      File "tools/train.py", line 47, in <module>
        main()
      File "tools/train.py", line 42, in main
        runner = TrainRunner(train_cfg, inference_cfg, common_cfg)
      File "tools\..\vedaseg\runners\train_runner.py", line 16, in __init__
        super().__init__(inference_cfg, base_cfg)
      File "tools\..\vedaseg\runners\inference_runner.py", line 21, in __init__
        self.model = self._build_model(inference_cfg['model'])
      File "tools\..\vedaseg\runners\inference_runner.py", line 39, in _build_model
        model = build_model(cfg)
      File "tools\..\vedaseg\models\builder.py", line 10, in build_model
        encoder = build_encoder(cfg.get('encoder'))
      File "tools\..\vedaseg\models\encoders\builder.py", line 9, in build_encoder
        backbone = build_from_cfg(cfg['backbone'], BACKBONES, default_args)
      File "tools\..\vedaseg\utils\registry.py", line 51, in build_from_cfg
        return build_from_registry(cfg, src, default_args=default_args)
      File "tools\..\vedaseg\utils\registry.py", line 84, in build_from_registry
        return obj_cls(**args)
      File "tools\..\vedaseg\models\encoders\backbones\resnet.py", line 315, in __init__
        act_cfg=act_cfg)
      File "tools\..\vedaseg\models\encoders\backbones\resnet.py", line 181, in __init__
        self._make_stem_layer()
      File "tools\..\vedaseg\models\encoders\backbones\resnet.py", line 270, in _make_stem_layer
        self.bn1 = self._norm_layer(self.inplanes)
      File "tools\..\vedaseg\models\utils\norm.py", line 81, in build_norm_layer
        layer._specify_ddp_gpu_num(1)  # noqa
      File "C:\ProgramData\Anaconda3\envs\vedaseg\lib\site-packages\torch\nn\modules\module.py", line 1131, in __getattr__
        type(self).__name__, name))
    AttributeError: 'SyncBatchNorm' object has no attribute '_specify_ddp_gpu_num'

    opened by CamelKing1997 3
  • ImportError: cannot import name 'weak_module'

    I get the error:

    ImportError: cannot import name 'weak_module'

    when running the command python tools/trainval.py configs/deeplabv3plus.py; my PyTorch version is 1.3.0.

    Reason

    After reading the PyTorch source code, I found that weak_script_method is in _jit_internal.py in version v1.1.0, but PyTorch removed the function after version v1.2.0 (detail).

    opened by weixia1 3
  • AssertionError: Default process group is not initialized

    I am trying to train using "python tools/train.py configs/voc_unet.py" and I get an error saying AssertionError: Default process group is not initialized. Can you please help me resolve this? Do I need to change anything in the config file?

    Traceback (most recent call last):
      File "tools/train.py", line 47, in <module>
        main()
      File "tools/train.py", line 42, in main
        runner = TrainRunner(train_cfg, inference_cfg, common_cfg)
      File "tools/../vedaseg/runner/train_runner.py", line 20, in __init__
        train_cfg['data']['train'])
      File "tools/../vedaseg/runner/base.py", line 91, in _build_dataloader
        'sampler') is not None else None
      File "tools/../vedaseg/dataloaders/samplers/builder.py", line 6, in build_sampler
        sampler = build_from_cfg(cfg, SAMPLERS, default_args)
      File "tools/../vedaseg/utils/registry.py", line 50, in build_from_cfg
        return build_from_registry(cfg, src, default_args=default_args)
      File "tools/../vedaseg/utils/registry.py", line 83, in build_from_registry
        return obj_cls(**args)
      File "/home/rajrup/miniconda3/envs/vedaseg/lib/python3.7/site-packages/torch/utils/data/distributed.py", line 43, in __init__
        num_replicas = dist.get_world_size()
      File "/home/rajrup/miniconda3/envs/vedaseg/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 582, in get_world_size
        return _get_group_size(group)
      File "/home/rajrup/miniconda3/envs/vedaseg/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 196, in _get_group_size
        _check_default_pg()
      File "/home/rajrup/miniconda3/envs/vedaseg/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 187, in _check_default_pg
        "Default process group is not initialized"
    AssertionError: Default process group is not initialized

    opened by Rajrup 2
  • Bad mIoU when using many GPUs

    I use the default deeplabv3plus config to train, and only modify the number of GPUs used. I noticed that the mIoU in the validation set drops significantly when the number of GPUs exceeds 4, as follows:

    1 GPU: 0.7729
    2 GPUs: 0.7750
    4 GPUs: 0.7478
    8 GPUs: 0.5373

    I guess it is caused by the batch normalization. Maybe sync BN will make a difference. Things are quite different in object detection, e.g. mmdetection, where basic BN is used. The performance does not vary too much when I change the number of GPUs.

    opened by xpngzhng 2
  • RuntimeError: Default process group has not been initialized, please make sure to call init_process_group.

    When training with the "non-distributed" command python tools/train.py configs/voc_unet.py, I got an error: RuntimeError: Default process group has not been initialized, please make sure to call init_process_group.

    opened by anhTuan0712 1
  • Implementation Error of ResNet BasicBlock

    Hi, I was trying to train with a resnet34 backbone and found a mismatch when loading the pretrained model.

    On the left is this repo's implementation, which is wrong; on the right is the correct one.

    opened by xpngzhng 1
  • can't download the backbone checkpoints in macOS

    It causes an error on macOS with Python 3.6. It's probably because Python 3.6 on OSX has no certificates at all and can't validate any SSL connections.

    urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:833)>
    

    Something more in this link: https://stackoverflow.com/questions/27835619/urllib-and-ssl-certificate-verify-failed-error

    opened by GeneralLi95 1
  • Support more datasets

    Nice job!

    • I was wondering whether you would support more datasets, like Cityscapes and COCO, since these models are also widely used in related papers.

    • Besides, would you continue to maintain this repo, just like MMDetection, so we can use it without worrying that it would be abandoned suddenly?

    Thanks!

    opened by Spritea 1
  • Updates around PSPNet

    This PR contains updates below:

    1. updates for PSPNet
       1.1. update metric results (relevant weights already uploaded to Google Drive)
       1.2. add config for ResNet-v1c backbone
    2. add option to train with ResNet-v1c backbone
    3. use SyncBN as default
    opened by DarthThomas 0
  • fix panoptic fpn

    Here is a list of modifications made while fixing panoptic FPN (in 8 commits):

    • update: redesigned junction block
      • junction block update
      • gfpn update
      • configs update
      • readme update
    • bugfix:
      • VOC data process script
      • test time augmentation
      • distribute test data gathering issue
    • style: use a new protocol to sort all imports
    opened by DarthThomas 0
  • Suggest to loosen the dependency on albumentations

    Hi, your project vedaseg requires "albumentations==0.4.1" in its dependencies. After analyzing the source code, we found that some other versions of albumentations can also be suitable without affecting your project, i.e., albumentations 0.4.0. Therefore, we suggest loosening the dependency on albumentations from "albumentations==0.4.1" to "albumentations>=0.4.0,<=0.4.1" to avoid possible conflicts when importing more packages or for downstream projects that may use vedaseg.

    May I open a pull request to loosen the dependency on albumentations?

    By the way, could you please tell us whether such dependency analysis could be helpful for making dependency maintenance easier during your development?



    For your reference, here are details in our analysis.

    Your project vedaseg (commit id: fa4ff42234176b05ef0dff8759c7e62a17498ab9) directly uses 5 APIs from the albumentations package.

    albumentations.augmentations.functional.scale, albumentations.core.transforms_interface.to_tuple, albumentations.core.transforms_interface.DualTransform.__init__, albumentations.core.composition.Compose.__init__, albumentations.augmentations.transforms.PadIfNeeded.__init__
    
    

    From these, 15 functions are then indirectly called, including 14 of albumentations' internal APIs and 1 outside API, as follows (neglecting some repeated function occurrences).

    [/Media-Smart/vedaseg]
    +--albumentations.augmentations.functional.scale
    |      +--albumentations.augmentations.functional.resize
    |      |      +--albumentations.augmentations.functional._maybe_process_in_chunks
    |      |      |      +--albumentations.augmentations.functional.get_num_channels
    |      |      |      +--numpy.dstack
    +--albumentations.core.transforms_interface.to_tuple
    +--albumentations.core.transforms_interface.DualTransform.__init__
    |      +--albumentations.core.transforms_interface.BasicTransform.__init__
    +--albumentations.core.composition.Compose.__init__
    |      +--albumentations.core.composition.BaseCompose.__init__
    |      |      +--albumentations.core.composition.Transforms.__init__
    |      |      |      +--albumentations.core.composition.Transforms._find_dual_start_end
    |      |      |      |      +--albumentations.core.composition.Transforms._find_dual_start_end
    |      +--albumentations.augmentations.bbox_utils.BboxProcessor.__init__
    |      |      +--albumentations.core.utils.DataProcessor.__init__
    |      +--albumentations.core.composition.BboxParams.__init__
    |      |      +--albumentations.core.utils.Params.__init__
    |      +--albumentations.augmentations.keypoints_utils.KeypointsProcessor.__init__
    |      |      +--albumentations.core.utils.DataProcessor.__init__
    |      +--albumentations.core.composition.KeypointParams.__init__
    |      |      +--albumentations.core.utils.Params.__init__
    |      +--albumentations.core.composition.BaseCompose.add_targets
    +--albumentations.augmentations.transforms.PadIfNeeded.__init__
    |      +--albumentations.core.transforms_interface.BasicTransform.__init__
    

    We scanned albumentations versions 0.4.0 and 0.4.1; the changed functions (diffs listed below) have no intersection with any function or API mentioned above (either directly or indirectly called by this project).

    diff: 0.4.1(original) 0.4.0
    ['albumentations.augmentations.transforms.Resize.apply_to_keypoint', 'albumentations.augmentations.transforms.RandomGridShuffle.__init__', 'albumentations.augmentations.transforms.RandomGridShuffle', 'albumentations.augmentations.transforms.Resize']
    
    

    As for other packages, the APIs of @outside_package_name are called by albumentations in the call graph and the dependencies on these packages also stay the same in our suggested versions, thus avoiding any outside conflict.

    Therefore, we believe it is quite safe to loosen your dependency on albumentations from "albumentations==0.4.1" to "albumentations>=0.4.0,<=0.4.1". This will improve the applicability of vedaseg and reduce the possibility of further dependency conflicts with other projects/packages.

    opened by Agnes-U 0
  • Why are there so many methods, and why do some files have only one method?

    Why make the code so complicated? I just want to see how you use transforms to augment the image. Starting from the dataset file, I have jumped through 6 pages and still haven't reached the final code...

    opened by Czshippee 2
  • AttributeError: module 'albumentations.augmentations.functional' has no attribute 'scale'

    DESCRIPTION

    vedaseg train fails getting AttributeError: module 'albumentations.augmentations.functional' has no attribute 'scale'.

    REPRODUCE PROCEDURE

    Use the current PyPI version of albumentations and execute training. I'm using the following versions of the software stack.

    docker image: pytorch/pytorch:1.7.1-cuda11.0-cudnn8-devel
    torch: 1.7.1
    torchvision: 0.8.2
    conda: 4.10.3
    Python: 3.8.10 (conda origin)
    imgaug: 0.4.0
    albumentations: 1.1.0 (current PyPI version)

    ANALYSIS and SUGGESTED RESOLUTION

    It looks like the scale() method in albumentations.augmentations.functional no longer exists in albumentations 1.1.0. The method exists at least until 0.5.1, and after downgrading albumentations the training process worked.

    Thus, I think it's now better to pin the albumentations version in requirements.txt as:

    albumentations==0.5.1

    rather than:

    albumentations>=0.4.1

    LOG

    The below is an excerpt from the stack trace I got.

    Original Traceback (most recent call last):
      File "/root/miniconda3/envs/py38/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 198, in _worker_loop
        data = fetcher.fetch(index)
      File "/root/miniconda3/envs/py38/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/root/miniconda3/envs/py38/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/work/vedaseg/tools/../vedaseg/datasets/voc.py", line 39, in __getitem__
        image, mask = self.process(img, [mask])
      File "/work/vedaseg/tools/../vedaseg/datasets/base.py", line 16, in process
        augmented = self.transform(image=image, masks=masks)
      File "/root/miniconda3/envs/py38/lib/python3.8/site-packages/albumentations/core/composition.py", line 210, in __call__
        data = t(force_apply=force_apply, **data)
      File "/root/miniconda3/envs/py38/lib/python3.8/site-packages/albumentations/core/transforms_interface.py", line 97, in __call__
        return self.apply_with_params(params, **kwargs)
      File "/root/miniconda3/envs/py38/lib/python3.8/site-packages/albumentations/core/transforms_interface.py", line 112, in apply_with_params
        res[key] = target_function(arg, **dict(params, **target_dependencies))
      File "/work/vedaseg/tools/../vedaseg/transforms/transforms.py", line 22, in apply
        return F.scale(image, scale, interpolation=self.interpolation)
    AttributeError: module 'albumentations.augmentations.functional' has no attribute 'scale'
    
    opened by thatsdone 3
  • How to train with custom dataset?

    I want to train with a custom dataset, and I have some questions:
    (1) My custom dataset has two folders, images and labels, where each label image is an RGB image that uses a different color for each object class. Should I organize this dataset in Pascal VOC format?
    (2) I need to adapt voc_unet.py for the custom dataset; Pascal VOC uses ignore_label for object boundaries, so how do I set ignore_label for my own custom dataset?
    (3) How should I set crop_size_h, crop_size_w = 513, 513? My custom dataset has image dimension 512x512.
    Thanks!

    opened by panovr 0
  • Validating Problem

    Hello, I am testing the model, but it shows the error below. How can I fix this? Thanks

    2021-08-15 16:53:59,781 - INFO - Start validating
    Traceback (most recent call last):
      File "tools/train.py", line 47, in <module>
        main()
      File "tools/train.py", line 43, in main
        runner()
      File "tools/../vedaseg/runners/train_runner.py", line 148, in __call__
        res = self._val()
      File "tools/../vedaseg/runners/train_runner.py", line 111, in _val
        for idx, (image, mask) in enumerate(self.val_dataloader):
      File "/root/anaconda3/envs/vedaseg/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
        data = self._next_data()
      File "/root/anaconda3/envs/vedaseg/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 838, in _next_data
        return self._process_data(data)
      File "/root/anaconda3/envs/vedaseg/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 881, in _process_data
        data.reraise()
      File "/root/anaconda3/envs/vedaseg/lib/python3.6/site-packages/torch/_utils.py", line 394, in reraise
        raise self.exc_type(msg)
    RuntimeError: Caught RuntimeError in DataLoader worker process 1.
    Original Traceback (most recent call last):
      File "/root/anaconda3/envs/vedaseg/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
        data = fetcher.fetch(index)
      File "/root/anaconda3/envs/vedaseg/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
        return self.collate_fn(data)
      File "/root/anaconda3/envs/vedaseg/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 79, in default_collate
        return [default_collate(samples) for samples in transposed]
      File "/root/anaconda3/envs/vedaseg/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 79, in <listcomp>
        return [default_collate(samples) for samples in transposed]
      File "/root/anaconda3/envs/vedaseg/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 55, in default_collate
        return torch.stack(batch, 0, out=out)
    RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 641 and 691 in dimension 2 at /pytorch/aten/src/TH/generic/THTensor.cpp:612

    opened by lun-lun-byte 1
Releases: v2.1.2
Code for "3D Human Pose and Shape Regression with Pyramidal Mesh Alignment Feedback Loop"

PyMAF This repository contains the code for the following paper: 3D Human Pose and Shape Regression with Pyramidal Mesh Alignment Feedback Loop Hongwe

Hongwen Zhang 450 Dec 28, 2022
A deep-learning pipeline for segmentation of ambiguous microscopic images.

Welcome to Official repository of deepflash2 - a deep-learning pipeline for segmentation of ambiguous microscopic images. Quick Start in 30 seconds se

Matthias Griebel 39 Dec 19, 2022
Implementation detail for paper "Multi-level colonoscopy malignant tissue detection with adversarial CAC-UNet"

Multi-level-colonoscopy-malignant-tissue-detection-with-adversarial-CAC-UNet Implementation detail for our paper "Multi-level colonoscopy malignant ti

CVSM Group - email: <a href=[email protected]"> 84 Nov 22, 2022
LV-BERT: Exploiting Layer Variety for BERT (Findings of ACL 2021)

LV-BERT Introduction In this repo, we introduce LV-BERT by exploiting layer variety for BERT. For detailed description and experimental results, pleas

Weihao Yu 14 Aug 24, 2022
GPT, but made only out of gMLPs

GPT - gMLP This repository will attempt to crack long context autoregressive language modeling (GPT) using variations of gMLPs. Specifically, it will

Phil Wang 80 Dec 01, 2022
Optimized code based on M2 for faster image captioning training

Transformer Captioning This repository contains the code for Transformer-based image captioning. Based on meshed-memory-transformer, we further optimi

lyricpoem 16 Dec 16, 2022
Hyperbolic Procrustes Analysis Using Riemannian Geometry

Hyperbolic Procrustes Analysis Using Riemannian Geometry The code in this repository creates the figures presented in this article: Please notice that

Ronen Talmon's Lab 2 Jan 08, 2023
Source code related to the article submitted to the International Conference on Computational Science ICCS 2022 in London

POTHER: Patch-Voted Deep Learning-based Chest X-ray Bias Analysis for COVID-19 Detection Source code related to the article submitted to the Internati

Tomasz Szczepański 1 Apr 29, 2022
This is the official code release for the paper Shape and Material Capture at Home

This is the official code release for the paper Shape and Material Capture at Home. The code enables you to reconstruct a 3D mesh and Cook-Torrance BRDF from one or more images captured with a flashl

89 Dec 10, 2022
CROSS-LINGUAL ABILITY OF MULTILINGUAL BERT: AN EMPIRICAL STUDY

M-BERT-Study CROSS-LINGUAL ABILITY OF MULTILINGUAL BERT: AN EMPIRICAL STUDY Motivation Multilingual BERT (M-BERT) has shown surprising cross lingual a

CogComp 1 Feb 28, 2022
Code to use Augmented Shapiro Wilks Stopping, as well as code for the paper "Statistically Signifigant Stopping of Neural Network Training"

This codebase is being actively maintained, please create and issue if you have issues using it Basics All data files are included under losses and ea

J K Terry 32 Nov 09, 2021
Practical Blind Denoising via Swin-Conv-UNet and Data Synthesis

Practical Blind Denoising via Swin-Conv-UNet and Data Synthesis [Paper] [Online Demo] The following results are obtained by our SCUNet with purely syn

Kai Zhang 312 Jan 07, 2023
Generate text captions for images from their CLIP embeddings. Includes PyTorch model code and example training script.

clip-text-decoder Generate text captions for images from their CLIP embeddings. Includes PyTorch model code and example training script. Example Predi

Frank Odom 36 Dec 21, 2022
Vision Transformer and MLP-Mixer Architectures

Vision Transformer and MLP-Mixer Architectures Update (2.7.2021): Added the "When Vision Transformers Outperform ResNets..." paper, and SAM (Sharpness

Google Research 6.4k Jan 04, 2023
The world's simplest facial recognition api for Python and the command line

Face Recognition You can also read a translated version of this file in Chinese 简体中文版 or in Korean 한국어 or in Japanese 日本語. Recognize and manipulate fa

Adam Geitgey 46.9k Jan 03, 2023
Code for the CVPR 2021 paper: Understanding Failures of Deep Networks via Robust Feature Extraction

Welcome to Barlow Barlow is a tool for identifying the failure modes for a given neural network. To achieve this, Barlow first creates a group of imag

Sahil Singla 33 Dec 05, 2022
[ICRA 2022] CaTGrasp: Learning Category-Level Task-Relevant Grasping in Clutter from Simulation

This is the official implementation of our paper: Bowen Wen, Wenzhao Lian, Kostas Bekris, and Stefan Schaal. "CaTGrasp: Learning Category-Level Task-R

Bowen Wen 199 Jan 04, 2023
Learning Super-Features for Image Retrieval

Learning Super-Features for Image Retrieval This repository contains the code for running our FIRe model presented in our ICLR'22 paper: @inproceeding

NAVER 101 Dec 28, 2022
This is an early in-development version of training CLIP models with hivemind.

A transformer that does not hog your GPU memory This is an early in-development codebase: if you want a stable and documented hivemind codebase, look

<a href=[email protected]"> 4 Nov 06, 2022
MEAL V2: Boosting Vanilla ResNet-50 to 80%+ Top-1 Accuracy on ImageNet without Tricks

MEAL-V2 This is the official pytorch implementation of our paper: "MEAL V2: Boosting Vanilla ResNet-50 to 80%+ Top-1 Accuracy on ImageNet without Tric

Zhiqiang Shen 653 Dec 19, 2022