A deep learning framework for historical document image analysis


DIVA-DAF

Built on PyTorch Lightning with a Hydra config template.

Description

A deep learning framework for historical document image analysis.

How to run

Install dependencies

# clone project
git clone https://github.com/DIVA-DIA/unsupervised_learning.git
cd unsupervised_learning

# create conda environment (IMPORTANT: needs Python 3.8+)
conda env create -f conda_env_gpu.yaml

# activate the environment using .autoenv
source .autoenv

# install requirements
pip install -r requirements.txt

Train the model with the default configuration. Note: you first need to change the value of data_dir in config/datamodule/cb55_10_cropped_datamodule.yaml so that it points to your local copy of the dataset.
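For orientation, the relevant entry might look like the following minimal sketch (the path and surrounding layout are assumptions, not copied from the repository):

# config/datamodule/cb55_10_cropped_datamodule.yaml (illustrative excerpt)
data_dir: /my/path/to/the/dataset  # replace with your local dataset location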

# default run based on config/config.yaml
python run.py

# train on CPU
python run.py trainer.gpus=0

# train on GPU
python run.py trainer.gpus=1

Train using GPU

# [default] train on all available GPUs
python run.py trainer.gpus=-1

# train on one GPU
python run.py trainer.gpus=1

# train on two GPUs
python run.py trainer.gpus=2

# train on CPU
python run.py trainer.accelerator=ddp_cpu

Train using CPU for debugging

# train on CPU
python run.py trainer.accelerator=ddp_cpu trainer.precision=32

Train the model with a chosen experiment configuration from configs/experiment/:

python run.py +experiment=experiment_name
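Experiment configs are plain Hydra override files; as a minimal sketch (the file name, keys, and values are illustrative assumptions based on the override examples below, not taken from the repository), an entry in configs/experiment/ could look like:

# configs/experiment/my_experiment.yaml (hypothetical example)
# @package _global_  # assumption: configs follow the usual Hydra global-package convention
trainer:
    max_epochs: 20
datamodule:
    batch_size: 64

Such a file would then be run with python run.py +experiment=my_experiment.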

You can override any parameter from the command line like this:

python run.py trainer.max_epochs=20 datamodule.batch_size=64

Setup PyCharm

  1. Fork this repo
  2. Clone the repo to your local filesystem (git clone CLONELINK)
  3. Clone the repo onto your remote machine
  4. Move into the folder on your remote machine and create the conda environment (conda env create -f conda_env_gpu.yaml)
  5. Run source .autoenv in the root folder on your remote machine (activates the environment)
6. Open the folder in PyCharm (File -> Open)
  7. Add the interpreter (Preferences -> Project -> Python Interpreter -> top left gear icon -> Add... -> SSH Interpreter) and follow the instructions (set the correct mapping to enable deployment)
  8. Upload the files (deployment)
  9. Create a wandb account (wandb.ai)
10. Log in to your remote machine via SSH
  11. Go to the root folder of the framework and activate the environment (source .autoenv OR conda activate unsupervised_learning)
  12. Log in to wandb: execute wandb login and follow the instructions
  13. Now you should be able to run the basic experiment from PyCharm.

Loading models

You can load the individual model parts (backbone or header) as well as the whole task. To load the backbone or the header, add the field path_to_weights to your experiment config, e.g.:

model:
    header:
        path_to_weights: /my/path/to/the/pth/file

To load the whole task, provide the path to the task checkpoint to the trainer via the field resume_from_checkpoint, e.g.:

trainer:
    resume_from_checkpoint: /path/to/.ckpt/file

Freezing model parts

You can freeze either part of the model (backbone or header) with the freeze flag in the config. E.g., to freeze the backbone from the command line:

python run.py +model.backbone.freeze=True

In the config (e.g. model/backbone/baby_unet_model.yaml):

...
freeze: True
...

CARE: You cannot train a model that has no trainable parameters (e.g. when both backbone and header are frozen).

Selection in datasets

If you use the selection key, you can either use an int, which takes the first n files, or a list of strings to filter the different datasets. If you are using a full-page dataset, be aware that the selection list contains file names without the extension.
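As a minimal sketch of both forms (assuming the selection key sits directly in the datamodule config; the file names are placeholders):

# take only the first 10 files
datamodule:
    selection: 10

# take only the listed pages (file names without extension)
datamodule:
    selection:
        - page_0001
        - page_0002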

Cite us

@misc{vögtlin2022divadaf,
      title={DIVA-DAF: A Deep Learning Framework for Historical Document Image Analysis}, 
      author={Lars Vögtlin and Paul Maergner and Rolf Ingold},
      year={2022},
      eprint={2201.08295},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
Comments
  • Not working with ddp_cpu

Describe the bug: If we want to run the framework with ddp_cpu as the accelerator, it won't work, as it has a working-directory problem.

    To Reproduce: python run.py trainer.accelerator='ddp_cpu' trainer.precision=32

    Expected behavior: We can use ddp_cpu to debug our system.

    Additional context: To avoid this problem, at the moment we can just use the full path to the run.py file ($PWD/run.py).

    Checklist

• [ ] Add a warning if ddp_cpu and not precision=32
    bug If time Pipeline 
    opened by lvoegtlin 3
  • Use deepspeed to speed up the training

Is your feature request related to a problem? Please describe. To accelerate the training, we could use the deepspeed plugin.

    Describe the solution you'd like: Make it possible to activate deepspeed through the config.

    Checklist

    • [x] Test deepspeed
    • [ ] Include it into the config system
    wontfix If time Pipeline 
    opened by lvoegtlin 3
  • Load model checkpoint instead of default init

Differentiate between train, test, and train-and-test. Already started with two parameters, train and test, to define which part of the process should be done. Need to include loading from a ckpt for fine-tuning or just testing.

    https://pytorch-lightning.readthedocs.io/en/stable/common/weights_loading.html

Possibly, weights_only would work.

We need to make our own callback which inherits from ModelCheckpoint and overrides/adds a model-only checkpoint save (https://github.com/PyTorchLightning/pytorch-lightning/blob/bca5adf6de1ae74c7103839aac54c8648464bee6/pytorch_lightning/callbacks/model_checkpoint.py#L485)

    Checklist

    • [x] test check if path_to_weights is set
    • [x] load model state from path
    • [x] create a generic model which takes an encoder and a header (configs)
    • [x] #15
    • [x] save model with a callback (create callback)
    • [x] if we are just testing we need a path_to_weights for both
    Important Module Pipeline 
    opened by lvoegtlin 3
• Updating dependencies

    Description

Updates PL, torchmetrics, and pytest to the newest versions. Also introduces code coverage with SonarCloud. Each PR will now be tested for test coverage.

    How to Test/Run?

    pytest

    opened by lvoegtlin 2
  • Fixed problem with multiple empty folders in checkpoints

    Description

The checkpoint callback created the checkpoints in a dedicated epochs folder. The folder should get deleted when it is no longer the best, but this also did not work with the built-in version of the model checkpoint callback. Solved it by doing a clean-up at the end of the experiment.

    How to Test/Run?

    python run.py trainer.max_epochs=20

    opened by lvoegtlin 2
  • Feature/datamodule for gif imgs

    Description

A datamodule that takes advantage of the index format. It no longer determines the classes by their color but takes the classes directly from the raw image and uses the palette as the class encoding.

    How to Test/Run?

    pytest or python run.py experiment=development_baby_unet_indexed.yaml

    opened by lvoegtlin 2
  • DDP metric bias

Is your feature request related to a problem? Please describe. When running an experiment with DDP, we have a little data bias if the dataset size is not divisible by batch_size * num_processors. To make users aware of this problem, we can add a warning if num_samples % (batch_size * num_processors) != 0. Problem described here.

    Describe the solution you'd like: Raising an error if the condition from above is not met. Also, add a flag to ignore this error (ignore_ddp_bias).

    Describe alternatives you've considered: Solve it with the DDP join function from PyTorch, but it is very hard to hack that into PL.

    Checklist

    • [x] Create check and warning
    • [x] Add shuffle and drop_last_batch options to datamodule config
    • [x] Add shuffle/drop_last_batch to default config files
    enhancement Pipeline 
    opened by lvoegtlin 2
  • Add the strict parameter to make it possible to load non-fitting models

    Describe the feature

    Make it possible to transfer weights between similar models

    Describe the solution you'd like

A parameter strict in the models which defines how to load the weights when the model does not exactly fit the weights file

    Checklist

    • [x] Add this parameter in the model config
    • [x] Use it to load the model
    • [x] Add log for missed/unexpected keys
    If time Module Pipeline 
    opened by lvoegtlin 2
  • Loss function as config

    Is your feature request related to a problem? Please describe. Make it possible to define the loss function in the config.

Describe the solution you'd like: Define some default loss functions and create a config for them. Then hand the criterion object over to the task at the beginning of the training.

    Checklist

    • [ ] define 4 basic losses (Xentropy, L1, MSE, BCE)
    • [ ] create configs
    • [ ] hand over the loss function as a parameter to the task
    enhancement If time Module Pipeline 
    opened by lvoegtlin 2
  • Specify metric via callback

    Is your feature request related to a problem? Please describe. To make the system more flexible we have to implement the metrics with callbacks s.t. we can combine multiple metrics and also reuse them in other tasks.

Describe the solution you'd like: Implement mIoU (jar fashion), precision, recall, and accuracy as metric callbacks. Call metrics at the end of the steps (see). Also make sure that when we are testing in DDP, we just run it on one GPU or with join (documentation of join) (look here or here)

    Checklist

    • [x] Implement DIVA HisDB metric class (our metric)
    • [x] Metric which is exactly like the jar
    • [x] Create config for mIoU
    enhancement If time Module Pipeline 
    opened by lvoegtlin 2
  • Feature/add fcn

    Description

UNet now has a swappable classifier. This makes working with it much easier, as we can easily fine-tune it on a dataset with more or fewer classes.

    How to Test/Run?

    pytest or python run.py

    opened by lvoegtlin 1
  • Training/validation and test time

Is your feature request related to a problem? Please describe. Get the exact time for the training (incl. validation) and the testing in seconds. This can be reported overall as well as per epoch. The setup time of the framework should be excluded.

Describe the solution you'd like: Log these times to the loggers in use and report them in the experiment summary file.

    Checklist

    • [ ] Check if PL already provides such a feature
    • [ ] Create timers for the different phases
    • [ ] Report these times
    • [ ] Test
    • [ ] PR
    opened by lvoegtlin 1
  • More complex return

Is your feature request related to a problem? Please describe. Let the framework return more information, like best model path, metric, etc., as a dictionary, s.t. calling files can chain together multiple framework runs.

Describe the solution you'd like: With a dictionary.

    Checklist

• [ ] Check what return information is needed
    • [ ] Add it to the execution class
    • [ ] Test
    • [ ] PR
    enhancement Needed Config 
    opened by lvoegtlin 0
  • Rework the backbone header model

Is your feature request related to a problem? Please describe. Think about the current BackboneHeader model and try to adapt it to the new needs, possibly changing it to a new model.

    Checklist

    • [ ] Evaluate the existing model with the new needs
    • [ ] Think about solutions
    • [ ] Prototype the solutions
    • [ ] Implementation (models, workflow, callbacks)
    • [ ] Config adaption
    • [ ] Test
    • [ ] PR
    enhancement Needed Config 
    opened by lvoegtlin 0
  • Test if possible conf_mat from base_task into a callback

    Is your feature request related to a problem? Please describe. The problem before with the conf mat callback was that it had a semaphore leak. As described here (https://github.com/ashleve/lightning-hydra-template/issues/189#issuecomment-1003532448), it should work now with the usage of torchmetrics.

    Checklist

    • [ ] Factor the conf mat log into callback
• [ ] Extensive testing
    • [ ] Tests
    • [ ] PR
    enhancement Config 
    opened by lvoegtlin 0
  • Update hydra to 1.2

    Is your feature request related to a problem? Please describe. Update hydra to the newest version

    Checklist

    • [ ] update
    • [ ] adapt code
    • [ ] test
    • [ ] PR
    enhancement 
    opened by lvoegtlin 0
  • Hyperparameter optimization

    Is your feature request related to a problem? Please describe. Create a possibility to do hyperparameter optimization with the framework

    Checklist

    • [ ] Check out which one works best
    • [ ] integrate it or use it as a script
    • [ ] Test
    • [ ] PR
    enhancement 
    opened by lvoegtlin 0
Releases (version_0.2.2)
  • version_0.2.2(Jun 24, 2022)

    What's Changed

    • Experiment for rotnet with unet backbone by @lvoegtlin in https://github.com/DIVA-DIA/DIVA-DAF/pull/101
    • Created additional tests by @lvoegtlin in https://github.com/DIVA-DIA/DIVA-DAF/pull/100
    • Updated the version on PL to 1.5.10 by @lvoegtlin in https://github.com/DIVA-DIA/DIVA-DAF/pull/112
• Added tests for RolfFormat datamodule and RGB tasks by @lvoegtlin in https://github.com/DIVA-DIA/DIVA-DAF/pull/114
    • Release 0.2.2 by @lvoegtlin in https://github.com/DIVA-DIA/DIVA-DAF/pull/113

    Full Changelog: https://github.com/DIVA-DIA/DIVA-DAF/compare/version_0.2.1...version_0.2.2

  • version_0.2.1(Dec 2, 2021)

    What's Changed

    • Fixed selection parameter, removed todos, improved print_config, added self to configs by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/87
    • Added tests for tasks and fixed merge scripts by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/89
    • New log folder structure by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/91
    • Replacing numpy with torch in divahisdb functional by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/93
    • Rename config saved during a run, and print commands to rerun a run by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/95
    • Release 0.2.1 by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/98

    Full Changelog: https://github.com/DIVA-DIA/unsupervised_learning/compare/version_0.2.0...version_0.2.1

  • version_0.2.0(Nov 25, 2021)

    Some new things

    • new architectures (resnet)
    • new datamodules (rolf format, RGB, full-page, and SSL)
    • different bug fixes
    • experiment configs
    • refactoring and deletion of unused code
    • callback to check the compatibility of backbone and header
    • inference/prediction stage (list of files with regex)
    • freezing header or backbone
    • improved readme
    • improved testing

    What's Changed

    • Dev data refactoring by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/74
    • Dev rgb encoding by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/76
    • RotNet by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/75
    • log more by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/77
    • More architectures by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/78
    • Dev fixing tests by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/79
    • Created resnet FCN header by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/83
    • Dev rolf data format by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/84
    • Introduce inference/prediction and refactoring by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/85
    • release 0.2.0 by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/86

    Full Changelog: https://github.com/DIVA-DIA/unsupervised_learning/compare/version_0.1.1...version_0.2.0

  • version_0.1.1(Oct 22, 2021)

    Changelog:

    • fixed conf mat
    • optimized test and validation step
    • improved merging of crops
    • more metrics and optimizers
    • updated requirements

    What's Changed

    • made tests running also in the terminal by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/60
    • fixed evaluation tool problem by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/62
    • adding new optimiser configs by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/64
    • removed unused dependency by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/65
    • Dev improve datamodule tests by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/66
    • Dev fixing conf and f1 heatmap by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/68
    • :art: each worker of the dl gets now an own seed by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/69
    • Dev reduce gpu memory by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/71
    • upload run config by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/72
    • release version 0.1.1 by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/73

    Full Changelog: https://github.com/DIVA-DIA/unsupervised_learning/compare/version_0.1.0...version_0.1.1

  • version_0.1.0(Oct 6, 2021)

    The first version of the framework

    What's Changed

    • Dev 38 create hydra configs by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/1
    • Dev 47 better logger name by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/3
    • Dev 43 configurable optimizers by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/2
    • Dev 44 load model checkpoint by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/16
    • dev synced metric logging by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/17
    • When DDP num_workers = 0 was forced by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/19
    • Resolve ddp warning by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/20
    • Add strict parameter by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/21
    • Config refinement by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/23
    • Save config file for each run by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/28
    • add env by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/29
    • Dev 25 torchmetric introduction by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/30
    • Removed custom hydra config by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/32
    • Dev 24 abstract task class by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/33
    • Dev 26 loading warning improvements by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/34
    • update pl to 1.4.4 by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/36
    • Loss functions as config by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/37
    • ddp cpu not working by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/39
    • Dev shuffle data option by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/44
    • Dev dataset selected pages by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/49
    • Dev 9 metric as config by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/47
    • Fix conf mat and extend by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/51
    • Save metrics to csv by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/52
    • Check backbone header compatibility by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/53
    • abstract datamodule and resolvers by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/56
    • Dev refactoring and tests by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/57
    • Dev 34 refactoring semantic segmentation by @powl7 in https://github.com/DIVA-DIA/unsupervised_learning/pull/58
    • Version 0.1.0 of the fw by @lvoegtlin in https://github.com/DIVA-DIA/unsupervised_learning/pull/59

    Full Changelog: https://github.com/DIVA-DIA/unsupervised_learning/commits/version_0.1.0
