deepflash2

A deep-learning pipeline for segmentation of ambiguous microscopic images.

Overview

Welcome to the official repository of deepflash2.



Quick Start in 30 seconds

Run the quick start in Google Colab; a short setup video (setup5.mov) walks through the steps.

Why use deepflash2?

The best of both worlds: combining state-of-the-art deep learning with a barrier-free environment for life science researchers.

  • End-to-end process for life science researchers
    • Graphical user interface - no coding skills required
    • Free usage on Google Colab
    • Easy deployment on your own hardware
  • Reliable prediction on new data
    • Quality assurance and out-of-distribution detection

Kaggle Gold Medal and Innovation Prize Winner

deepflash2 is not limited to fluorescent labels. The deepflash2 API built the foundation for winning the Innovation Award and a Kaggle Gold Medal in the HuBMAP - Hacking the Kidney challenge. Have a look at our solution.


Citing

The preprint of our paper is available on arXiv. Please cite:

@misc{griebel2021deepflash2,
    title={Deep-learning in the bioimaging wild: Handling ambiguous data with deepflash2}, 
    author={Matthias Griebel and Dennis Segebarth and Nikolai Stein and Nina Schukraft and Philip Tovote and Robert Blum and Christoph M. Flath},
    year={2021},
    eprint={2111.06693}
}

Installing

You can use deepflash2 via Google Colab without any local installation. Every page of the documentation can be run as an interactive notebook - click "Open in Colab" at the top of any page to open it.

  • Be sure to change the Colab runtime to "GPU" to have it run fast!
  • Use Firefox or Google Chrome if you want to upload your images.

You can install deepflash2 on your own machine with conda (highly recommended):

conda install -c fastchan -c matjesg deepflash2 

To install with pip, use

pip install deepflash2

If you install with pip, you should install PyTorch first by following the installation instructions for PyTorch or fastai.
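
For example, a typical two-step installation might look like this (the exact PyTorch command depends on your platform and CUDA version; see the PyTorch installation instructions):

pip install torch torchvision
pip install deepflash2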

Using Docker

Docker images for deepflash2 are built on top of the latest PyTorch and fastai images. You must install Nvidia-Docker to enable GPU support for these containers.

  • CPU only

docker run -p 8888:8888 matjesg/deepflash

  • With GPU support (Nvidia-Docker must be installed); the image includes an editable install of fastai and fastcore.

docker run --gpus all -p 8888:8888 matjesg/deepflash

All Docker containers are configured to start a Jupyter server. deepflash2 notebooks are available in the deepflash2_notebooks folder.

For more information on how to run Docker, see Docker orientation and setup and fastai docker.

Creating segmentation masks with Fiji/ImageJ

If you don't have labelled training data available, you can use this instruction manual for creating segmentation masks. The ImageJ macro is available here.

Acronym

A Deep-learning pipeline for Fluorescent Label Segmentation that learns from Human experts

Comments
  • Ensembling Issue

    @matjesg
    Another issue comes up when ensembling two models: somehow there is either a memory leak or a GPU RAM leak, so our submissions are failing. We are clueless about this - it succeeded once but is not succeeding at all now. Any help would be appreciated.

    We use an ensembler model that takes the mean of the predictions of the models we want to ensemble. A single model works fine; ensembling two models fails.

    1. How can we use the EnsembleLearner class? Will it help? (See the averaging sketch after this issue.)
    opened by jaideep11061982 5
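
    A minimal averaging sketch in plain PyTorch, not the deepflash2 API (models and batch are placeholders). Detaching each prediction and moving it to the CPU right away lets intermediate GPU activations be freed, which is a common cause of apparent memory leaks during ensembling:

    import torch

    @torch.no_grad()  # inference only: no autograd graph is kept, which reduces memory use
    def ensemble_predict(models, batch):
        preds = []
        for model in models:
            model.eval()
            out = torch.softmax(model(batch), dim=1)  # per-model class probabilities
            preds.append(out.detach().cpu())          # release GPU memory as early as possible
        return torch.stack(preds).mean(dim=0)         # mean over the ensemble
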
  • Approach for Cross Validation and Seeding

    Thanks for this wonderful framework.

    It would be really helpful if you can help me with an approach for K-fold cross validation.

    With the help of the classes RandomTileDataset and TileDataset, I'm able to sample tiles from TIFF files. However, how can I perform k-fold cross-validation over the entire training data, picking some validation tiles in the first round of training and other tiles in subsequent rounds, without leakage of data between the validation folds?

    train_ds = RandomTileDataset(files, ...)
    valid_ds = TileDataset(files, ...)
    

    Also, please let me know whether RandomTileDataset and TileDataset pick tiles without leakage between the two datasets. That is, are both splits exclusive of each other? Is there a chance that some of the validation tiles picked by TileDataset are also present in the random tiles picked by RandomTileDataset?

    On a related note, please help me with the seeds that should be set for the entire pipeline, covering steps such as dataset/data loader preparation, training, and inference. (See the splitting and seeding sketch after this issue.)

    Thanks in advance.

    opened by sreevishnu-damodaran 2
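
    One possible approach, sketched under the assumption that RandomTileDataset and TileDataset accept a list of image files as in the snippet above: split at the image level, so that no tile from a validation image can appear in the training set of the same fold, and seed all relevant random number generators once up front (file path and fold count are placeholders).

    import random
    from pathlib import Path
    import numpy as np
    import torch
    from sklearn.model_selection import KFold

    def set_seed(seed=42):
        # seed Python, NumPy, and PyTorch (CPU and GPU) in one place
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)

    files = sorted(Path("images").glob("*.tif"))  # one entry per training image
    set_seed(42)
    kf = KFold(n_splits=5, shuffle=True, random_state=42)
    for fold, (train_idx, valid_idx) in enumerate(kf.split(files)):
        train_files = [files[i] for i in train_idx]
        valid_files = [files[i] for i in valid_idx]
        print(f"fold {fold}: {len(train_files)} train / {len(valid_files)} valid images")
        # train_ds = RandomTileDataset(train_files, ...)
        # valid_ds = TileDataset(valid_files, ...)
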
  • initial commit new features

    New features

    • Multiclass GT Estimation, closes #34
    • Torchscript ensemble class for inference / tta adjusted
    • ONNX export possible (see the generic export sketch after this issue)

    Major changes

    • Different classes for training (EnsembleLearner) and Inference (EnsemblePredictor)
    • Normalization based on uint8 images (0...255)
    enhancement 
    opened by matjesg 1
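
    A generic sketch of the ONNX export mentioned above, using plain torch.onnx rather than the deepflash2 API (the model and input shape are stand-ins for a trained segmentation network and its tile size):

    import torch

    model = torch.nn.Conv2d(1, 2, kernel_size=3, padding=1).eval()  # placeholder model
    dummy = torch.randn(1, 1, 256, 256)                             # example input tile
    torch.onnx.export(model, dummy, "model.onnx",
                      input_names=["image"], output_names=["logits"])
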
  • Add train and predict tutorial

    Add the notebook tutorial_train_and_pred.ipynb to give users an introduction to code-based training and prediction for example data from Google Drive, zip files, or their own directory. Available example datasets: "PV_in_HC", "cFOS_in_HC", "mScarlet_in_PAG", "YFP_in_CTX", "GFAP_in_HC"

    opened by nicoelbert 1
  • Error in opening pdf and labels

    Hi there, thanks for this wonderful library. I am getting this error; I think it is because of the directory management. I have a dataset of masks that I have sampled into pdf and labels, but they are in separate directories under the same root folder. I am not able to create a tile dataset using your RandomTileDataset or TileDataset. The path I am using is '/content/masks_scale2/' and it has two sub-folders, labels and pdf. (See the diagnostic sketch after this issue.)

    /usr/local/lib/python3.7/dist-packages/deepflash2/data.py in _read_msk(path, n_classes, instance_labels, **kwargs)
        268         msk = tifffile.imread(path, **kwargs)
        269     else:
    --> 270         msk = imageio.imread(path, **kwargs)
        271     if not instance_labels:
        272         if np.max(msk)>n_classes:

    /usr/local/lib/python3.7/dist-packages/imageio/core/functions.py in imread(uri, format, **kwargs)
        219
        220     # Get reader and read first
    --> 221     reader = read(uri, format, "i", **kwargs)
        222     with reader:
        223         return reader.get_data(0)

    /usr/local/lib/python3.7/dist-packages/imageio/core/functions.py in get_reader(uri, format, mode, **kwargs)
        137     if format is None:
        138         raise ValueError(
    --> 139             "Could not find a format to read the specified file " "in mode %r" % mode
        140         )

    ValueError: Could not find a format to read the specified file in mode 'i'

    opened by mu745511 1
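
    A small diagnostic sketch for the error above, using the paths given in the issue: it tries to read every file in the two sub-folders with imageio and prints the ones that fail, since "Could not find a format" usually points to an unsupported or misnamed file type:

    from pathlib import Path
    import imageio

    root = Path("/content/masks_scale2")
    for sub in ("pdf", "labels"):
        for f in sorted((root / sub).glob("*")):
            try:
                imageio.imread(f)  # the same call that fails inside deepflash2
            except Exception as e:
                print(f"cannot read {f}: {e}")
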
  • Prediction not possible on Windows machines due to cuda error

    During prediction on a Windows machine, an OS error occurs:

    "RuntimeError: cuda runtime error (801) : operation not supported at ..\torch/csrc/generic/StorageSharing.cpp:247"

    Problem: Storage sharing is currently not supported on Windows.

    Proposed solution: The ensemble learner takes a "num_workers" argument and passes it to subsequent functions. If num_workers == 0, prediction works for me. (Illustrated in the sketch after this issue.)

    bug 
    opened by Maddonix 1
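
    The same workaround expressed in plain PyTorch (the dataset here is a placeholder): multi-process data loading relies on tensor storage sharing, which is what fails on Windows, whereas num_workers=0 keeps all loading in the main process:

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    ds = TensorDataset(torch.zeros(8, 1, 64, 64))     # placeholder dataset
    dl = DataLoader(ds, batch_size=2, num_workers=0)  # no worker processes, so no storage sharing
    for (batch,) in dl:
        pass                                          # prediction would happen here
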
  • Type Error when starting training

    When I try to start the training process in Google Colab, this error occurs:

    TypeError: no implementation found for 'torch.nn.functional.cross_entropy' on types that implement torch_function: [<class 'fastai.torch_core.TensorImage'>, <class 'fastai.torch_core.TensorMask'>]

    as well as

    FileNotFoundError: [Errno 2] No such file or directory: 'models/model.pth'

    in the end.

    Hope you know what the problem is.

    bug 
    opened by AmSchulte 1
  • Groundtruth calculation is broken since update of simpleITK

    The calculation of the ground truth has been broken since the release of SimpleITK 2.0.

    A workaround is to add "!pip install SimpleITK==1.2.4" manually to the Google Colab notebook. Maybe it would be good to add a requirements.txt to the installation and set an upper version bound on the installed dependencies.

    opened by AmSchulte 1
  • Bump nokogiri from 1.13.9 to 1.13.10 in /docs

    Bumps nokogiri from 1.13.9 to 1.13.10.

    Release notes

    Sourced from nokogiri's releases.

    1.13.10 / 2022-12-07

    Security

    • [CRuby] Address CVE-2022-23476, unchecked return value from xmlTextReaderExpand. See GHSA-qv4q-mr5r-qprj for more information.

    Improvements

    • [CRuby] XML::Reader#attribute_hash now returns nil on parse errors. This restores the behavior of #attributes from v1.13.7 and earlier. [#2715]

    Commits
    • 4c80121 version bump to v1.13.10
    • 85410e3 Merge pull request #2715 from sparklemotion/flavorjones-fix-reader-error-hand...
    • 9fe0761 fix(cruby): XML::Reader#attribute_hash returns nil on error
    • 3b9c736 Merge pull request #2717 from sparklemotion/flavorjones-lock-psych-to-fix-bui...
    • 2efa87b test: skip large cdata test on system libxml2
    • 3187d67 dep(dev): pin psych to v4 until v5 builds in CI
    • a16b4bf style(rubocop): disable Minitest/EmptyLineBeforeAssertionMethods
    • See full diff in compare view


    dependencies 
    opened by dependabot[bot] 0
  • Bump nokogiri from 1.13.6 to 1.13.9 in /docs

    Bumps nokogiri from 1.13.6 to 1.13.9.

    Release notes

    Sourced from nokogiri's releases.

    1.13.9 / 2022-10-18

    Security

    Dependencies

    • [CRuby] Vendored libxml2 is updated to v2.10.3 from v2.9.14.
    • [CRuby] Vendored libxslt is updated to v1.1.37 from v1.1.35.
    • [CRuby] Vendored zlib is updated from 1.2.12 to 1.2.13. (See LICENSE-DEPENDENCIES.md for details on which packages redistribute this library.)

    Fixed

    • [CRuby] Nokogiri::XML::Namespace objects, when compacted, update their internal struct's reference to the Ruby object wrapper. Previously, with GC compaction enabled, a segmentation fault was possible after compaction was triggered. [#2658] (Thanks, @​eightbitraptor and @​peterzhu2118!)
    • [CRuby] Document#remove_namespaces! now defers freeing the underlying xmlNs struct until the Document is GCed. Previously, maintaining a reference to a Namespace object that was removed in this way could lead to a segfault. [#2658]


    1.13.8 / 2022-07-23

    Deprecated

    • XML::Reader#attribute_nodes is deprecated due to incompatibility between libxml2's xmlReader memory semantics and Ruby's garbage collector. Although this method continues to exist for backwards compatibility, it is unsafe to call and may segfault. This method will be removed in a future version of Nokogiri, and callers should use #attribute_hash instead. [#2598]

    Improvements

    • XML::Reader#attribute_hash is a new method to safely retrieve the attributes of a node from XML::Reader. [#2598, #2599]

    Fixed

    ... (truncated)

    Changelog

    Sourced from nokogiri's changelog.

    1.13.9 / 2022-10-18

    Security

    Dependencies

    • [CRuby] Vendored libxml2 is updated to v2.10.3 from v2.9.14.
    • [CRuby] Vendored libxslt is updated to v1.1.37 from v1.1.35.
    • [CRuby] Vendored zlib is updated from 1.2.12 to 1.2.13. (See LICENSE-DEPENDENCIES.md for details on which packages redistribute this library.)

    Fixed

    • [CRuby] Nokogiri::XML::Namespace objects, when compacted, update their internal struct's reference to the Ruby object wrapper. Previously, with GC compaction enabled, a segmentation fault was possible after compaction was triggered. [#2658] (Thanks, @​eightbitraptor and @​peterzhu2118!)
    • [CRuby] Document#remove_namespaces! now defers freeing the underlying xmlNs struct until the Document is GCed. Previously, maintaining a reference to a Namespace object that was removed in this way could lead to a segfault. [#2658]

    1.13.8 / 2022-07-23

    Deprecated

    • XML::Reader#attribute_nodes is deprecated due to incompatibility between libxml2's xmlReader memory semantics and Ruby's garbage collector. Although this method continues to exist for backwards compatibility, it is unsafe to call and may segfault. This method will be removed in a future version of Nokogiri, and callers should use #attribute_hash instead. [#2598]

    Improvements

    • XML::Reader#attribute_hash is a new method to safely retrieve the attributes of a node from XML::Reader. [#2598, #2599]

    Fixed

    • [CRuby] Calling XML::Reader#attributes is now safe to call. In Nokogiri <= 1.13.7 this method may segfault. [#2598, #2599]

    1.13.7 / 2022-07-12

    Fixed

    XML::Node objects, when compacted, update their internal struct's reference to the Ruby object wrapper. Previously, with GC compaction enabled, a segmentation fault was possible after compaction was triggered. [#2578] (Thanks, @​eightbitraptor!)

    Commits
    • 897759c version bump to v1.13.9
    • aeb1ac3 doc: update CHANGELOG
    • c663e49 Merge pull request #2671 from sparklemotion/flavorjones-update-zlib-1.2.13_v1...
    • 212e07d ext: hack to cross-compile zlib v1.2.13 on darwin
    • 76dbc8c dep: update zlib to v1.2.13
    • 24e3a9c doc: update CHANGELOG
    • 4db3b4d Merge pull request #2668 from sparklemotion/flavorjones-namespace-scopes-comp...
    • 73d73d6 fix: Document#remove_namespaces! use-after-free bug
    • 5f58b34 fix: namespace nodes behave properly when compacted
    • b08a858 test: repro namespace_scopes compaction issue
    • Additional commits viewable in compare view


    dependencies 
    opened by dependabot[bot] 0
  • Bump tzinfo from 1.2.9 to 1.2.10 in /docs

    Bumps tzinfo from 1.2.9 to 1.2.10.

    Release notes

    Sourced from tzinfo's releases.

    v1.2.10

    TZInfo v1.2.10 on RubyGems.org

    Changelog

    Sourced from tzinfo's changelog.

    Version 1.2.10 - 19-Jul-2022

    Commits
    • 0814dcd Fix the release date.
    • fd05e2a Preparing v1.2.10.
    • b98c32e Merge branch 'fix-directory-traversal-1.2' into 1.2
    • ac3ee68 Remove unnecessary escaping of + within regex character classes.
    • 9d49bf9 Fix relative path loading tests.
    • 394c381 Remove private_constant for consistency and compatibility.
    • 5e9f990 Exclude Arch Linux's SECURITY file from the time zone index.
    • 17fc9e1 Workaround for 'Permission denied - NUL' errors with JRuby on Windows.
    • 6bd7a51 Update copyright years.
    • 9905ca9 Fix directory traversal in Timezone.get when using Ruby data source
    • Additional commits viewable in compare view


    dependencies 
    opened by dependabot[bot] 0

Releases

0.2.1