Overview


PHOTONAI is a high-level Python API for designing and optimizing machine learning pipelines.

We've created a system in which you can easily select and combine both pre-processing and learning algorithms from state-of-the-art machine learning toolboxes, and arrange them in simple or parallel pipeline data streams.

In addition, you can parametrize your training and testing workflow by choosing cross-validation schemes, performance metrics and hyperparameter optimization metrics from a list of pre-registered options.

Importantly, you can integrate custom solutions not only into your data processing pipeline, but also into any part of the model training and evaluation process, including custom hyperparameter optimization strategies.
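
As an illustration of what such a custom solution can look like, here is a minimal sketch of a scikit-learn-compatible transformer (the class name and the clipping logic are purely illustrative and not part of PHOTONAI itself):

import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin

class MyClippingTransformer(BaseEstimator, TransformerMixin):
    """Hypothetical custom pre-processing step that clips extreme feature values."""

    def __init__(self, clip_value=3.0):
        self.clip_value = clip_value

    def fit(self, X, y=None):
        # nothing needs to be learned in this toy example
        return self

    def transform(self, X):
        # clip every feature to the interval [-clip_value, clip_value]
        return np.clip(X, -self.clip_value, self.clip_value)

According to the documentation, such an element can then be registered with PHOTONAI and used by name like any pre-registered PipelineElement; the exact registration call depends on your PHOTONAI version.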

For a detailed description, visit our website and read the documentation, or read our paper in PLOS ONE.


Getting Started

To use PHOTONAI, you only need your favourite Python IDE ready. Then install the latest stable version via pip:

pip install photonai
# Or try out the latest features if you don't rely on a stable version, using:
pip install --upgrade git+https://github.com/wwu-mmll/[email protected]

You can set up a full-stack machine learning pipeline in a few lines of code:

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import KFold

from photonai.base import Hyperpipe, PipelineElement
from photonai.optimization import FloatRange, Categorical, IntegerRange

# DESIGN YOUR PIPELINE
my_pipe = Hyperpipe('basic_svm_pipe',  # the name of your pipeline
                    # which optimizer PHOTONAI shall use
                    optimizer='sk_opt',
                    optimizer_params={'n_configurations': 25},
                    # the performance metrics of your interest
                    metrics=['accuracy', 'precision', 'recall', 'balanced_accuracy'],
                    # after hyperparameter optimization, this metric declares the winner config
                    best_config_metric='accuracy',
                    # repeat hyperparameter optimization three times
                    outer_cv=KFold(n_splits=3),
                    # test each configuration five times, once per inner fold
                    inner_cv=KFold(n_splits=5),
                    verbosity=1,
                    project_folder='./tmp/')


# first normalize all features
my_pipe.add(PipelineElement('StandardScaler'))

# then reduce the feature dimensionality using a PCA
my_pipe += PipelineElement('PCA', 
                           hyperparameters={'n_components': IntegerRange(5, 20)}, 
                           test_disabled=True)

# engage and optimize the good old SVM for Classification
my_pipe += PipelineElement('SVC', 
                           hyperparameters={'kernel': Categorical(['rbf', 'linear']),
                                            'C': FloatRange(0.5, 2)}, gamma='scale')

# train pipeline
X, y = load_breast_cancer(return_X_y=True)
my_pipe.fit(X, y)
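
Once fitted, the Hyperpipe can be used like a regular estimator. A minimal follow-up sketch for the pipeline above (best_config is assumed to expose the winning configuration):

# predict with the best configuration found during hyperparameter optimization
predictions = my_pipe.predict(X)

# inspect the winning hyperparameter configuration (assumed attribute)
print(my_pipe.best_config)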

Features

Easy access to established ML implementations

We pre-registered diverse preprocessing and learning algorithms from state-of-the-art toolboxes, e.g. scikit-learn, Keras and imbalanced-learn, so you can rapidly build custom pipelines.
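
Pre-registered elements are addressed simply by name, as in the example above. A small sketch (assuming 'RandomForestClassifier' is registered like the other scikit-learn estimators):

from photonai.base import PipelineElement
from photonai.optimization import IntegerRange

# a pre-registered scikit-learn estimator, referenced by its class name
forest = PipelineElement('RandomForestClassifier',
                         hyperparameters={'n_estimators': IntegerRange(10, 100)})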

Hyperparameter Optimization

With PHOTONAI you can seamlessly switch between diverse hyperparameter optimization strategies, such as (random) grid search or Bayesian optimization (scikit-optimize, SMAC3).
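
Switching strategies only changes the optimizer arguments passed to the Hyperpipe. A hedged sketch reusing the setup from the Getting Started example (only 'sk_opt' appears above; 'random_grid_search' is an assumed optimizer name):

from sklearn.model_selection import KFold
from photonai.base import Hyperpipe

# same design as above, but optimized with a random grid search
# instead of Bayesian optimization via scikit-optimize
my_pipe = Hyperpipe('random_search_pipe',
                    optimizer='random_grid_search',
                    optimizer_params={'n_configurations': 10},
                    metrics=['accuracy'],
                    best_config_metric='accuracy',
                    outer_cv=KFold(n_splits=3),
                    inner_cv=KFold(n_splits=5),
                    project_folder='./tmp/')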

Extended ML Pipeline

You can build custom sequences of processing and learning algorithms with a simple syntax. PHOTONAI offers extended pipeline functionality such as parallel sequences, custom callbacks in-between pipeline elements, AND- and OR-operations, as well as the possibility to flexibly position data augmentation, class balancing or learning algorithms anywhere in the pipeline.
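
As an illustration of the OR- and AND-style operations, a minimal sketch assuming the Switch (OR) and Stack (AND) elements described in the documentation:

from photonai.base import PipelineElement, Switch, Stack

# OR-operation: let the optimizer choose between two competing estimators
estimator_switch = Switch('estimator_switch',
                          [PipelineElement('SVC'),
                           PipelineElement('RandomForestClassifier')])

# AND-operation: run two elements in parallel and concatenate their outputs
feature_stack = Stack('feature_stack',
                      [PipelineElement('PCA'),
                       PipelineElement('StandardScaler')])

# both can then be added to a Hyperpipe like any other element, e.g.:
# my_pipe += feature_stack
# my_pipe += estimator_switch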

Model Sharing

PHOTONAI provides a standardized format for sharing and loading optimized pipelines across platforms with only one line of code.
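
Loading a shared model then follows the pattern also shown in the comments below; a minimal sketch (the .photon file path is a placeholder for the file produced in your project folder):

from sklearn.datasets import load_breast_cancer
from photonai.base import Hyperpipe

X, y = load_breast_cancer(return_X_y=True)

# load an optimized pipeline that was shared as a .photon file
loaded_pipe = Hyperpipe.load_optimum_pipe('mymodel.photon')
predictions = loaded_pipe.predict(X)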

Automation

While you concentrate on selecting appropriate processing steps, learning algorithms, hyperparameters and training parameters, PHOTONAI automates the nested cross-validated optimization and evaluation loop for any custom pipeline.

Results Visualization

PHOTONAI comes with extensive logging of all information in the training, testing and hyperparameter optimization process. In addition, optimum performances and the hyperparameter optimization progress are visualized in the PHOTONAI Explorer.

For more use cases, examples, contribution guidelines and API details, visit our website:

www.photon-ai.com

Comments
  • Question on the Over/Under sampling on validation and test splits

    Hi, I defined a classification hyperpipe that involves a PipelineElement that oversamples or undersamples the input dataset. I would like to know if this step is applied only on the training split of the nested cross-validation, or also on the validation and test splits? Actually, I would like to know if the metrics computed to select the best models and to evaluate them are only computed on the "real" samples and not on the "real + fake" ones (in case of oversampling), and if they are computed on all the samples and not only the selected ones in case of undersampling. Do you know the answer, or maybe a document where I can search for the answer? I have not found it in the documentation, but maybe I searched badly. Thanks a lot! Clément

    opened by brosscle 2
  • Bump dask from 2.30.0 to 2021.10.0

    Bumps dask from 2.30.0 to 2021.10.0.


    dependencies 
    opened by dependabot[bot] 2
  • FIX #44 ImbalancedDataTransformer

    Associated to #44.

    ImbalancedDataTransformer did not subsequently adjust the method name; this should be fixed by this PR. Furthermore, the kwargs input was replaced by a config parameter, which can now set the settings specifically for each strategy. It is important that this is not used within the hyperparameters, as it is not certain which parameter will be set first. I have added this to the comments in the relevant places. Is there a better way to do this in this context?

    opened by lucasplagwitz 1
  • Imbalanced Data Transform always set to 'RandomUnderSampler' method

    Hi, and thanks for your work on PhotonAI! I created a hyperpipe and added a PipelineElement for imbalanced data transformations, as explained at https://wwu-mmll.github.io/photonai/examples/imbalanced_data/ . Unfortunately, when I look at my hyperpipe elements after the creation, I get the element PipelineElement(method_name='RandomUnderSampler', name='ImbalancedDataTransformer') for every selected method, even when selecting an oversampling method, which is quite embarrassing... Do you have any idea how to solve this issue and effectively add the selected method as an element to my hyperpipe? I am using version 2.1.0; maybe this issue has been addressed in version 2.2.0? Thanks in advance for your help! Clément

    opened by brosscle 1
  • Test-Adaptations for Dask 2020.12.0-2021.2.0

    Move function self.create_hyperpipe() to create_hyperpipe() to enable a pickle-based serialization for newer versions of dask/distributed.

    Works only for versions != 2021.3.x -> is probably related to dask/distributed#4645 and solved with 2021.04.0.

    Small skopt adaptations: use defaults (base_estimator: ET->GP)

    opened by lucasplagwitz 1
  • Permutation test: (1) document too large to save to MongoDB (2) _calculate_results: mongodb path set to trap-umbriel

    While running a permutation test with Photon, the following issues occurred.

    Issue 1: While running the test (implemented as suggested in the documentation, https://www.photon-ai.com/documentation/permutation_test), the script is unable to write the results of the very first permutation (y=ytrue) into the MongoDB since the file size is too large. Since the PermutationTest relies on the MongoDB entries, the process finishes with code 1.

    As a workaround, reducing the number of cv folds or the number of hyperparameter configurations (hence reducing the data to be stored in the MongoDB) solves the problem and the test will continue to run without any issues. The most obvious solution to me would be to forgo saving certain data into the MongoDB (e.g. feature importances, predictions). To my understanding this had been implemented in the past but was discontinued (commit d5ecd1b, 24.09.2019).

    Issue 2: While calculating the permutation results, the server cannot be found. It seems as if, in the _calculate_results function (line 181), the server is set to mongodb_path="mongodb://trap-umbriel:27017/photon_results" and not to the server that has been set by the user.

    photonai version: 1.1.0 (develop tree); OS: macOS 10.15.4; n samples: 1650; features (predictors): 55

    Error log, Issue 1:

    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/photonai/processing/permutation_test.py", line 90, in fit
      self.pipe.results.save()
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymodm/base/models.py", line 476, in save
      self.to_son(), upsert=True)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/collection.py", line 930, in replace_one
      collation=collation, session=session),
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/collection.py", line 856, in _update_retryable
      _update, session)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1491, in _retryable_write
      return self._retry_with_session(retryable, func, s, None)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1384, in _retry_with_session
      return func(session, sock_info, retryable)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/collection.py", line 852, in _update
      retryable_write=retryable_write)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/collection.py", line 822, in _update
      retryable_write=retryable_write).copy()
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/pool.py", line 618, in command
      self._raise_connection_failure(error)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/pool.py", line 613, in command
      user_fields=user_fields)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/network.py", line 143, in command
      name, size, max_bson_size + message._COMMAND_OVERHEAD)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/message.py", line 1077, in _raise_document_too_large
      raise DocumentTooLarge("%r command document too large" % (operation,))
    pymongo.errors.DocumentTooLarge: 'update' command document too large

    Error log, Issue 2:

    Traceback (most recent call last):
      perm_tester.fit(X, y)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/photonai/processing/permutation_test.py", line 143, in fit
      perm_result = self._calculate_results(self.permutation_id)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/photonai/processing/permutation_test.py", line 185, in _calculate_results
      mother_permutation = PermutationTest.find_reference(mongodb_path, permutation_id)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/photonai/processing/permutation_test.py", line 291, in find_reference
      mother_permutation = _find_mummy(permutation_id)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/photonai/processing/permutation_test.py", line 284, in _find_mummy
      'computation_completed': True}).order_by([('computation_start_time', DESCENDING)]).first()
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymodm/queryset.py", line 127, in first
      return next(iter(self.limit(-1)))
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymodm/queryset.py", line 543, in
      return (to_instance(doc) for doc in self._get_raw_cursor())
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/cursor.py", line 1156, in next
      if len(self.__data) or self._refresh():
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/cursor.py", line 1050, in _refresh
      self.__session = self.__collection.database.client._ensure_session()
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1810, in _ensure_session
      return self.__start_session(True, causal_consistency=False)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1763, in __start_session
      server_session = self._get_server_session()
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/mongo_client.py", line 1796, in _get_server_session
      return self._topology.get_server_session()
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/topology.py", line 485, in get_server_session
      None)
    File "/Users/michael/opt/anaconda3/envs/photon/lib/python3.7/site-packages/pymongo/topology.py", line 209, in _select_servers_loop
      self._error_message(selector))
    pymongo.errors.ServerSelectionTimeoutError: trap-umbriel:27017: [Errno 8] nodename nor servname provided, or not known

    opened by MSchmitt-git 1
  • Could not load meta information for optimum pipe

    I am trying to use a model I was given by a colleague but keep coming up against an error finding base.PhotonBase.

    Here is the code I ran:

    from photonai.base import Hyperpipe
    best_model_file = 'mymodel.photon'
    my_model = Hyperpipe.load_optimum_pipe(best_model_file)

    where mymodel.photon sits in a folder that also includes a "photon_best_model" folder with __optimum_pipe_0_SimpleImputer.pkl, _optimum_pipe_1_StandardScaler.pkl, _optimum_pipe_2_Ridge.pkl, and optimum_pipe_blueprint.pkl. I requested these files from my colleague because the errors I was getting indicated they needed to be there to load the model.

    Running this I get the following error:

    YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details. defaults = yaml.load(f)
    /Users/lee_jollans/anaconda3/lib/python3.7/site-packages/sklearn/externals/joblib/__init__.py:15: FutureWarning: sklearn.externals.joblib is deprecated in 0.21 and will be removed in 0.23. Please import this functionality directly from joblib, which can be installed with: pip install joblib. If this warning is raised when loading pickled models, you may need to re-serialize those models with scikit-learn 0.21+. warnings.warn(msg, category=FutureWarning)
    Could not load meta information for optimum pipe

    Traceback (most recent call last):
    File "tryphoton.py", line 10, in <module>
      my_model = Hyperpipe.load_optimum_pipe(best_model_file)
    File "/Users/lee_jollans/anaconda3/lib/python3.7/site-packages/photonai/base/hyperpipe.py", line 1105, in load_optimum_pipe
      return PhotonModelPersistor.load_optimum_pipe(file, password)
    File "/Users/lee_jollans/anaconda3/lib/python3.7/site-packages/photonai/base/hyperpipe.py", line 1444, in load_optimum_pipe
      element_list = PhotonModelPersistor.load_elements(folder=load_folder)
    File "/Users/lee_jollans/anaconda3/lib/python3.7/site-packages/photonai/base/hyperpipe.py", line 1410, in load_elements
      loaded_pipeline_element = joblib.load(os.path.join(folder, element_info['filename'] + '.pkl'))
    File "/Users/lee_jollans/anaconda3/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 605, in load
      obj = _unpickle(fobj, filename, mmap_mode)
    File "/Users/lee_jollans/anaconda3/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 529, in _unpickle
      obj = unpickler.load()
    File "/Users/lee_jollans/anaconda3/lib/python3.7/pickle.py", line 1085, in load
      dispatch[key[0]](self)
    File "/Users/lee_jollans/anaconda3/lib/python3.7/pickle.py", line 1373, in load_global
      klass = self.find_class(module, name)
    File "/Users/lee_jollans/anaconda3/lib/python3.7/pickle.py", line 1423, in find_class
      __import__(module, level=0)
    ModuleNotFoundError: No module named 'photonai.base.PhotonBase'

    My colleague had originally noted that Hyperpipe is imported using from photonai.base.PhotonBase import Hyperpipe, which also did not work because I got the PhotonBase error.

    Note, I am running macOS Mojave and python 3.7.3

    Any help is greatly appreciated!

    opened by ljollans 8
  • Suggestions  in fabolas.py

    Hi, I’m a student and have recently been learning about Bayesian optimization. I’ve been trying to make fabolas compatible with George 0.3.1 and I think I did it. I hope I can give you some suggestions:

    1. I suggest using a stationary kernel (e.g. an SE kernel) instead of a non-stationary kernel (LinearKernel in Fabolas.py), because when you run get_incumbent() (in Fabolas.py), you project the environment variables to 1, which then change to 0 because of _quadratic_bf(). Then you run predict() in get_incumbent(), and the parameters of predict() will be a matrix with env=0 (e.g. (a1,b1,0), (a2,b2,0), (a3,b3,0), ...). If you use LinearKernel, the variance from predict() will be all zeros and the mean from predict() will be a vector with identical elements. As a result, epmgp.py cannot work.

    2. The parameter of EnvPrior(), “n_lr=degree+1”, can be changed to “n_lr=len(env_kernel)”.

    opened by cjfcsjt 1
Releases(2.2.1)
  • 2.2.1(Aug 3, 2022)

  • 2.2.0(Nov 5, 2021)

  • 2.1.0(Mar 5, 2021)

    Documentation: https://wwu-mmll.github.io/photonai/

    Changelog

    Features:

    • enable integration of custom metrics
    • integrate automatic generation of learning curves
    • integrate nevergrad hyperparameter optimization strategy
    • add new hyperparameter optimizer designed to compare different (learning) algorithms in a Switch (OR-element)
    • add functionality to automatically find, analyze and compare the best config for each estimator (Switch) per outer fold
    • added scorer method to hyperpipe that scores with best_config_metric, therefore the Hyperpipe object can be used with scikit-learn functions.
    • integrated sklearn permutation feature importances into the workflow
    • disable usage of test samples with the parameter use_test_set in hyperpipe
    • removed the need to import Output Settings class to declare the project_folder -> moved to Hyperpipe constructor
    • added inverse_transform methods to several PHOTONAI algorithm implementations

    Development:

    • integrate documentation into github repo based on mkdocs and material theme: https://wwu-mmll.github.io/photonai/
    • switch continuous integration to GitHub Actions: https://github.com/wwu-mmll/photonai/actions
    • code clean ups
  • 2.0.0(Jul 14, 2020)

    • removed the Investigator; instead we offer the Explorer, a JavaScript web application to visualize and analyze the results
    • moved the photon.neuro module to a separate package called photonai_neuro
    • updated the repository structure, moved tests and examples to the root directory
    • consistently named the repository photonai everywhere
    • included a continuous integration pipeline with travis-ci
  • 0.4.0(Feb 21, 2019)

    Starting with this release, you should be able to install PHOTON via pip. Issues with installing the requirements have been resolved. This release includes the PHOTON Investigator to visualize the results.

Owner
Medical Machine Learning Lab - University of Münster
LowRankModels.jl is a julia package for modeling and fitting generalized low rank models.

LowRankModels.jl LowRankModels.jl is a Julia package for modeling and fitting generalized low rank models (GLRMs). GLRMs model a data array by a low r

Madeleine Udell 183 Dec 17, 2022
A short code in python, Enchpyter, is able to encrypt and decrypt words as you determine, of course

Enchpyter Enchpyter is a program do encrypt and decrypt any word you want (just letters). You enter how many letters jumps and write the word, so, the

João Assalim 2 Oct 10, 2022
Lowest memory consumption and second shortest runtime in NTIRE 2022 challenge on Efficient Super-Resolution

FMEN Lowest memory consumption and second shortest runtime in NTIRE 2022 on Efficient Super-Resolution. Our paper: Fast and Memory-Efficient Network T

33 Dec 01, 2022
This is the official source code of "BiCAT: Bi-Chronological Augmentation of Transformer for Sequential Recommendation".

BiCAT This is our TensorFlow implementation for the paper: "BiCAT: Sequential Recommendation with Bidirectional Chronological Augmentation of Transfor

John 15 Dec 06, 2022
Learning Multiresolution Matrix Factorization and its Wavelet Networks on Graphs

Project Learning Multiresolution Matrix Factorization and its Wavelet Networks on Graphs, https://arxiv.org/pdf/2111.01940.pdf. Authors Truong Son Hy

5 Jun 28, 2022
Code accompanying the paper "How Tight Can PAC-Bayes be in the Small Data Regime?"

How Tight Can PAC-Bayes be in the Small Data Regime? This is the code to reproduce all experiments for the following paper: @inproceedings{Foong:2021:

5 Dec 21, 2021
This repository contains the implementation of Deep Detail Enhancment for Any Garment proposed in Eurographics 2021

Deep-Detail-Enhancement-for-Any-Garment Introduction This repository contains the implementation of Deep Detail Enhancment for Any Garment proposed in

40 Dec 13, 2022
Code of paper: "DropAttack: A Masked Weight Adversarial Training Method to Improve Generalization of Neural Networks"

DropAttack: A Masked Weight Adversarial Training Method to Improve Generalization of Neural Networks Abstract: Adversarial training has been proven to

倪仕文 (Shiwen Ni) 58 Nov 10, 2022
A PyTorch implementation of a Factorization Machine module in cython.

fmpytorch A library for factorization machines in pytorch. A factorization machine is like a linear model, except multiplicative interaction terms bet

Jack Hessel 167 Jul 06, 2022
Hierarchical Aggregation for 3D Instance Segmentation (ICCV 2021)

HAIS Hierarchical Aggregation for 3D Instance Segmentation (ICCV 2021) by Shaoyu Chen, Jiemin Fang, Qian Zhang, Wenyu Liu, Xinggang Wang*. (*) Corresp

Hust Visual Learning Team 145 Jan 05, 2023
VIsually-Pivoted Audio and(N) Text

VIP-ANT: VIsually-Pivoted Audio and(N) Text Code for the paper Connecting the Dots between Audio and Text without Parallel Data through Visual Knowled

Yän.PnG 16 Nov 04, 2022
Text-to-Image generation

Generate vivid Images for Any (Chinese) text CogView is a pretrained (4B-param) transformer for text-to-image generation in general domain. Read our p

THUDM 1.3k Dec 29, 2022
Code for the Paper: Conditional Variational Capsule Network for Open Set Recognition

Conditional Variational Capsule Network for Open Set Recognition This repository hosts the official code related to "Conditional Variational Capsule N

Guglielmo Camporese 35 Nov 21, 2022
Offical implementation of Shunted Self-Attention via Multi-Scale Token Aggregation

Shunted Transformer This is the offical implementation of Shunted Self-Attention via Multi-Scale Token Aggregation by Sucheng Ren, Daquan Zhou, Shengf

156 Dec 27, 2022
GPU implementation of $k$-Nearest Neighbors and Shared-Nearest Neighbors

GPU implementation of kNN and SNN GPU implementation of $k$-Nearest Neighbors and Shared-Nearest Neighbors Supported by numba cuda and faiss library E

Hyeon Jeon 7 Nov 23, 2022
PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation

PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation Created by Charles R. Qi, Hao Su, Kaichun Mo, Leonidas J. Guibas from Sta

Charles R. Qi 4k Dec 30, 2022
Read and write layered TIFF ImageSourceData and ImageResources tags

Read and write layered TIFF ImageSourceData and ImageResources tags Psdtags is a Python library to read and write the Adobe Photoshop(r) specific Imag

Christoph Gohlke 4 Feb 05, 2022
The code is for the paper "A Self-Distillation Embedded Supervised Affinity Attention Model for Few-Shot Segmentation"

SD-AANet The code is for the paper "A Self-Distillation Embedded Supervised Affinity Attention Model for Few-Shot Segmentation" [arxiv] Overview confi

cv516Buaa 9 Nov 07, 2022
The code for Bi-Mix: Bidirectional Mixing for Domain Adaptive Nighttime Semantic Segmentation

BiMix The code for Bi-Mix: Bidirectional Mixing for Domain Adaptive Nighttime Semantic Segmentation arxiv Framework: visualization results: Requiremen

stanley 18 Sep 18, 2022
DI-HPC is an acceleration operator component for general algorithm modules in reinforcement learning algorithms

DI-HPC: Decision Intelligence - High Performance Computation DI-HPC is an acceleration operator component for general algorithm modules in reinforceme

OpenDILab 185 Dec 29, 2022