An implementation of Deep Forest 2021.2.1.

Overview

Deep Forest (DF) 21


DF21 is an implementation of Deep Forest 2021.2.1. It is designed to have the following advantages:

  • Powerful: Better accuracy than existing tree-based ensemble methods.
  • Easy to Use: Less effort on tuning parameters.
  • Efficient: Fast training speed and high efficiency.
  • Scalable: Capable of handling large-scale data.

Wherever you would use tree-based machine learning approaches such as Random Forest or GBDT, DF21 may offer a powerful new option.

For a quick start, please refer to How to Get Started. For detailed guidance on parameter tuning, please refer to Parameters Tuning.

Installation

The package is available on PyPI and can be installed with:

pip install deep-forest

Quickstart

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

from deepforest import CascadeForestClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
model = CascadeForestClassifier(random_state=1)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
acc = accuracy_score(y_test, y_pred) * 100
print("\nTesting Accuracy: {:.3f} %".format(acc))
>>> Testing Accuracy: 98.667 %
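
Regression follows the same workflow. Below is a minimal sketch, assuming that CascadeForestRegressor (used in several issues and releases further down) exposes the same fit/predict interface as the classifier:

from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

from deepforest import CascadeForestRegressor

# Assumption: CascadeForestRegressor mirrors the classifier's fit/predict API.
X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
model = CascadeForestRegressor(random_state=1)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print("\nTesting MSE: {:.3f}".format(mse))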

Reference

@article{zhou2019deep,
    title={Deep forest},
    author={Zhi-Hua Zhou and Ji Feng},
    journal={National Science Review},
    volume={6},
    number={1},
    pages={74--86},
    year={2019}}

@inproceedings{zhou2017deep,
    title={{Deep Forest:} Towards an alternative to deep neural networks},
    author={Zhi-Hua Zhou and Ji Feng},
    booktitle={IJCAI},
    pages={3553--3559},
    year={2017}}

Acknowledgement

The lead developer and maintainer of DF21 is Mr. Yi-Xuan Xu. Before its release, DF21 had been used internally in the LAMDA Group, Nanjing University, China.

Comments
  • Custom CascadeForestClassifier

    Custom CascadeForestClassifier

    Hey,

    Thanks for your awesome repo.

    I have a question, if you don't mind: could you please give an example of how to replace the RandomForestClassifier and ExtraTreesClassifier inside the CascadeForestClassifier?
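
    A hedged sketch of one way this might look, based on the set_estimator API shown in the "[ENH] Support customized base estimator and predictor" entry further down; the particular scikit-learn estimators used here are only illustrative:

    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
    from sklearn.model_selection import train_test_split

    from deepforest import CascadeForestClassifier

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

    # Replace the default forests with your own scikit-learn estimators,
    # which are then used in every cascade layer.
    estimators = [
        RandomForestClassifier(n_estimators=100, random_state=1),
        ExtraTreesClassifier(n_estimators=100, random_state=2),
    ]

    model = CascadeForestClassifier(random_state=1)
    model.set_estimator(estimators)
    model.fit(X_train, y_train)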

    opened by Maryom 31
  • Starting the interpretability of the Deep Forest using SHAP

    Starting the interpretability of the Deep Forest using SHAP

    Hey,

    This is an initial implementation; however, I'm not sure it will work, since we get the following error:

    AttributeError: 'CascadeForestClassifier' object has no attribute 'estimators_'

    What do you think @xuyxu ?

    opened by Maryom 25
  • Error: could not allocate 0 bytes

    Error: could not allocate 0 bytes

    While using this package, I ran into the following problem. As far as I can tell, there is still plenty of available memory, so what could be causing it?

      File "deepforest/tree/_tree.pyx", line 123, in deepforest.tree._tree.DepthFirstTreeBuilder.build
      File "deepforest/tree/_tree.pyx", line 256, in deepforest.tree._tree.DepthFirstTreeBuilder.build
      File "deepforest/tree/_tree.pyx", line 480, in deepforest.tree._tree.Tree._resize_node_c
      File "deepforest/tree/_utils.pyx", line 34, in deepforest.tree._utils.safe_realloc
    MemoryError: could not allocate 0 bytes
    
    bug 
    opened by hengzhe-zhang 23
  • Add support for pandas.DataFrame and list in `fit`

    Add support for pandas.DataFrame and list in `fit`

    Currently, the fit method only supports np.array input. However, most ML algorithms with a scikit-learn-compatible API (e.g. XGBoost, NGBoost) accept a DataFrame, a list, or a numpy array of predictors (n x p) in numeric format via sklearn.utils.check_array. This PR makes Deep Forest consistent with those algorithms so that it is easier to use within integrated machine learning frameworks (e.g. PyCaret).
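
    Until such support lands, a minimal workaround sketch (my own suggestion, not part of the library): convert the DataFrame to a NumPy array before calling fit.

    import numpy as np
    import pandas as pd
    from sklearn.datasets import load_digits

    from deepforest import CascadeForestClassifier

    # Suppose the predictors live in a pandas DataFrame.
    X, y = load_digits(return_X_y=True)
    X_df = pd.DataFrame(X)

    # Workaround: hand plain NumPy arrays to fit() until DataFrame input is supported.
    model = CascadeForestClassifier(random_state=1)
    model.fit(X_df.to_numpy(), np.asarray(y))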

    opened by IncubatorShokuhou 13
  • [BUG] `CascadeForestRegressor` somehow cannot be inserted into a DataFrame

    [BUG] `CascadeForestRegressor` somehow cannot be inserted into a DataFrame

    Describe the bug CascadeForestRegressor somehow cannot be inserted into a DataFrame

    To Reproduce

    import pandas as pd
    from deepforest import CascadeForestRegressor
    from ngboost import NGBRegressor
    
    ngr = NGBRegressor()  # ngboost regressor, for example; xgb, lgb should also be no problem.
    cfr = CascadeForestRegressor()
    df = pd.DataFrame()
    
    # somehow OK
    df.insert(0, "ngr", [ngr])
    # somehow error
    df.insert(0, "cf", [cfr])
    

    Expected behavior No error

    Additional context

    ValueError                                Traceback (most recent call last)
    <ipython-input-32-ab0139d10254> in <module>
    ----> 1 df.insert(0, "cf", [cforest])
    
    /mnt/hdd2/lvhao/miniconda3/envs/pycaret/lib/python3.7/site-packages/pandas/core/frame.py in insert(self, loc, column, value, allow_duplicates)
       3760             )
       3761         self._ensure_valid_index(value)
    -> 3762         value = self._sanitize_column(column, value, broadcast=False)
       3763         self._mgr.insert(loc, column, value, allow_duplicates=allow_duplicates)
       3764 
    
    /mnt/hdd2/lvhao/miniconda3/envs/pycaret/lib/python3.7/site-packages/pandas/core/frame.py in _sanitize_column(self, key, value, broadcast)
       3900             if not isinstance(value, (np.ndarray, Index)):
       3901                 if isinstance(value, list) and len(value) > 0:
    -> 3902                     value = maybe_convert_platform(value)
       3903                 else:
       3904                     value = com.asarray_tuplesafe(value)
    
    /mnt/hdd2/lvhao/miniconda3/envs/pycaret/lib/python3.7/site-packages/pandas/core/dtypes/cast.py in maybe_convert_platform(values)
        110     """ try to do platform conversion, allow ndarray or list here """
        111     if isinstance(values, (list, tuple, range)):
    --> 112         values = construct_1d_object_array_from_listlike(values)
        113     if getattr(values, "dtype", None) == np.object_:
        114         if hasattr(values, "_values"):
    
    /mnt/hdd2/lvhao/miniconda3/envs/pycaret/lib/python3.7/site-packages/pandas/core/dtypes/cast.py in construct_1d_object_array_from_listlike(values)
       1636     # making a 1D array that contains list-likes is a bit tricky:
       1637     result = np.empty(len(values), dtype="object")
    -> 1638     result[:] = values
       1639     return result
       1640 
    
    /mnt/hdd2/lvhao/miniconda3/envs/pycaret/lib/python3.7/site-packages/deepforest/cascade.py in __getitem__(self, index)
        518 
        519     def __getitem__(self, index):
    --> 520         return self._get_layer(index)
        521 
        522     def _get_n_output(self, y):
    
    /mnt/hdd2/lvhao/miniconda3/envs/pycaret/lib/python3.7/site-packages/deepforest/cascade.py in _get_layer(self, layer_idx)
        561             logger.debug("self.n_layers_ = "+ str(self.n_layers_))
        562             logger.debug("layer_idx = "+ str(layer_idx))
    --> 563             raise ValueError(msg.format(self.n_layers_ - 1, layer_idx))
        564 
        565         layer_key = "layer_{}".format(layer_idx)
    
    ValueError: The layer index should be in the range [0, 1], but got 2 instead.
    

    This bug can be simply fixed by changing if not 0 <= layer_idx < self.n_layers_: to if not 0 <= layer_idx <= self.n_layers_:, but I still don't know the cause of this error or whether this fix is correct.

    needtriage 
    opened by IncubatorShokuhou 10
  • [Question] use custom estimator to tackle imbalanced datasets

    [Question] use custom estimator to tackle imbalanced datasets

    Hi All,

    As I mentioned previously in another post, I want to express my gratitude for your amazing research. I am delighted you found time to add support for custom estimators to your library. However, I am having difficulty with the following:

    Assume I develop the following implementation (using imblearn) and obtain an AUROC score of 0.62:

    model = BalancedRandomForestClassifier(random_state=global_seed_random_state,
                                           class_weight="balanced_subsample",
                                           n_jobs=-1,
                                           replacement=True,
                                           )
    
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    show_output(model, X_test, y_test, y_pred)
    
    
    Classification_report:
                  precision    recall  f1-score   support
    
               0       0.97      0.61      0.75       499
               1       0.08      0.63      0.14        27
    
        accuracy                           0.61       526
       macro avg       0.52      0.62      0.45       526
    weighted avg       0.92      0.61      0.72       526
    
    ROC AUC Score:
    0.6204260372597047
    

    According to the reviews I've been reading of your original paper, if we get good results with RF and similar classifiers, it is worthwhile to try Deep Forest with the classifier that worked well as the base learner. However, I attempted to use custom estimators via the following implementation:

    model = CascadeForestClassifier(
        random_state=global_seed_random_state,
    )
    
    main_estimators = [BalancedRandomForestClassifier(
        class_weight="balanced_subsample",
        n_jobs=-1,
        replacement=True,
        random_state=global_seed_random_state,
    ) for _ in range(2)]
    
    
    diverse_estimators = [BalancedRandomForestClassifier(
        class_weight="balanced_subsample",
        n_jobs=-1,
        replacement=True,
        random_state=global_seed_random_state,
    ) for _ in range(2)]
    
    estimators = main_estimators + diverse_estimators
    
    # layer
    model.set_estimator(estimators)
    

    The results, however, are about 10% worse, with an AUROC of 0.555. Note: diverse_estimators appears above because I also tried ExtraTrees or XGBoost instead of a second set of BalancedRandomForestClassifiers. Could you please point me in the right direction? What did I do wrong? From your perspective, what kind of diversified classifier should I use? Note 2: an AUROC above 0.6 is quite promising for my current application.

    Thank you very much in advance for your help. Have a great day,

    opened by simonprovost 8
  • Survival models

    Survival models

    Hi maintainer,

    I am wondering whether it is possible to cascade a random survival forest (e.g. a sksurv model) instead of RF in your deep forest model. As in #48, it seems that the only supported model types are classification and regression (or did I miss some part of the tutorial docs?).

    Thanks.

    feature request 
    opened by yunwezhang 8
  • [ENH] Support customized base estimator and predictor

    [ENH] Support customized base estimator and predictor

    resolves #29 #26

    Steps

    • [x] Implement K-Fold wrapper for base estimators
    • [x] Implement customized cascade layer
    • [x] Implement set_estimator and set_predictor for the model
    • [x] Add unit tests
    • [x] Add backward compatibility
    • [x] Add documentation and working examples

    Code Snippet

    from deepforest import CascadeForestClassifier
    
    model = CascadeForestClassifier()
    
    # New Steps
    estimator_1, estimator_2 = your_estimator(), your_estimator()
    model.set_estimator(estimator=[estimator_1, estimator_2],  # a list of your base estimators
                        n_splits=5,  # the number of folds
                        oob_approx=False,  # whether to use out-of-bag approximation
                        random_state=None)  # random state used for base estimators
    
    model.set_predictor(predictor=your_predictor)  # an instantiated object of your predictor
    
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    
    feature request 
    opened by xuyxu 8
  • Label encoder for the case where y is 1-D.

    Label encoder for the case where y is 1-D.

    Resolved issue #13

    This is a very naive label encoder implemented with sklearn.preprocessing.LabelEncoder.

    • [x] single output (1-D) partial mode
    • [x] single output (1-D) full mode
    • [x] unit test
    opened by NiMaZi 8
  • can't install package use conda env

    can't install package use conda env

    ERROR: Could not find a version that satisfies the requirement deep-forest (from versions: none)
    ERROR: No matching distribution found for deep-forest

    System: macOS
    Python version: 3.8.5
    pip version: 20.2.4

    opened by morestart 7
  • Buffer dtype mismatch

    Buffer dtype mismatch

    An error occurred when training on my dataset:

      File "deepforest/_cutils.pyx", line 59, in deepforest._cutils._map_to_bins
      File "deepforest/_cutils.pyx", line 76, in deepforest._cutils._map_to_bins
    ValueError: Buffer dtype mismatch, expected 'const X_DTYPE_C' but got 'long'
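
    The message suggests the binner received integer ("long") data where it expected a float buffer (X_DTYPE_C). A hedged workaround sketch, assuming the expected dtype is 64-bit float, is to cast the features before fitting:

    import numpy as np

    from deepforest import CascadeForestClassifier

    # Hypothetical integer-typed feature matrix of the kind that triggers the mismatch.
    X = np.random.randint(0, 10, size=(200, 5))   # dtype is int64 ("long")
    y = np.random.randint(0, 2, size=200)

    # Assumption: casting to float64 satisfies the binner's expected dtype.
    model = CascadeForestClassifier(random_state=1)
    model.fit(X.astype(np.float64), y)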

    bug 
    opened by Mr-memorandum 7
  • pip install deep-forest didn't work in wsl2

    pip install deep-forest didn't work in wsl2

    I was trying to install the package using WSL2, but the terminal raised an error:

    ERROR: Could not find a version that satisfies the requirement deep-forest (from versions: none)
    ERROR: No matching distribution found for deep-forest

    I can't find any related articles or even a Stack Overflow post that solves this. Please help me.

    opened by romfahrury 4
  • How to apply shap model to DF model to interpret features?

    How to apply shap model to DF model to interpret features?

    How can I apply SHAP to a DF model to interpret features? I tried to apply it directly, but it reported that the model is not supported by the SHAP package (https://github.com/slundberg/shap). I suggest that the author improve the interpretability of the DF model, thanks.
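
    One model-agnostic route worth trying (a sketch, not an official recipe): SHAP's KernelExplainer only needs a prediction function, so it can wrap the fitted model's predict_proba. The snippet below reuses the fitted model and data from the quickstart above and is slow on large datasets.

    import shap

    # Model-agnostic explainer built around the fitted model's probability output.
    background = shap.sample(X_train, 100)            # small background sample
    explainer = shap.KernelExplainer(model.predict_proba, background)
    shap_values = explainer.shap_values(X_test[:10])  # explain a few test rows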

    opened by Leopoldxxx 2
  • importing error

    importing error

    Got this error when importing:

    ImportError                               Traceback (most recent call last)
    Input In [59], in <cell line: 24>()
         22 import time
         23 import io
    ---> 24 from deepforest import CascadeForestRegressor
         25 import joblib
         26 from sklearn.utils.fixes import joblib

    File ~\anaconda3\lib\site-packages\deepforest\__init__.py:1
    ----> 1 from .cascade import CascadeForestClassifier, CascadeForestRegressor
          2 from .forest import RandomForestClassifier, RandomForestRegressor
          3 from .forest import ExtraTreesClassifier, ExtraTreesRegressor

    File ~\anaconda3\lib\site-packages\deepforest\cascade.py:17
         15 from . import _utils
         16 from . import _io
    ---> 17 from ._layer import (
         18     ClassificationCascadeLayer,
         19     RegressionCascadeLayer,
         20     CustomCascadeLayer,
         21 )
         22 from ._binner import Binner

    File ~\anaconda3\lib\site-packages\deepforest\_layer.py:17
         14 from sklearn.base import BaseEstimator, ClassifierMixin, RegressorMixin
         16 from . import _utils
    ---> 17 from ._estimator import Estimator
         18 from .utils.kfoldwrapper import KFoldWrapper

    File ~\anaconda3\lib\site-packages\deepforest\_estimator.py:7
          6 import numpy as np
    ----> 7 from .forest import (
          8     RandomForestClassifier,
          9     ExtraTreesClassifier,
         10     RandomForestRegressor,
         11     ExtraTreesRegressor,
         12 )

    File ~\anaconda3\lib\site-packages\deepforest\forest.py:34
         32 from sklearn.utils import check_random_state, compute_sample_weight
         33 from sklearn.exceptions import DataConversionWarning
    ---> 34 from sklearn.utils.fixes import _joblib_parallel_args
         35 from sklearn.utils.validation import check_is_fitted, _check_sample_weight
         36 from sklearn.utils.validation import _deprecate_positional_args

    ImportError: cannot import name '_joblib_parallel_args' from 'sklearn.utils.fixes' (C:\Users\Mohammad\anaconda3\lib\site-packages\sklearn\utils\fixes.py)

    scikit-learn was upgraded, joblib was upgraded, and I still get the error.

    opened by MohammadSoltani100 2
  • [BUG] cannot correctly clone `CascadeForestRegressor` with `sklearn.base.clone` when using customized estimators

    [BUG] cannot correctly clone `CascadeForestRegressor` with `sklearn.base.clone` when using customized estimators

    Describe the bug A CascadeForestClassifier/CascadeForestRegressor object cannot be correctly cloned with sklearn.base.clone when using customized estimators.

    To Reproduce

    from sklearn.datasets import load_boston
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error
    from sklearn.base import clone
    from deepforest import CascadeForestRegressor
    import xgboost as xgb
    import lightgbm as lgb
    
    X, y = load_boston(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
    model = CascadeForestRegressor(random_state=1)
    
    # set estimator
    n_estimators = 4  # the number of base estimators per cascade layer
    estimators = [lgb.LGBMRegressor(random_state=i)  for i in range(n_estimators)]
    model.set_estimator(estimators)
    
    # set predictor 
    predictor = xgb.XGBRegressor()
    model.set_predictor(predictor)
    
    # clone model
    model_new = clone(model)
    
    # try to fit
    model.fit(X_train, y_train)
    

    Expected behavior No error

    Additional context

    ~/miniconda3/envs/pycaret/lib/python3.8/site-packages/deep_forest-0.1.5-py3.8-linux-x86_64.egg/deepforest/cascade.py in fit(self, X, y, sample_weight)
       1004                 if not hasattr(self, "predictor_"):
       1005                     msg = "Missing predictor after calling `set_predictor`"
    -> 1006                     raise RuntimeError(msg)
       1007 
       1008             binner_ = Binner(
    
    RuntimeError: Missing predictor after calling `set_predictor`
    

    This bug occurs because, when a model with a customized predictor or estimators is cloned, predictor='custom' is copied over, while self.predictor_ / self.dummy_estimators are not, which causes the error described above.

    I think this bug could be fixed by passing the predictor and the list of estimators as constructor parameters of CascadeForestClassifier/CascadeForestRegressor, as other meta-estimators (e.g. ngboost) do, but the corresponding APIs would have to change.

    For example, the API parameters could be:

    model = CascadeForestRegressor(
        estimators=[lgb.LGBMRegressor(random_state=i) for i in range(n_estimators)],
        predictor=xgb.XGBRegressor(),
    )
    
    needtriage 
    opened by IncubatorShokuhou 1
  • take() got an unexpected keyword argument 'axis'

    take() got an unexpected keyword argument 'axis'

    Got an error with this code:

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    from deepforest import CascadeForestClassifier

    model = CascadeForestClassifier(random_state=1)
    model.fit(X_train, y_train)


    TypeError                                 Traceback (most recent call last)
    <ipython-input> in <module>
          6
          7 model = CascadeForestClassifier(random_state=1)
    ----> 8 model.fit(X_train, y_train.values.ravel())

    /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/deepforest/cascade.py in fit(self, X, y, sample_weight)
       1395         y = self._encode_class_labels(y)
       1396
    -> 1397         super().fit(X, y, sample_weight)
       1398
       1399     def predict_proba(self, X):

    /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/deepforest/cascade.py in fit(self, X, y, sample_weight)
        754
        755         # Bin the training data
    --> 756         X_train_ = self.bin_data(binner, X, is_training_data=True)
        757         X_train_ = self.buffer_.cache_data(0, X_train_, is_training_data=True)
        758

    /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/deepforest/cascade.py in _bin_data(self, binner, X, is_training_data)
        665         tic = time.time()
        666         if is_training_data:
    --> 667             X_binned = binner.fit_transform(X)
        668         else:
        669             X_binned = binner.transform(X)

    /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/sklearn/base.py in fit_transform(self, X, y, **fit_params)
        697         if y is None:
        698             # fit method of arity 1 (unsupervised transformation)
    --> 699             return self.fit(X, **fit_params).transform(X)
        700         else:
        701             # fit method of arity 2 (supervised transformation)

    /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/deepforest/_binner.py in fit(self, X)
        128         self.validate_params()
        129
    --> 130         self.bin_thresholds = _find_binning_thresholds(
        131             X,
        132             self.n_bins - 1,

    /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/deepforest/_binner.py in _find_binning_thresholds(X, n_bins, bin_subsample, bin_type, random_state)
         75     if n_samples > bin_subsample:
         76         subset = rng.choice(np.arange(n_samples), bin_subsample, replace=False)
    ---> 77         X = X.take(subset, axis=0)
         78
         79     binning_thresholds = []

    TypeError: take() got an unexpected keyword argument 'axis'

    The dataset is loaded with vaex; is this a problem particular to vaex?

    enhancement 
    opened by JiaLeXian 5
Releases(v0.1.7)
  • v0.1.7(Oct 1, 2022)

  • v0.1.6(Sep 17, 2022)

  • v0.1.5(Apr 16, 2021)

  • v0.1.4(Mar 11, 2021)

    Added

    • Add support on customized estimators (#48) @xuyxu
    • Add official support for ManyLinux-aarch64 (#47) @xuyxu

    Fixed

    • Fix the prediction workflow with only one cascade layer (#56) @xuyxu
    • Fix inconsistency on predictor name (#52) @xuyxu
    • Fix accepted types of target for CascadeForestRegressor (#44) @xuyxu

    Improved

    • Improve target checks for CascadeForestRegressor (#53) @chendingyan
    Source code(tar.gz)
    Source code(zip)
  • v0.1.3(Feb 22, 2021)

    Added

    • Add multi-output support for CascadeForestRegressor (#40) @Alex-Medium
    • Add layer-wise feature importances (#39) @xuyxu
    • Add scikit-learn backend (#36) @xuyxu
    • Add official support for Mac-OS (#34) @T-Allen-sudo
    • Add support on configurable criterion (#28) @tczhao
    Source code(tar.gz)
    Source code(zip)
  • v0.1.2(Feb 11, 2021)

  • v0.1.1(Feb 7, 2021)

    Added

    • Implement the get_forest() method for efficient indexing (#22) @xuyxu
    • Support class label encoding (#18) @NiMaZi
    • Support sample weight in fit() (#7) @tczhao
    • Add configurable predictor parameter (#9) @tczhao
    • Add base class BaseEstimator and ClassifierMixin (#8) @pjgao

    Fixed

    • Fix accepted data types on the binner (#23) @xuyxu
    Source code(tar.gz)
    Source code(zip)
Owner
LAMDA Group, Nanjing University
LAMDA is affiliated with the National Key Laboratory for Novel Software Technology and the Department of Computer Science & Technology, Nanjing University.