(JMLR' 19) A Python Toolbox for Scalable Outlier Detection (Anomaly Detection)

Overview

Python Outlier Detection (PyOD)


PyOD is a comprehensive and scalable Python toolkit for detecting outlying objects in multivariate data. This exciting yet challenging field is commonly referred to as Outlier Detection or Anomaly Detection.

PyOD includes more than 30 detection algorithms, from classical LOF (SIGMOD 2000) to the latest COPOD (ICDM 2020) and SUOD (MLSys 2021). Since 2017, PyOD has been successfully used in numerous academic research projects and commercial products [35] [36]. It is also widely acknowledged by the machine learning community, with various dedicated posts/tutorials, including Analytics Vidhya, KDnuggets, Towards Data Science, Computer Vision News, and awesome-machine-learning.

PyOD is featured for:

  • Unified APIs, detailed documentation, and interactive examples across various algorithms.
  • Advanced models, including classical ones from scikit-learn, latest deep learning methods, and emerging algorithms like COPOD.
  • Optimized performance with JIT and parallelization when possible, using numba and joblib.
  • Fast training & prediction with SUOD [36].
  • Compatible with both Python 2 & 3.

Outlier Detection with 5 Lines of Code:

# train the COPOD detector
from pyod.models.copod import COPOD
clf = COPOD()
clf.fit(X_train)

# get outlier scores
y_train_scores = clf.decision_scores_  # raw outlier scores on the train data
y_test_scores = clf.decision_function(X_test)  # predict raw outlier scores on test
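Binary predictions are then derived from these raw scores by thresholding at the contamination quantile (10% by default). A minimal numpy sketch of that thresholding step (an illustration only, not PyOD's exact code; `scores_to_labels` is a hypothetical helper):

```python
import numpy as np

def scores_to_labels(scores, contamination=0.1):
    """Flag the top `contamination` fraction of raw outlier scores as outliers (1)."""
    threshold = np.percentile(scores, 100 * (1 - contamination))
    return (scores > threshold).astype(int)

rng = np.random.RandomState(42)
scores = rng.rand(100)            # stand-in for clf.decision_scores_
labels = scores_to_labels(scores)  # 10 of 100 points flagged as outliers
```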

Citing PyOD:

The PyOD paper is published in the Journal of Machine Learning Research (JMLR) (MLOSS track). If you use PyOD in a scientific publication, we would appreciate citations to the following paper:

@article{zhao2019pyod,
  author  = {Zhao, Yue and Nasrullah, Zain and Li, Zheng},
  title   = {PyOD: A Python Toolbox for Scalable Outlier Detection},
  journal = {Journal of Machine Learning Research},
  year    = {2019},
  volume  = {20},
  number  = {96},
  pages   = {1-7},
  url     = {http://jmlr.org/papers/v20/19-011.html}
}

or:

Zhao, Y., Nasrullah, Z. and Li, Z., 2019. PyOD: A Python Toolbox for Scalable Outlier Detection. Journal of machine learning research (JMLR), 20(96), pp.1-7.



Installation

It is recommended to use pip or conda for installation. Please make sure the latest version is installed, as PyOD is updated frequently:

pip install pyod            # normal install
pip install --upgrade pyod  # or update if needed
conda install -c conda-forge pyod

Alternatively, you can clone the repository and install from source:

git clone https://github.com/yzhao062/pyod.git
cd pyod
pip install .

Required Dependencies:

  • Python 2.7, 3.5, 3.6, or 3.7
  • combo>=0.0.8
  • joblib
  • numpy>=1.13
  • numba>=0.35
  • scipy>=0.19.1
  • scikit_learn>=0.20.0
  • statsmodels

Optional Dependencies (see details below):

  • combo (optional, required for models/combination.py and FeatureBagging)
  • keras (optional, required for AutoEncoder, and other deep learning models)
  • matplotlib (optional, required for running examples)
  • pandas (optional, required for running benchmark)
  • suod (optional, required for running SUOD model)
  • tensorflow (optional, required for AutoEncoder, and other deep learning models)
  • xgboost (optional, required for XGBOD)

Warning 1: PyOD has multiple neural network based models, e.g., AutoEncoders, which are implemented in both PyTorch and TensorFlow. However, PyOD does NOT install these deep learning libraries for you, which reduces the risk of interfering with your existing installations. If you want to use neural-net based models, please make sure Keras and a backend library, e.g., TensorFlow, are installed. Instructions are provided in the neural-net FAQ. Similarly, models depending on xgboost, e.g., XGBOD, do NOT enforce xgboost installation by default.

Warning 2: PyOD contains multiple models that also exist in scikit-learn. However, the two libraries' APIs are not exactly the same; it is recommended to use only one of them for consistency rather than mixing the results. Refer to Differences between scikit-learn and PyOD for more information.


API Cheatsheet & Reference

Full API Reference: https://pyod.readthedocs.io/en/latest/pyod.html. API cheatsheet for all detectors:

  • fit(X): Fit detector.
  • decision_function(X): Predict raw anomaly score of X using the fitted detector.
  • predict(X): Predict if a particular sample is an outlier or not using the fitted detector.
  • predict_proba(X): Predict the probability of a sample being an outlier using the fitted detector.
  • predict_confidence(X): Predict the model's sample-wise confidence (available in predict and predict_proba) [26].

Key Attributes of a fitted model:

  • decision_scores_: The outlier scores of the training data. The higher the score, the more abnormal the sample.
  • labels_: The binary labels of the training data. 0 stands for inliers and 1 for outliers/anomalies.
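The contract above can be illustrated with a toy detector that scores points by their distance to the training mean. MeanDistanceDetector is a hypothetical class for illustration only, not part of PyOD:

```python
import numpy as np

class MeanDistanceDetector:
    """Toy detector following the API contract above (hypothetical, not PyOD code)."""

    def __init__(self, contamination=0.1):
        self.contamination = contamination

    def fit(self, X):
        X = np.asarray(X)
        self.center_ = X.mean(axis=0)
        # attributes set during fit, mirroring PyOD's convention
        self.decision_scores_ = self.decision_function(X)
        self.threshold_ = np.percentile(
            self.decision_scores_, 100 * (1 - self.contamination))
        self.labels_ = (self.decision_scores_ > self.threshold_).astype(int)
        return self

    def decision_function(self, X):
        # raw outlier score: Euclidean distance to the training mean
        return np.linalg.norm(np.asarray(X) - self.center_, axis=1)

    def predict(self, X):
        return (self.decision_function(X) > self.threshold_).astype(int)
```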

Fast training and prediction: it is possible to train and predict with a large number of detection models in PyOD by leveraging SUOD framework [36]. See SUOD Paper and repository.


Model Save & Load

PyOD takes a similar approach to scikit-learn regarding model persistence. See model persistence for clarification.

In short, we recommend using joblib or pickle for saving and loading PyOD models. See "examples/save_load_model_example.py" for an example; it is as simple as:

from joblib import dump, load

# save the model
dump(clf, 'clf.joblib')
# load the model
clf = load('clf.joblib')

It is known that there are challenges in saving neural network models. Check #328 and #88 for temporary workarounds.


Fast Train with SUOD

Fast training and prediction: it is possible to train and predict with a large number of detection models in PyOD by leveraging SUOD framework [36]. See SUOD Paper and SUOD example.

from pyod.models.copod import COPOD
from pyod.models.iforest import IForest
from pyod.models.lof import LOF
from pyod.models.suod import SUOD

# initialize a group of outlier detectors for acceleration
detector_list = [LOF(n_neighbors=15), LOF(n_neighbors=20),
                 LOF(n_neighbors=25), LOF(n_neighbors=35),
                 COPOD(), IForest(n_estimators=100),
                 IForest(n_estimators=200)]

# decide the number of parallel processes and the combination method;
# clf can then be used like any other outlier detection model
clf = SUOD(base_estimators=detector_list, n_jobs=2, combination='average',
           verbose=False)
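With combination='average', each base detector's scores are standardized and then averaged into a single score per sample. A minimal numpy sketch of that combination step (an illustration, not SUOD's actual implementation; `average_combination` is a hypothetical helper):

```python
import numpy as np

def average_combination(score_matrix):
    """Average z-normalized scores across detectors.

    score_matrix: array of shape (n_samples, n_detectors) holding each
    detector's raw outlier scores for the same samples.
    """
    scores = np.asarray(score_matrix, dtype=float)
    # z-normalize each detector's scores so they are comparable
    mu = scores.mean(axis=0)
    sigma = scores.std(axis=0)
    sigma[sigma == 0] = 1.0          # guard against constant columns
    return ((scores - mu) / sigma).mean(axis=1)
```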

Implemented Algorithms

The PyOD toolkit consists of three major functional groups:

(i) Individual Detection Algorithms:

Type Abbr Algorithm Year Ref
Probabilistic ECOD Unsupervised Outlier Detection Using Empirical Cumulative Distribution Functions 2021 [21]
Probabilistic ABOD Angle-Based Outlier Detection 2008 [16]
Probabilistic FastABOD Fast Angle-Based Outlier Detection using approximation 2008 [16]
Probabilistic COPOD COPOD: Copula-Based Outlier Detection 2020 [20]
Probabilistic MAD Median Absolute Deviation (MAD) 1993 [13]
Probabilistic SOS Stochastic Outlier Selection 2012 [14]
Linear Model PCA Principal Component Analysis (the sum of weighted projected distances to the eigenvector hyperplanes) 2003 [31]
Linear Model MCD Minimum Covariance Determinant (use the mahalanobis distances as the outlier scores) 1999 [11] [28]
Linear Model OCSVM One-Class Support Vector Machines 2001 [30]
Linear Model LMDD Deviation-based Outlier Detection (LMDD) 1996 [6]
Proximity-Based LOF Local Outlier Factor 2000 [7]
Proximity-Based COF Connectivity-Based Outlier Factor 2002 [32]
Proximity-Based (Incremental) COF Memory Efficient Connectivity-Based Outlier Factor (slower but reduce storage complexity) 2002 [32]
Proximity-Based CBLOF Clustering-Based Local Outlier Factor 2003 [12]
Proximity-Based LOCI LOCI: Fast outlier detection using the local correlation integral 2003 [24]
Proximity-Based HBOS Histogram-based Outlier Score 2012 [9]
Proximity-Based kNN k Nearest Neighbors (use the distance to the kth nearest neighbor as the outlier score) 2000 [27]
Proximity-Based AvgKNN Average kNN (use the average distance to k nearest neighbors as the outlier score) 2002 [5]
Proximity-Based MedKNN Median kNN (use the median distance to k nearest neighbors as the outlier score) 2002 [5]
Proximity-Based SOD Subspace Outlier Detection 2009 [17]
Proximity-Based ROD Rotation-based Outlier Detection 2020 [4]
Outlier Ensembles IForest Isolation Forest 2008 [22]
Outlier Ensembles FB Feature Bagging 2005 [18]
Outlier Ensembles LSCP LSCP: Locally Selective Combination of Parallel Outlier Ensembles 2019 [35]
Outlier Ensembles XGBOD Extreme Boosting Based Outlier Detection (Supervised) 2018 [34]
Outlier Ensembles LODA Lightweight On-line Detector of Anomalies 2016 [25]
Outlier Ensembles SUOD SUOD: Accelerating Large-scale Unsupervised Heterogeneous Outlier Detection (Acceleration) 2021 [36]
Neural Networks AutoEncoder Fully connected AutoEncoder (use reconstruction error as the outlier score)   [1] [Ch.3]
Neural Networks VAE Variational AutoEncoder (use reconstruction error as the outlier score) 2013 [15]
Neural Networks Beta-VAE Variational AutoEncoder (customized loss term by varying gamma and capacity) 2018 [8]
Neural Networks SO_GAAL Single-Objective Generative Adversarial Active Learning 2019 [23]
Neural Networks MO_GAAL Multiple-Objective Generative Adversarial Active Learning 2019 [23]
Neural Networks DeepSVDD Deep One-Class Classification 2018 [29]

(ii) Outlier Ensembles & Outlier Detector Combination Frameworks:

Type Abbr Algorithm Year Ref
Outlier Ensembles   Feature Bagging 2005 [18]
Outlier Ensembles LSCP LSCP: Locally Selective Combination of Parallel Outlier Ensembles 2019 [35]
Outlier Ensembles XGBOD Extreme Boosting Based Outlier Detection (Supervised) 2018 [34]
Outlier Ensembles LODA Lightweight On-line Detector of Anomalies 2016 [25]
Outlier Ensembles SUOD SUOD: Accelerating Large-scale Unsupervised Heterogeneous Outlier Detection (Acceleration) 2021 [36]
Combination Average Simple combination by averaging the scores 2015 [2]
Combination Weighted Average Simple combination by averaging the scores with detector weights 2015 [2]
Combination Maximization Simple combination by taking the maximum scores 2015 [2]
Combination AOM Average of Maximum 2015 [2]
Combination MOA Maximization of Average 2015 [2]
Combination Median Simple combination by taking the median of the scores 2015 [2]
Combination Majority Vote Simple combination by taking the majority vote of the labels (weights can be used) 2015 [2]

(iii) Utility Functions:

Type Name Function Documentation
Data generate_data Synthesized data generation; normal data is generated by a multivariate Gaussian and outliers are generated by a uniform distribution generate_data
Data generate_data_clusters Synthesized data generation in clusters; more complex data patterns can be created with multiple clusters generate_data_clusters
Stat wpearsonr Calculate the weighted Pearson correlation of two samples wpearsonr
Utility get_label_n Turn raw outlier scores into binary labels by assigning 1 to the top n outlier scores get_label_n
Utility precision_n_scores Calculate precision @ rank n precision_n_scores
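precision_n_scores measures precision among the n highest-scored points, where n is the number of true outliers in the ground truth. A minimal numpy sketch of that metric (an illustration, not the library's exact implementation; `precision_at_n` is a hypothetical name):

```python
import numpy as np

def precision_at_n(y_true, scores):
    """Precision among the top-n scored points, n = number of true outliers."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    n = int(y_true.sum())              # number of true outliers
    top_n = np.argsort(scores)[-n:]    # indices of the n highest scores
    return float(y_true[top_n].mean())
```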

Algorithm Benchmark

The comparison of implemented models is made available below (Figure, compare_all_models.py, Interactive Jupyter Notebooks). For Jupyter Notebooks, please navigate to "/notebooks/Compare All Models.ipynb".

[Figure: comparison of all implemented models]

A benchmark is supplied for select algorithms to provide an overview of the implemented models. In total, 17 benchmark datasets are used for comparison, which can be downloaded at ODDS.

For each dataset, it is first split into 60% for training and 40% for testing. All experiments are repeated 10 times independently with random splits. The mean of 10 trials is regarded as the final result. Three evaluation metrics are provided:

  • The area under receiver operating characteristic (ROC) curve
  • Precision @ rank n (P@N)
  • Execution time
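ROC AUC can be computed rank-wise: it equals the probability that a randomly chosen outlier receives a higher score than a randomly chosen inlier. A self-contained numpy sketch of that metric (for illustration; `roc_auc` here is a hypothetical helper, and the benchmark itself relies on standard metric implementations):

```python
import numpy as np

def roc_auc(y_true, scores):
    """Rank-based ROC AUC: P(random outlier scores above random inlier)."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    # count outlier/inlier pairs where the outlier outranks the inlier
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```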

Check the latest benchmark. You can replicate this process by running benchmark.py.


Quick Start for Outlier Detection

PyOD has been well acknowledged by the machine learning community with a few featured posts and tutorials.

Analytics Vidhya: An Awesome Tutorial to Learn Outlier Detection in Python using PyOD Library

KDnuggets: Intuitive Visualization of Outlier Detection Methods, An Overview of Outlier Detection Methods from PyOD

Towards Data Science: Anomaly Detection for Dummies

Computer Vision News (March 2019): Python Open Source Toolbox for Outlier Detection

"examples/knn_example.py" demonstrates the basic API of using kNN detector. It is noted that the API across all other algorithms are consistent/similar.

More detailed instructions for running examples can be found in examples directory.

  1. Initialize a kNN detector, fit the model, and make the prediction.

    from pyod.models.knn import KNN   # kNN detector
    
    # train kNN detector
    clf_name = 'KNN'
    clf = KNN()
    clf.fit(X_train)
    
    # get the prediction label and outlier scores of the training data
    y_train_pred = clf.labels_  # binary labels (0: inliers, 1: outliers)
    y_train_scores = clf.decision_scores_  # raw outlier scores
    
    # get the prediction on the test data
    y_test_pred = clf.predict(X_test)  # outlier labels (0 or 1)
    y_test_scores = clf.decision_function(X_test)  # outlier scores
    
    # it is possible to get the prediction confidence as well
    y_test_pred, y_test_pred_confidence = clf.predict(X_test, return_confidence=True)  # outlier labels (0 or 1) and confidence in the range of [0,1]
  2. Evaluate the prediction by ROC and Precision @ Rank n (P@N).

    from pyod.utils.data import evaluate_print
    
    # evaluate and print the results
    print("\nOn Training Data:")
    evaluate_print(clf_name, y_train, y_train_scores)
    print("\nOn Test Data:")
    evaluate_print(clf_name, y_test, y_test_scores)
  3. See a sample output & visualization.

    On Training Data:
    KNN ROC:1.0, precision @ rank n:1.0
    
    On Test Data:
    KNN ROC:0.9989, precision @ rank n:0.9

    visualize(clf_name, X_train, y_train, X_test, y_test, y_train_pred,
              y_test_pred, show_figure=True, save_figure=False)

Visualization (knn_figure):

[Figure: kNN example]

How to Contribute

You are welcome to contribute to this exciting project:

  • Please first check the Issue list for the "help wanted" tag and comment on the one you are interested in. We will assign the issue to you.
  • Fork the master branch and add your improvement/modification/fix.
  • Create a pull request to the development branch and follow the pull request template (PR template).
  • Automatic tests will be triggered. Make sure all tests pass, and that all added modules are accompanied by proper test functions.

To make sure the code has the same style and standard, please refer to abod.py, hbos.py, or feature_bagging.py as examples.

You are also welcome to share your ideas by opening an issue or dropping me an email at [email protected] :)

Inclusion Criteria

Similar to scikit-learn, we mainly consider well-established algorithms for inclusion. A rule of thumb is at least two years since publication, 50+ citations, and demonstrated usefulness.

However, we encourage the author(s) of newly proposed models to share and add their implementation to PyOD to boost ML accessibility and reproducibility. This exception only applies if you can commit to maintaining your model for at least a two-year period.


Reference

[1] Aggarwal, C.C., 2015. Outlier analysis. In Data mining (pp. 237-263). Springer, Cham.
[2] (1, 2, 3, 4, 5, 6, 7) Aggarwal, C.C. and Sathe, S., 2015. Theoretical foundations and algorithms for outlier ensembles. ACM SIGKDD Explorations Newsletter, 17(1), pp.24-47.
[3] Aggarwal, C.C. and Sathe, S., 2017. Outlier ensembles: An introduction. Springer.
[4] Almardeny, Y., Boujnah, N. and Cleary, F., 2020. A Novel Outlier Detection Method for Multivariate Data. IEEE Transactions on Knowledge and Data Engineering.
[5] (1, 2) Angiulli, F. and Pizzuti, C., 2002, August. Fast outlier detection in high dimensional spaces. In European Conference on Principles of Data Mining and Knowledge Discovery pp. 15-27.
[6] Arning, A., Agrawal, R. and Raghavan, P., 1996, August. A Linear Method for Deviation Detection in Large Databases. In KDD (Vol. 1141, No. 50, pp. 972-981).
[7] Breunig, M.M., Kriegel, H.P., Ng, R.T. and Sander, J., 2000, May. LOF: identifying density-based local outliers. ACM Sigmod Record, 29(2), pp. 93-104.
[8] Burgess, Christopher P., et al. "Understanding disentangling in beta-VAE." arXiv preprint arXiv:1804.03599 (2018).
[9] Goldstein, M. and Dengel, A., 2012. Histogram-based outlier score (hbos): A fast unsupervised anomaly detection algorithm. In KI-2012: Poster and Demo Track, pp.59-63.
[10] Gopalan, P., Sharan, V. and Wieder, U., 2019. PIDForest: Anomaly Detection via Partial Identification. In Advances in Neural Information Processing Systems, pp. 15783-15793.
[11] Hardin, J. and Rocke, D.M., 2004. Outlier detection in the multiple cluster setting using the minimum covariance determinant estimator. Computational Statistics & Data Analysis, 44(4), pp.625-638.
[12] He, Z., Xu, X. and Deng, S., 2003. Discovering cluster-based local outliers. Pattern Recognition Letters, 24(9-10), pp.1641-1650.
[13] Iglewicz, B. and Hoaglin, D.C., 1993. How to detect and handle outliers (Vol. 16). Asq Press.
[14] Janssens, J.H.M., Huszár, F., Postma, E.O. and van den Herik, H.J., 2012. Stochastic outlier selection. Technical report TiCC TR 2012-001, Tilburg University, Tilburg Center for Cognition and Communication, Tilburg, The Netherlands.
[15] Kingma, D.P. and Welling, M., 2013. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114.
[16] (1, 2) Kriegel, H.P. and Zimek, A., 2008, August. Angle-based outlier detection in high-dimensional data. In KDD '08, pp. 444-452. ACM.
[17] Kriegel, H.P., Kröger, P., Schubert, E. and Zimek, A., 2009, April. Outlier detection in axis-parallel subspaces of high dimensional data. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, pp. 831-838. Springer, Berlin, Heidelberg.
[18] (1, 2) Lazarevic, A. and Kumar, V., 2005, August. Feature bagging for outlier detection. In KDD '05. 2005.
[19] Li, D., Chen, D., Jin, B., Shi, L., Goh, J. and Ng, S.K., 2019, September. MAD-GAN: Multivariate anomaly detection for time series data with generative adversarial networks. In International Conference on Artificial Neural Networks (pp. 703-716). Springer, Cham.
[20] Li, Z., Zhao, Y., Botta, N., Ionescu, C. and Hu, X. COPOD: Copula-Based Outlier Detection. IEEE International Conference on Data Mining (ICDM), 2020.
[21] Li, Z., Zhao, Y., Hu, X., Botta, N., Ionescu, C. and Chen, H. G. ECOD: Unsupervised Outlier Detection Using Empirical Cumulative Distribution Functions. arXiv preprint arXiv:2201.00382 (2021).
[22] Liu, F.T., Ting, K.M. and Zhou, Z.H., 2008, December. Isolation forest. In International Conference on Data Mining, pp. 413-422. IEEE.
[23] (1, 2) Liu, Y., Li, Z., Zhou, C., Jiang, Y., Sun, J., Wang, M. and He, X., 2019. Generative adversarial active learning for unsupervised outlier detection. IEEE Transactions on Knowledge and Data Engineering.
[24] Papadimitriou, S., Kitagawa, H., Gibbons, P.B. and Faloutsos, C., 2003, March. LOCI: Fast outlier detection using the local correlation integral. In ICDE '03, pp. 315-326. IEEE.
[25] (1, 2) Pevný, T., 2016. Loda: Lightweight on-line detector of anomalies. Machine Learning, 102(2), pp.275-304.
[26] Perini, L., Vercruyssen, V., Davis, J. Quantifying the confidence of anomaly detectors in their example-wise predictions. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (ECML-PKDD), 2020.
[27] Ramaswamy, S., Rastogi, R. and Shim, K., 2000, May. Efficient algorithms for mining outliers from large data sets. ACM Sigmod Record, 29(2), pp. 427-438.
[28] Rousseeuw, P.J. and Driessen, K.V., 1999. A fast algorithm for the minimum covariance determinant estimator. Technometrics, 41(3), pp.212-223.
[29] Ruff, L., Vandermeulen, R., Goernitz, N., Deecke, L., Siddiqui, S.A., Binder, A., Müller, E. and Kloft, M., 2018, July. Deep one-class classification. In International conference on machine learning (pp. 4393-4402). PMLR.
[30] Scholkopf, B., Platt, J.C., Shawe-Taylor, J., Smola, A.J. and Williamson, R.C., 2001. Estimating the support of a high-dimensional distribution. Neural Computation, 13(7), pp.1443-1471.
[31] Shyu, M.L., Chen, S.C., Sarinnapakorn, K. and Chang, L., 2003. A novel anomaly detection scheme based on principal component classifier. MIAMI UNIV CORAL GABLES FL DEPT OF ELECTRICAL AND COMPUTER ENGINEERING.
[32] (1, 2) Tang, J., Chen, Z., Fu, A.W.C. and Cheung, D.W., 2002, May. Enhancing effectiveness of outlier detections for low density patterns. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, pp. 535-548. Springer, Berlin, Heidelberg.
[33] Wang, X., Du, Y., Lin, S., Cui, P., Shen, Y. and Yang, Y., 2019. adVAE: A self-adversarial variational autoencoder with Gaussian anomaly prior knowledge for anomaly detection. Knowledge-Based Systems.
[34] (1, 2) Zhao, Y. and Hryniewicki, M.K. XGBOD: Improving Supervised Outlier Detection with Unsupervised Representation Learning. IEEE International Joint Conference on Neural Networks, 2018.
[35] (1, 2, 3) Zhao, Y., Nasrullah, Z., Hryniewicki, M.K. and Li, Z., 2019, May. LSCP: Locally selective combination in parallel outlier ensembles. In Proceedings of the 2019 SIAM International Conference on Data Mining (SDM), pp. 585-593. Society for Industrial and Applied Mathematics.
[36] (1, 2, 3, 4, 5, 6) Zhao, Y., Hu, X., Cheng, C., Wang, C., Wan, C., Wang, W., Yang, J., Bai, H., Li, Z., Xiao, C., Wang, Y., Qiao, Z., Sun, J. and Akoglu, L. (2021). SUOD: Accelerating Large-scale Unsupervised Heterogeneous Outlier Detection. Conference on Machine Learning and Systems (MLSys).
Comments
  • pyod fails to install using pip


    When attempting to install without nose, I receive the following error:

    (PyVi) Michael:PyVi michael$ pip install pyod
    Collecting pyod==0.5.0 (from -r requirements.txt (line 18))
      Using cached https://files.pythonhosted.org/packages/c9/8c/6774fa2e7ae6fe9c2c648114d15ba584f950002377480e14183a0999af30/pyod-0.5.0.tar.gz
        Complete output from command python setup.py egg_info:
        Traceback (most recent call last):
          File "<string>", line 1, in <module>
          File "/private/var/folders/j4/_68f6f3j4d51_2smq2mh5hyh0000gn/T/pip-install-gjdzzane/pyod/setup.py", line 2, in <module>
            from pyod import __version__
          File "/private/var/folders/j4/_68f6f3j4d51_2smq2mh5hyh0000gn/T/pip-install-gjdzzane/pyod/pyod/__init__.py", line 4, in <module>
            from . import models
          File "/private/var/folders/j4/_68f6f3j4d51_2smq2mh5hyh0000gn/T/pip-install-gjdzzane/pyod/pyod/models/__init__.py", line 2, in <module>
            from .abod import ABOD
          File "/private/var/folders/j4/_68f6f3j4d51_2smq2mh5hyh0000gn/T/pip-install-gjdzzane/pyod/pyod/models/abod.py", line 17, in <module>
            from .base import BaseDetector
          File "/private/var/folders/j4/_68f6f3j4d51_2smq2mh5hyh0000gn/T/pip-install-gjdzzane/pyod/pyod/models/base.py", line 27, in <module>
            from ..utils.utility import precision_n_scores
          File "/private/var/folders/j4/_68f6f3j4d51_2smq2mh5hyh0000gn/T/pip-install-gjdzzane/pyod/pyod/utils/__init__.py", line 2, in <module>
            from .utility import check_parameter
          File "/private/var/folders/j4/_68f6f3j4d51_2smq2mh5hyh0000gn/T/pip-install-gjdzzane/pyod/pyod/utils/utility.py", line 18, in <module>
            from sklearn.utils.testing import assert_equal
          File "/Users/michael/anaconda3/envs/PyVi/lib/python3.6/site-packages/sklearn/utils/testing.py", line 49, in <module>
            from nose.tools import raises
        ModuleNotFoundError: No module named 'nose'
        
        ----------------------------------------
    Command "python setup.py egg_info" failed with error code 1 in /private/var/folders/j4/_68f6f3j4d51_2smq2mh5hyh0000gn/T/pip-install-gjdzzane/pyod/
    
    bug 
    opened by mdlockyer 18
  • LUNAR


    All Submissions Basics:

    • [ ] Have you followed the guidelines in our Contributing document?
    • [ ] Have you checked to ensure there aren't other open Pull Requests for the same update/change?
    • [ ] Have you checked all Issues to tie the PR to a specific one?

    All Submissions Cores:

    • [ ] Have you added an explanation of what your changes do and why you'd like us to include them?
    • [ ] Have you written new tests for your core changes, as applicable?
    • [ ] Have you successfully ran tests with your changes locally?
    • [ ] Does your submission pass tests, including CircleCI, Travis CI, and AppVeyor?
    • [ ] Does your submission have appropriate code coverage? The cutoff threshold is 95% by Coveralls.

    New Model Submissions:

    • [ ] Have you created a .py in ~/pyod/models/?
    • [ ] Have you created a _example.py in ~/examples/?
    • [ ] Have you created a test_.py in ~/pyod/test/?
    • [ ] Have you lint your code locally prior to submission?
    opened by agoodge 17
  • KNN Mahalanobis distance error


    Hi,

    When I use the Mahalanobis metric for KNN I always get the error "Must provide either V or VI for Mahalanobis distance" even when I provide V with metric_params. The same request works with sklearn.neighbors.

    
    from pyod.models.knn import KNN  
    from pyod.utils.data import generate_data
    from sklearn.neighbors import NearestNeighbors
    import numpy as np
    
    contamination = 0.1  
    n_train = 200  
    n_test = 100 
    
    X_train, y_train, X_test, y_test = generate_data(n_train=n_train, n_test=n_test, contamination=contamination)
    
    #Doesn't work (Must provide either V or VI for Mahalanobis distance)
    clf = KNN(algorithm='brute', metric='mahalanobis', metric_params={'V': np.cov(X_train)})
    clf.fit(X_train)
    
    #Works
    nn = NearestNeighbors(algorithm='brute', metric='mahalanobis', metric_params={'V': np.cov(X_train)})
    nn.fit(X_train)
    
    bug 
    opened by hanshupe 12
  • COF and SOD huge bugs?


    Hi all. Something is happening that I cannot understand. I'm doing hyperparameter tuning for all proximity-based algorithms with RandomizedSearchCV from sklearn, and only COF and SOD have problems. The constructor parameters are all None even though I'm passing values to them. I tried to change the code of SOD and hardcoded the values in the constructor, and I'm still getting NoneType. Please help.

    opened by dcabanas 11
  • COPOD Explainability on unseen data


    COPOD explain_outlier() function works only for a given data point within the dataset. Is there any approach for explainability on unseen data( data on which model has not been trained) ?

    opened by thewall27 10
  • outlier score highly correlated to over distance to points of origin


    I calculated the distance of each data point to the origin using 'np.linalg.norm(x)', where x is one multivariate sample, then normalized all these values to 0-1; I call this the 'global_score'. When I compare the global score to scores from different methods, it turns out to be highly correlated (0.99) with PCA, autoencoder, CBLOF, and KNN. So it seems all these methods are essentially measuring the overall distance of the samples from the origin, rather than detecting anomalies relative to multiple clusters. I was very troubled by this fact and hope you can confirm whether this is true and, if so, what the reason is.

    Thanks

    opened by flycloudking 10
  • R-graph method implemented


    R-graph

    paper: https://openaccess.thecvf.com/content_cvpr_2017/papers/You_Provable_Self-Representation_Based_CVPR_2017_paper.pdf

    All Submissions Basics:

    • [x] Have you followed the guidelines in our Contributing document?
    • [x] Have you checked to ensure there aren't other open Pull Requests for the same update/change?
    • [x] Have you checked all Issues to tie the PR to a specific one?

    All Submissions Cores:

    • [x] Have you added an explanation of what your changes do and why you'd like us to include them?
    • [x] Have you written new tests for your core changes, as applicable?
    • [x] Have you successfully ran tests with your changes locally?
    • [x] Does your submission pass tests, including CircleCI, Travis CI, and AppVeyor?
    • [ ] Does your submission have appropriate code coverage? The cutoff threshold is 95% by Coveralls.

    New Model Submissions:

    • [x] Have you created a .py in ~/pyod/models/?
    • [x] Have you created a _example.py in ~/examples/?
    • [x] Have you created a test_.py in ~/pyod/test/?
    • [x] Have you lint your code locally prior to submission?
    opened by mbongaerts 9
  • added functionality for scoring the individual features for the COPOD…


    … algorithm in copod.py. This includes the single threaded and multi-threaded implementation.

    This should also resolve the issue https://github.com/yzhao062/pyod/issues/308

    This is my first pull-request in a public repo, so please let me know whether I have included enough information.

    All Submissions Basics:

    • [x] Have you followed the guidelines in our Contributing document?
    • [x] Have you checked to ensure there aren't other open Pull Requests for the same update/change?
    • [x] Have you checked all Issues to tie the PR to a specific one?

    All Submissions Cores:

    • [x] Have you added an explanation of what your changes do and why you'd like us to include them?
    • [x] Have you written new tests for your core changes, as applicable?
    • [x] Have you successfully ran tests with your changes locally?
    • [x] Does your submission pass tests, including CircleCI, Travis CI, and AppVeyor?
    • [x] Does your submission have appropriate code coverage? The cutoff threshold is 95% by Coveralls.
    opened by psmgeelen 9
  • Wrong Label  on method SOGAAL and MOGAAL


    opened by luisfelipe18 9
  • COF Algorithm


    All Submissions Basics:

    Closes #7

    • [x] Have you followed the guidelines in our Contributing document?
    • [x] Have you checked to ensure there aren't other open Pull Requests for the same update/change?
    • [x] Have you checked all Issues to tie the PR to a specific one?

    All Submissions Cores:

    • [x] Have you added an explanation of what your changes do and why you'd like us to include them?
    • [x] Have you written new tests for your core changes, as applicable?
    • [x] Have you successfully ran tests with your changes locally?
    • [x] Does your submission pass tests, including CircleCI, Travis CI, and AppVeyor?
    • [x] Does your submission have appropriate code coverage? The cutoff threshold is 95% by Coveralls.

    New Model Submissions:

    • [x] Have you created a .py in ~/pyod/models/?
    • [x] Have you created a _example.py in ~/examples/?
    • [x] Have you created a test_.py in ~/pyod/test/?
    • [x] Have you lint your code locally prior to submission?
    opened by John-Almardeny 9
  • SOD implementation


    All Submissions Basics:

    #60

    • [x] Have you followed the guidelines in our Contributing document?
    • [x] Have you checked to ensure there aren't other open Pull Requests for the same update/change?
    • [x] Have you checked all Issues to tie the PR to a specific one?

    All Submissions Cores:

    • [x] Have you added an explanation of what your changes do and why you'd like us to include them?
    • [x] Have you written new tests for your core changes, as applicable?
    • [x] Have you successfully run tests with your changes locally?

    New Model Submissions:

    • [x] Have you created a .py in ~/pyod/models/?
    • [x] Have you created a test_.py in ~/pyod/test/?
    • [x] Have you linted your code locally prior to submission?
    opened by John-Almardeny 9
  • Results from LODA are not reproducible

    Results from LODA are not reproducible

    The function uses numpy.random.randn to sample the random cuts, but it does not accept a random_state parameter like most of the other non-deterministic algorithms.

    opened by kgmccann 0
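Until LODA accepts a random_state, one workaround (a sketch, assuming LODA draws its random cuts from NumPy's global RNG via numpy.random.randn) is to re-seed that global RNG immediately before each fit, so consecutive runs draw identical projections:

```python
import numpy as np

# Sketch of the workaround: re-seeding NumPy's global RNG right before
# each fit() makes the "random" cuts repeatable across runs. The randn
# calls below stand in for LODA's internal draws.
np.random.seed(42)
proj_a = np.random.randn(4, 3)   # first run's projections

np.random.seed(42)
proj_b = np.random.randn(4, 3)   # second run, same seed -> same draws

assert np.allclose(proj_a, proj_b)
```

This is a global, process-wide workaround; the cleaner fix is still for LODA to take a random_state parameter like the other detectors.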
  • Train/Test split or not?

    Train/Test split or not?

    Hi, I have a conceptual question: should I split my dataset into train/test sets or not? Given that my dataset has no labels, does splitting make sense in the first place? I could simply call clf.fit(data) and then read the resulting labels from clf.labels_, and since I train in an unsupervised manner the classifier should not overfit in any way, right?

    opened by lorisgir 1
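For purely unsupervised use, fitting on all the data and reading the labels back is indeed common. A minimal sketch, using scikit-learn's IsolationForest as a stand-in for a PyOD detector (PyOD detectors expose the analogous labels_ attribute, with 0 for inliers and 1 for outliers):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))   # unlabeled data
X[:5] += 6                      # inject five obvious outliers

clf = IsolationForest(random_state=0).fit(X)  # fit on everything
labels = clf.predict(X)         # -1 = outlier, 1 = inlier

assert (labels[:5] == -1).all()  # the injected points are flagged
```

A split is mainly useful when you want to see how scores generalize to unseen points (via decision_function on a holdout); with no labels there is no supervised overfitting to measure, so fitting on the full dataset is the usual choice.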
  • Implementing ECDF Estimator and deleting Statsmodels dependency

    Implementing ECDF Estimator and deleting Statsmodels dependency

    Hey everyone,

    as stated in #466 and in #453, the empirical cumulative distribution function (ECDF) can be computed faster than with the Statsmodels ECDF functionality.

    This also makes the statsmodels dependency obsolete, and this pull request removes it.

    In this pull request the following things are done:

    1. Implement a standalone ECDF estimator in pyod/utils/stat_models.py.
    2. Write a test that compares the new implementation to the statsmodels implementation on several random matrices (so statsmodels remains a requirement in requirements_ci.txt).
    3. Delete and replace the functionality in ECOD and COPOD (the only places this dependency was used).

    The implementation is now faster (by 30-60%), since we only evaluate the ECDF on the data we estimate it from. Please get back to me if a further explanation of exactly why is necessary; I will gladly elaborate.

    Since not everyone may want to dive fully into the topic, I kept the statsmodels dependency in the tests and compare this implementation to the statsmodels function on several random matrices. One could see that as proof that it works.

    Thanks in advance! :-)
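The core idea can be sketched in a few lines of NumPy (a minimal illustration of evaluating the ECDF only at the sample points themselves, not the PR's actual code):

```python
import numpy as np

def ecdf(x):
    """ECDF evaluated at the sample points: ecdf(x)[i] = #{j : x[j] <= x[i]} / n."""
    x = np.asarray(x, dtype=float)
    # for each x[i], count how many sorted values are <= x[i]
    return np.searchsorted(np.sort(x), x, side="right") / len(x)

print(ecdf([3.0, 1.0, 2.0, 2.0]))  # [1.   0.25 0.75 0.75]
```

Evaluating only at the sample points avoids building statsmodels' step-function object entirely, which is where the speedup comes from.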


    All Submissions Basics:

    • [x] Have you followed the guidelines in our Contributing document?
    • [x] Have you checked to ensure there aren't other open Pull Requests for the same update/change?
    • [x] Have you checked all Issues to tie the PR to a specific one?

    All Submissions Cores:

    • [x] Have you added an explanation of what your changes do and why you'd like us to include them?
    • [x] Have you written new tests for your core changes, as applicable?
    • [x] Have you successfully run tests with your changes locally?
    • [x] Does your submission pass tests, including CircleCI, Travis CI, and AppVeyor?
    • [x] Does your submission have appropriate code coverage? The cutoff threshold is 95% by Coveralls.
    opened by Lucew 3
  • Statsmodels package is only used for one function

    Statsmodels package is only used for one function

    Hey,

    as mentioned in #453, the ECDF estimator from statsmodels can be reimplemented as a small function that also runs faster (50-60%), since we do the estimation in place and do not use it further.

    As you can see from this search, the statsmodels package is only ever used for its ecdf estimator.

    Once the tests pass, I will open a pull request for the updated version, which no longer requires statsmodels.

    Thanks in advance, Lucas

    opened by Lucew 1
  • Initializing train weights from a saved model and continuing training

    Initializing train weights from a saved model and continuing training

    I've trained a multi-class AutoEncoder model using the pyod library and saved it as a pickle file. I would like to use the weights from this single multi-class model to initialize training of single-class AutoEncoder models. How can the training weights be saved and reloaded, and how can training be resumed through PyOD?

    opened by alucic2 0
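PyOD's model persistence documentation describes joblib-based save/load of a fitted detector. A minimal sketch, with scikit-learn's IsolationForest standing in for a PyOD detector (note this covers persistence and reuse only; warm-starting a Keras-backed AutoEncoder from saved weights additionally depends on the underlying Keras model's own save/load, which this sketch does not cover):

```python
import os
import tempfile

import joblib
import numpy as np
from sklearn.ensemble import IsolationForest

X = np.random.default_rng(0).normal(size=(100, 3))
clf = IsolationForest(random_state=0).fit(X)

path = os.path.join(tempfile.mkdtemp(), "clf.joblib")
joblib.dump(clf, path)        # persist the fitted detector
restored = joblib.load(path)  # reload in a later session

# the restored model scores identically to the original
assert np.allclose(clf.decision_function(X), restored.decision_function(X))
```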
  • Is 4D (2D) geospatial data anomaly detection supported?

    Is 4D (2D) geospatial data anomaly detection supported?

    I just discovered this package and I was wondering whether it is possible to perform anomaly detection on geospatial data, for example data coming from a network of weather stations that all measure air temperature.

    In the past we used either a simple statistical model that checks whether a station's measured value falls in the IQR of the neighbouring stations, or a supervised model (using SVM) trained with values from neighbouring stations and models. I was wondering if it would be possible to apply one of the models in pyod to this kind of data to identify outliers.

    The data can be thought of as 4D, since we have variable[id_station, time, latitude, longitude], but in practice we always apply the model in 2D, as we compare the value of a station to its neighbours. Still, it would be good to have a generalized model that can consider all dimensions at the same time.

    Thanks for any info :)

    opened by guidocioni 1
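One pragmatic route (a sketch under assumed data shapes, not a built-in pyod feature) is to flatten each (station, time) reading together with its neighbours' simultaneous readings into a tabular matrix that any pyod detector could then score:

```python
import numpy as np

rng = np.random.default_rng(1)
n_stations, n_times, k = 6, 50, 2
temps = rng.normal(15, 3, size=(n_stations, n_times))  # variable[id_station, time]

# assumed precomputed from latitude/longitude: each station's k nearest neighbours
neigh = np.array([[1, 2], [0, 2], [0, 1], [4, 5], [3, 5], [3, 4]])

# one row per (station, time): the station's own value followed by its
# neighbours' values at the same timestamp
X = np.stack([
    np.concatenate(([temps[s, t]], temps[neigh[s], t]))
    for s in range(n_stations) for t in range(n_times)
])
print(X.shape)  # (300, 3)
```

Each row then encodes the spatial context the IQR check used, while a tabular detector learns the joint pattern instead of a hand-set threshold.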
Releases(v1.0.7)
  • v1.0.7(Dec 16, 2022)

  • v1.0.6(Oct 24, 2022)

  • v1.0.5(Sep 15, 2022)

    • v<1.0.5>, <07/29/2022> -- Import optimization.
    • v<1.0.5>, <08/27/2022> -- Code optimization.
    • v<1.0.5>, <09/14/2022> -- Add ALAD.

    AnoGAN is too slow to run; consider removing or refactoring it.

    Source code(tar.gz)
    Source code(zip)
  • v1.0.4(Jul 29, 2022)

    • v<1.0.4>, <07/29/2022> -- General improvement of code quality and test coverage.
    • v<1.0.4>, <07/29/2022> -- Add LUNAR (#413).
    • v<1.0.4>, <07/29/2022> -- Add LUNAR (#415).

    Source code(tar.gz)
    Source code(zip)
  • v1.0.3(Jul 5, 2022)

  • v1.0.2(Jun 23, 2022)

  • v1.0.1(May 13, 2022)

    • v<1.0.1>, <04/27/2022> -- Add INNE (#396).
    • v<1.0.1>, <05/13/2022> -- Urgent fix for iForest (#406).

    Urgent fix for:

        File "lib/python3.10/site-packages/pyod/models/iforest.py", line 13, in <module>
            from sklearn.utils.fixes import _joblib_parallel_args
        ImportError: cannot import name '_joblib_parallel_args' from 'sklearn.utils.fixes' (/lib/python3.10/site-packages/sklearn/utils/fixes.py)

    Source code(tar.gz)
    Source code(zip)
  • v1.0.0(Apr 23, 2022)

    • v<1.0.0>, <04/04/2022> -- Add KDE detector (#382).
    • v<1.0.0>, <04/06/2022> -- Disable the bias term in DeepSVDD (#385).
    • v<1.0.0>, <04/21/2022> -- Fix a set of issues with autoencoders (#313, #390, #391).
    • v<1.0.0>, <04/23/2022> -- Add sampling-based detector (#384).

    Source code(tar.gz)
    Source code(zip)
  • v0.9.9(Apr 4, 2022)

    • v<0.9.9>, <03/20/2022> -- Renovate documentation.
    • v<0.9.9>, <03/23/2022> -- Add example for COPOD interpretability.
    • v<0.9.9>, <03/23/2022> -- Add outlier detection by Cook's distances.
    • v<0.9.9>, <04/04/2022> -- Various community fixes.

    Source code(tar.gz)
    Source code(zip)
  • v0.9.8(Mar 5, 2022)

    • v<0.9.8>, <02/23/2022> -- Add feature importance for iForest.
    • v<0.9.8>, <03/05/2022> -- Update ECOD (TKDE 2022).

    See the usage of iForest feature importance in https://github.com/yzhao062/pyod/blob/master/examples/iforest_example.py and the new ECOD detector in https://github.com/yzhao062/pyod/blob/master/examples/ecod_example.py.

    Source code(tar.gz)
    Source code(zip)
  • v0.9.7(Jan 4, 2022)

  • v0.9.6(Dec 25, 2021)

    Happy holidays!

    • v<0.9.6>, <11/05/2021> -- Minor bug fix for COPOD.
    • v<0.9.6>, <12/24/2021> -- Bug fix for MAD (#358).
    • v<0.9.6>, <12/24/2021> -- Bug fix for COPOD plotting (#337).
    • v<0.9.6>, <12/24/2021> -- Model persistence doc improvement.

    Source code(tar.gz)
    Source code(zip)
  • v0.9.5(Oct 27, 2021)

    This important update introduces multiple new features:

    • v<0.9.5>, <09/10/2021> -- Update to GitHub Actions for autotest!
    • v<0.9.5>, <09/10/2021> -- Various documentation fixes.
    • v<0.9.5>, <10/26/2021> -- MAD fix (#318).
    • v<0.9.5>, <10/26/2021> -- Automatic histogram size selection for HBOS and LODA (#321).
    • v<0.9.5>, <10/27/2021> -- Add prediction confidence (#349).

    Source code(tar.gz)
    Source code(zip)
  • v0.9.4(Oct 1, 2021)

  • V0.9.3(Aug 29, 2021)

    • v<0.9.3>, <08/19/2021> -- Expand tests to Python 3.8 and 3.9.
    • v<0.9.3>, <08/29/2021> -- Add SUOD.

    In this version, SUOD is integrated into PyOD, and fast training/prediction is therefore possible. See https://github.com/yzhao062/pyod/blob/master/examples/suod_example.py for more information.

    Source code(tar.gz)
    Source code(zip)
  • V0.9.2(Aug 15, 2021)

    This release mainly features a new deep model, DeepSVDD, in PyOD.

    • v<0.9.2>, <08/15/2021> -- Fix ROD.
    • v<0.9.2>, <08/15/2021> -- Add DeepSVDD (implemented by Rafał Bodziony).

    Source code(tar.gz)
    Source code(zip)
  • V0.9.1(Aug 14, 2021)

    This release incorporates a few bug fixes and enhancements.

    • v<0.9.1>, <07/12/2021> -- Improve COPOD by dropping the pandas dependency.
    • v<0.9.1>, <07/19/2021> -- Add memory-efficient COF.
    • v<0.9.1>, <08/01/2021> -- Fix PyTorch Dataset issue.
    • v<0.9.1>, <08/14/2021> -- Synchronize scikit-learn LOF parameters.

    Source code(tar.gz)
    Source code(zip)
  • V0.9.0(Jul 7, 2021)

    • v<0.9.0>, <06/20/2021> -- Add clone test for models.
    • v<0.9.0>, <07/03/2021> -- ROD hot fix (#316).
    • v<0.9.0>, <07/04/2021> -- Improve COPOD plot with columns parameter.

    Source code(tar.gz)
    Source code(zip)
  • V0.8.9(Jun 12, 2021)

    • v<0.8.9>, <05/17/2021> -- Turn on tests for Python 3.5-3.8.
    • v<0.8.9>, <06/10/2021> -- Add PyTorch AutoEncoder.
    • v<0.8.9>, <06/11/2021> -- Fix LMDD parameter (#307).

    Source code(tar.gz)
    Source code(zip)
  • V0.8.8(Apr 27, 2021)

    • v<0.8.7>, <01/16/2021> -- Add ROD.
    • v<0.8.7>, <02/18/2021> -- Dependency optimization.
    • v<0.8.8>, <04/08/2021> -- COPOD optimization.
    • v<0.8.8>, <04/08/2021> -- Add parallelization for COPOD.
    • v<0.8.8>, <04/26/2021> -- Fix XGBOD issue with xgboost 1.4.

    Source code(tar.gz)
    Source code(zip)
  • V0.8.6(Jan 12, 2021)

    Most of the changes are bug fixes and performance enhancements.

    • v<0.8.5>, <12/22/2020> -- Refactor tests from sklearn to numpy.
    • v<0.8.5>, <12/22/2020> -- Refactor COPOD for consistency.
    • v<0.8.5>, <12/22/2020> -- Refactor due to sklearn 0.24 (issue #265).
    • v<0.8.6>, <01/09/2021> -- Improve COF speed (PR #159).
    • v<0.8.6>, <01/10/2021> -- Fix LMDD parameter inconsistency.
    • v<0.8.6>, <01/12/2021> -- Add option to specify feature names in the COPOD explanation plot (PR #261).

    Source code(tar.gz)
    Source code(zip)
  • V0.8.4(Nov 17, 2020)

    • v<0.8.4>, <10/13/2020> -- Fix COPOD code inconsistency (issue #239).
    • v<0.8.4>, <10/24/2020> -- Fix LSCP minor bug (issue #180).
    • v<0.8.4>, <11/02/2020> -- Add support for Tensorflow 2.
    • v<0.8.4>, <11/12/2020> -- Merge PR #!02 for categorical data generation.

    Source code(tar.gz)
    Source code(zip)
  • V0.8.3(Sep 19, 2020)

    • v<0.8.2>, <07/04/2020> -- Add a set of utility functions.
    • v<0.8.2>, <08/30/2020> -- Add COPOD and MAD algorithms.
    • v<0.8.3>, <09/01/2020> -- Make decision scores consistent.
    • v<0.8.3>, <09/19/2020> -- Add model persistence documentation (save and load).

    In short, we add two new algorithms, COPOD and MAD. Moreover, we now provide a short example regarding model save and load.

    Source code(tar.gz)
    Source code(zip)
  • V0.8.1(Jul 1, 2020)

    This is a stable release. Python 2 support will be dropped in the next version.

    • v<0.8.0>, <05/18/2020> -- Update test frameworks to reflect sklearn changes.
    • v<0.8.1>, <07/11/2020> -- Bug fix and documentation update.

    Source code(tar.gz)
    Source code(zip)
  • V0.7.9(May 4, 2020)

    • v<0.7.8.1>, <04/07/2020> -- Hot fix for SOD.
    • v<0.7.8.2>, <04/14/2020> -- Bug fix for LODA.
    • v<0.7.9>, <04/20/2020> -- Relax the number of n_neighbors in ABOD and COF.
    • v<0.7.9>, <05/01/2020> -- Extend vanilla VAE to Beta-VAE by Dr Andrij Vasylenko.
    • v<0.7.9>, <05/01/2020> -- Add Conda badge.

    Source code(tar.gz)
    Source code(zip)
  • V0.7.8(Mar 17, 2020)

    Various changes have been made in these two releases:

    • v<0.7.7>, <12/21/2019> -- Refactor code for combination simplification on combo.
    • v<0.7.7>, <12/21/2019> -- Extend combination methods by median and majority vote.
    • v<0.7.7>, <12/22/2019> -- Code optimization and documentation update.
    • v<0.7.7>, <12/22/2019> -- Enable continuous integration for Python 3.7.
    • v<0.7.7.1>, <12/29/2019> -- Minor update for SUOD and warning fixes.
    • v<0.7.8>, <01/05/2019> -- Documentation update.
    • v<0.7.8>, <01/30/2019> -- Bug fix for kNN (#158).
    • v<0.7.8>, <03/14/2020> -- Add VAE (implemented by Dr Andrij Vasylenko).
    • v<0.7.8>, <03/17/2020> -- Add LODA (adapted from tilitools).

    The major improvement includes the addition of VAE and LODA, along with multiple minor fixes.

    Source code(tar.gz)
    Source code(zip)
  • v0.7.6(Dec 19, 2019)

    • v<0.7.6>, <12/18/2019> -- Update Isolation Forest and LOF to be consistent with sklearn 0.22.
    • v<0.7.6>, <12/18/2019> -- Add Deviation-based Outlier Detection (LMDD).

    The major update is the compatibility fix for the newly released sklearn 0.22, plus the LMDD module built by @John-Almardeny.

    Source code(tar.gz)
    Source code(zip)
  • v0.7.5(Oct 13, 2019)

    This minor update includes the following items (mostly bug fixes and documentation improvements):

    • v<0.7.5>, <09/24/2019> -- Fix one-dimensional data error in LSCP.
    • v<0.7.5>, <10/13/2019> -- Document kNN and Isolation Forest's incoming changes.
    • v<0.7.5>, <10/13/2019> -- SOD optimization (created by John-Almardeny in June).
    • v<0.7.5>, <10/13/2019> -- Documentation updates.

    Source code(tar.gz)
    Source code(zip)
  • v0.7.0(Apr 30, 2019)

    Multiple bug fixes are introduced:

    • Fix issue in CBLOF for n_clusters discrepancy.
    • Fix issue #23 where kNN fails with Mahalanobis distance.
    • Fix for sklearn's new-behaviour FutureWarning.

    Improved documentation:

    • Update docs with media coverage.
    • Major documentation update for JMLR.
    • Add License info and show support to 996.ICU!
    • Redesign ReadMe for clarity.

    Deprecate two key APIs: fit_predict and fit_predict_score.

    Add some new utility functions, e.g., generate_data_clusters.

    Source code(tar.gz)
    Source code(zip)
  • v.0.6.7(Jan 29, 2019)

    This release further improves package stability and comprehensiveness.

    A set of new models are added:

    • LSCP: Locally Selective Combination of Parallel Outlier Ensembles
    • XGBOD: Extreme Boosting Based Outlier Detection (Supervised)
    • SO_GAAL: Single-Objective Generative Adversarial Active Learning
    • MO_GAAL: Multiple-Objective Generative Adversarial Active Learning

    Bug fixes are also included, e.g., CBLOF.

    Last but not least, a few functions/models are redesigned/optimized:

    • Docstrings are refactored to numpydoc
    • LOCI is optimized with numba
    • The visualize function is redesigned
    Source code(tar.gz)
    Source code(zip)
Owner
Yue Zhao
Look for S'22 Internship (ping me)! Ph.D. Student @ CMU. ML Systems (MLSys) | Anomaly/Outlier Detection | AutoML. Top 1000 GitHuber worldwide.