AntroPy: entropy and complexity of (EEG) time-series in Python

Overview


AntroPy is a Python 3 package providing several time-efficient algorithms for computing the complexity of time-series. It can be used, for example, to extract features from EEG signals.

Documentation

Installation

pip install antropy
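
AntroPy can also be installed from conda-forge (see the conda-forge issue under Comments below):

conda install -c conda-forge antropy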

Dependencies

AntroPy relies on the scientific Python stack: the import profile shown under "Speed up importing antropy" below pulls in NumPy, SciPy, scikit-learn, and Numba at load time.

Functions

Entropy

import numpy as np
import antropy as ant
np.random.seed(1234567)
x = np.random.normal(size=3000)
# Permutation entropy
print(ant.perm_entropy(x, normalize=True))
# Spectral entropy
print(ant.spectral_entropy(x, sf=100, method='welch', normalize=True))
# Singular value decomposition entropy
print(ant.svd_entropy(x, normalize=True))
# Approximate entropy
print(ant.app_entropy(x))
# Sample entropy
print(ant.sample_entropy(x))
# Hjorth mobility and complexity
print(ant.hjorth_params(x))
# Number of zero-crossings
print(ant.num_zerocross(x))
# Lempel-Ziv complexity
print(ant.lziv_complexity('01111000011001', normalize=True))
0.9995371694290871
0.9940882825422431
0.9999110978316078
2.015221318528564
2.198595813245399
(1.4313385010057378, 1.215335712274099)
1531
1.3597696150205727
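
The functions above operate on a single one-dimensional signal. For multi-channel data (e.g., one EEG channel per row), a simple approach is to apply them along an axis with NumPy; a minimal sketch (the data shape is illustrative):

import numpy as np
import antropy as ant
data = np.random.normal(size=(4, 3000))  # 4 channels x 3000 samples
# Permutation entropy of each channel
print(np.apply_along_axis(ant.perm_entropy, 1, data, normalize=True))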

Fractal dimension

# Petrosian fractal dimension
print(ant.petrosian_fd(x))
# Katz fractal dimension
print(ant.katz_fd(x))
# Higuchi fractal dimension
print(ant.higuchi_fd(x))
# Detrended fluctuation analysis
print(ant.detrended_fluctuation(x))
1.0310643385753608
5.954272156665926
2.005040632258251
0.47903505674073327
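
Complexity metrics are often computed over short sliding windows rather than a whole recording. A minimal sketch using non-overlapping windows (the window length is illustrative):

import numpy as np
import antropy as ant
np.random.seed(1234567)
x = np.random.normal(size=3000)
# Split into non-overlapping 500-sample windows and compute the
# Petrosian fractal dimension of each window
print([ant.petrosian_fd(w) for w in x.reshape(-1, 500)])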

Execution time

Here are some benchmarks computed on a MacBook Pro (2020).

import numpy as np
import antropy as ant
np.random.seed(1234567)
x = np.random.rand(1000)
# Entropy
%timeit ant.perm_entropy(x)
%timeit ant.spectral_entropy(x, sf=100)
%timeit ant.svd_entropy(x)
%timeit ant.app_entropy(x)  # Slow
%timeit ant.sample_entropy(x)  # Numba
# Fractal dimension
%timeit ant.petrosian_fd(x)
%timeit ant.katz_fd(x)
%timeit ant.higuchi_fd(x) # Numba
%timeit ant.detrended_fluctuation(x) # Numba
106 µs ± 5.49 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
138 µs ± 3.53 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
40.7 µs ± 303 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
2.44 ms ± 134 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
2.21 ms ± 35.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
23.5 µs ± 695 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
40.1 µs ± 2.09 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
13.7 µs ± 251 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
315 µs ± 10.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

Development

AntroPy was created and is maintained by Raphael Vallat. Contributions are more than welcome so feel free to contact me, open an issue or submit a pull request!

To see the code or report a bug, please visit the GitHub repository.

Note that this program is provided with NO WARRANTY OF ANY KIND. Always double check the results.

Acknowledgement

Several functions of AntroPy were adapted from:

All the credit goes to the authors of these excellent packages.

Comments
  • Improve performance in `_xlog2x`

    Improve performance in `_xlog2x`

    Follow up to #3

    Using np.nan_to_num is advantageous because it makes use of numpy's vectorization, instead of 'if x == 0', which applies the test pointwise.
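
    A minimal sketch of such a vectorized helper, assuming nonnegative inputs (the actual AntroPy implementation may differ):

        import numpy as np

        def _xlog2x(x):
            # Return x * log2(x), taking 0 * log2(0) = 0 (the limit as x -> 0).
            with np.errstate(divide="ignore", invalid="ignore"):
                # 0 * log2(0) evaluates to nan; nan_to_num maps it back to 0
                # while keeping the whole operation vectorized.
                return np.nan_to_num(x * np.log2(x), nan=0.0)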

    enhancement 
    opened by jftsang 7
  • modify the _embed function to fit the 2d input

    modify the _embed function to fit the 2d input

    Modify the _embed function so it can take a 2D array as input: pre-store the sliced signals in a list to speed up concatenation, pre-compute the slice indices to reduce work inside the loop, and use a vectorized operation in the loop to slice all input signals at once. A sketch of this idea follows the timings below.

    Performance (1000 time points per signal, delay = 1):

    1e3 signals, order = 3: 0.01 s
    1e4 signals, order = 3: 0.1 s; order = 10: 0.85 s
    1e5 signals, order = 3: 1.11 s; order = 10: 9.82 s
    5e5 signals, order = 3: 67 s
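
    For illustration, one way to vectorize time-delay embedding over a batch of signals (a sketch with a hypothetical embed_2d helper, not the PR's actual code; requires NumPy >= 1.20):

        import numpy as np
        from numpy.lib.stride_tricks import sliding_window_view

        def embed_2d(x, order=3, delay=1):
            # Time-delay embedding of a (n_signals, n_times) input.
            # Returns (n_signals, n_times - (order - 1) * delay, order).
            x = np.asarray(x)
            span = (order - 1) * delay + 1
            return sliding_window_view(x, span, axis=-1)[..., ::delay]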

    enhancement 
    opened by cheliu-computation 6
  • Handle the limit of p = 0 in p log2 p

    Handle the limit of p = 0 in p log2 p

    This patch defines a helper function, _xlog2x(x), that calculates x * log2(x) but handles the case x == 0 by returning 0 rather than nan. This is needed if the power spectrum has any component that is exactly zero: in particular, if the f = 0 component is zero.

    opened by jftsang 6
  • RuntimeWarning in _xlogx when x has zero values

    RuntimeWarning in _xlogx when x has zero values

    In the version currently on GitHub, _xlogx uses numpy.where to return valid results based on the condition x == 0. However, numpy.where still applies the log function to all values of x before discarding the values that meet the condition, resulting in runtime warnings.

    To avoid those issues, I would suggest changing the code to something like

        xlogx = np.zeros_like(x)
        valid = np.nonzero(x)
        xlogx[valid] = x[valid] * np.log(x[valid]) / np.log(base)
        return xlogx
    

    This strictly applies the function to the nonzero elements of x.

    If this looks good to you I could submit a PR. Let me know.

    enhancement 
    opened by guiweber 4
  • Fixed division by zero in linear regression function (with test)

    Fixed division by zero in linear regression function (with test)

    Hi,

    Just extending the information provided in the previous PR (https://github.com/raphaelvallat/antropy/pull/20), I provide a series of screenshots of the problem I was facing when computing the detrended fluctuation of my signals.

    See below one of the segments of my signal where the method fails:

    [Screenshot: signal segment where the method fails]

    Results of the tests with this signal:

    [Screenshot: test results for this signal]

    After the proposed solution:

    [Screenshot: test results after the proposed fix]

    I hope these new commits and test help to clarify the issue.

    Thanks, Tino

    enhancement 
    opened by Arritmic 3
  • conda-forge package

    conda-forge package

    Hello, I've added antropy to conda-forge; please let me know if you'd like to be added as a co-maintainer for the respective feedstock. It could also make sense to amend the installation instructions, WDYT?

    enhancement 
    opened by hoechenberger 3
  • Allow readonly arrays for higuchi_fd

    Allow readonly arrays for higuchi_fd

    The current behavior of this method changes the datatype of x as np.asarray is a wrapper for np.array where copy=False. (see here)

    I believe that this is (kind of) unexpected behavior, e.g., a user would not expect that the datatype would change when calculating a feature. Therefore, I suggest giving the user the option of not changing the datatype by adding a copy flag to the higuchi_fd function parameters. By default this flag = False, resulting in the same behavior as now (i.e., datatype of x is changed).

    When benchmarking the speed of the code, I observed no real difference. Perhaps we should even remove the flag and just use np.array instead of np.asarray?

    In [11]: x = np.random.rand(10_000).astype("float32")
    
    In [12]: %timeit ant.higuchi_fd(x)
    246 µs ± 5.24 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
    
    In [13]: x = np.random.rand(10_000).astype("float32")
    
    In [14]: %timeit ant.higuchi_fd(x, copy=True)
    242 µs ± 93.4 ns per loop (mean ± std. dev. of 7 runs, 1000 loops each)
    

    PS: I really like the fast functions in this library :smile:

    enhancement 
    opened by jvdd 3
  • The most "generic" entropy measure

    The most "generic" entropy measure

    Hi,

    Is there any review paper available that overviews the performance of different entropy measures which are implemented in this library for the actual electrophysiological data? Also, what would be the measure with the smallest number of non-optional parameters that is also guaranteed to work in most cases?

    Thank you!

    documentation question 
    opened by antelk 3
  • Fixed division by zero in linear regression function

    Fixed division by zero in linear regression function

    I have been facing problems when computing the detrended fluctuation analysis (DFA) with the function detrended_fluctuation(x) when the input array is relatively small (subwindows of windows).

    In some cases, len(fluctuations) = 1 causes den = 0 in the linear regression function. This fix solves the issue for me, giving the expected results.
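
    A minimal sketch of the kind of guard involved (with a hypothetical _linear_regression_slope helper; the actual fix in the PR may differ):

        import numpy as np

        def _linear_regression_slope(x, y):
            # Least-squares slope; return 0.0 instead of dividing by zero
            # when the denominator vanishes (e.g., a single data point).
            n = x.size
            den = n * np.sum(x**2) - np.sum(x) ** 2
            if den == 0:
                return 0.0
            return (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / den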

    bug enhancement 
    opened by Arritmic 2
  • Zero-crossings

    Zero-crossings

    Hi Raph,

    Was doing some cross-checking and I have a quick question to dispel a doubt in my mind regarding the counting of the number of inversions:

    https://github.com/raphaelvallat/antropy/blob/88fea895dc464fd075f634ac81f2ae4f46b60cac/antropy/entropy.py#L908

    Shouldn't it be: np.diff(np.signbit(np.diff( here? I.e., counting the changes in sign of the consecutive differences, rather than the difference of the sign of the consecutive samples 🤔
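
    For reference, a quick illustration of how the two countings differ (illustrative values only):

        import numpy as np

        x = np.array([1.0, 2.0, 1.0, 2.0, 1.0])
        # Sign changes between consecutive samples (zero-crossings): 0
        print(np.diff(np.signbit(x)).sum())
        # Sign changes between consecutive differences (inversions): 3
        print(np.diff(np.signbit(np.diff(x))).sum())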

    question 
    opened by DominiqueMakowski 2
  • Error importing with 32-bit Windows 7

    Error importing with 32-bit Windows 7

    Hi there,

    I've been playing with antropy on my main home machine and have come to use the same code on a 32-bit Windows 7 machine, which has incurred an import error.

    Currently using Python 3.8.10 32-bit. Can this be fixed, or is it likely I need to move to a 64-bit version?

    The traceback is as follows:

    Python 3.8.10 (tags/v3.8.10:3d8993a, May  3 2021, 11:34:34) [MSC v.1928 32 bit (Intel)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import antropy
    Traceback (most recent call last):
      File "C:\Python38\lib\site-packages\numba\core\errors.py", line 776, in new_error_context
        yield
      File "C:\Python38\lib\site-packages\numba\core\lowering.py", line 235, in lower_block
        self.lower_inst(inst)
      File "C:\Python38\lib\site-packages\numba\core\lowering.py", line 380, in lower_inst
        val = self.lower_assign(ty, inst)
      File "C:\Python38\lib\site-packages\numba\core\lowering.py", line 556, in lower_assign
        return self.lower_expr(ty, value)
      File "C:\Python38\lib\site-packages\numba\core\lowering.py", line 1084, in lower_expr
        res = self.lower_call(resty, expr)
      File "C:\Python38\lib\site-packages\numba\core\lowering.py", line 815, in lower_call
        res = self._lower_call_normal(fnty, expr, signature)
      File "C:\Python38\lib\site-packages\numba\core\lowering.py", line 1055, in _lower_call_normal
        res = impl(self.builder, argvals, self.loc)
      File "C:\Python38\lib\site-packages\numba\core\base.py", line 1194, in __call__
        res = self._imp(self._context, builder, self._sig, args, loc=loc)
      File "C:\Python38\lib\site-packages\numba\core\base.py", line 1224, in wrapper
        return fn(*args, **kwargs)
      File "C:\Python38\lib\site-packages\numba\np\unsafe\ndarray.py", line 31, in codegen
        res = _empty_nd_impl(context, builder, arrty, shapes)
      File "C:\Python38\lib\site-packages\numba\np\arrayobj.py", line 3468, in _empty_nd_impl
        arrlen_mult = builder.smul_with_overflow(arrlen, s)
      File "C:\Python38\lib\site-packages\llvmlite\ir\builder.py", line 50, in wrapped
        raise ValueError("Operands must be the same type, got (%s, %s)"
    ValueError: Operands must be the same type, got (i32, i64)

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "C:\Python38\lib\site-packages\antropy\__init__.py", line 4, in <module>
        from .fractal import *
      File "C:\Python38\lib\site-packages\antropy\fractal.py", line 304, in <module>
        def _dfa(x):
      File "C:\Python38\lib\site-packages\numba\core\decorators.py", line 226, in wrapper
        disp.compile(sig)
      File "C:\Python38\lib\site-packages\numba\core\dispatcher.py", line 979, in compile
        cres = self._compiler.compile(args, return_type)
      File "C:\Python38\lib\site-packages\numba\core\dispatcher.py", line 141, in compile
        status, retval = self._compile_cached(args, return_type)
      File "C:\Python38\lib\site-packages\numba\core\dispatcher.py", line 155, in _compile_cached
        retval = self._compile_core(args, return_type)
      File "C:\Python38\lib\site-packages\numba\core\dispatcher.py", line 168, in _compile_core
        cres = compiler.compile_extra(self.targetdescr.typing_context,
      File "C:\Python38\lib\site-packages\numba\core\compiler.py", line 686, in compile_extra
        return pipeline.compile_extra(func)
      File "C:\Python38\lib\site-packages\numba\core\compiler.py", line 428, in compile_extra
        return self._compile_bytecode()
      File "C:\Python38\lib\site-packages\numba\core\compiler.py", line 492, in _compile_bytecode
        return self._compile_core()
      File "C:\Python38\lib\site-packages\numba\core\compiler.py", line 471, in _compile_core
        raise e
      File "C:\Python38\lib\site-packages\numba\core\compiler.py", line 462, in _compile_core
        pm.run(self.state)
      File "C:\Python38\lib\site-packages\numba\core\compiler_machinery.py", line 343, in run
        raise patched_exception
      File "C:\Python38\lib\site-packages\numba\core\compiler_machinery.py", line 334, in run
        self._runPass(idx, pass_inst, state)
      File "C:\Python38\lib\site-packages\numba\core\compiler_lock.py", line 35, in _acquire_compile_lock
        return func(*args, **kwargs)
      File "C:\Python38\lib\site-packages\numba\core\compiler_machinery.py", line 289, in _runPass
        mutated |= check(pss.run_pass, internal_state)
      File "C:\Python38\lib\site-packages\numba\core\compiler_machinery.py", line 262, in check
        mangled = func(compiler_state)
      File "C:\Python38\lib\site-packages\numba\core\typed_passes.py", line 396, in run_pass
        lower.lower()
      File "C:\Python38\lib\site-packages\numba\core\lowering.py", line 138, in lower
        self.lower_normal_function(self.fndesc)
      File "C:\Python38\lib\site-packages\numba\core\lowering.py", line 192, in lower_normal_function
        entry_block_tail = self.lower_function_body()
      File "C:\Python38\lib\site-packages\numba\core\lowering.py", line 221, in lower_function_body
        self.lower_block(block)
      File "C:\Python38\lib\site-packages\numba\core\lowering.py", line 235, in lower_block
        self.lower_inst(inst)
      File "C:\Python38\lib\contextlib.py", line 131, in __exit__
        self.gen.throw(type, value, traceback)
      File "C:\Python38\lib\site-packages\numba\core\errors.py", line 786, in new_error_context
        raise newerr.with_traceback(tb)
    numba.core.errors.LoweringError: Failed in nopython mode pipeline (step: native lowering)
    Operands must be the same type, got (i32, i64)

    File "lib\site-packages\antropy\fractal.py", line 313:
    def _dfa(x):
        <source elided>

        for i_n, n in enumerate(nvals):
        ^

    During: lowering "array.70 = call empty_func.71(size_tuple.69, func=empty_func.71, args=(Var(size_tuple.69, fractal.py:313),), kws=[], vararg=None, target=None)" at C:\Python38\lib\site-packages\antropy\fractal.py (313)
    >>>
    
    invalid 
    opened by LMBooth 2
  • Modify the entropy functions to support vectorized computation

    Modify the entropy functions to support vectorized computation

    Hi, I have used your package to process ECG signals and it achieved good results in classifying different heart diseases. Thanks a lot!

    However, so far, these functions can only deal with one-dimensional signals like array(~, 1). May I take a try at modifying the code so it can process data like sklearn.preprocessing.scale(X, axis=xx)? It would be more efficient for big arrays, because we would not need to run a for loop or anything else.

    My email is [email protected], feel free to discuss it with me!

    enhancement 
    opened by cheliu-computation 2
  • Different results of different SampEn implementations

    Different results of different SampEn implementations

    My own implementation:

    import math
    import numpy as np
    from scipy.spatial.distance import pdist

    def sample_entropy(signal, m, r, dist_type='chebyshev', result=None, scale=None):
        # Check errors
        if m > len(signal):
            raise ValueError('Embedding dimension must be smaller than the signal length (m<N).')
        if len(signal) != signal.size:
            raise ValueError('The signal parameter must be a [Nx1] vector.')
        if not isinstance(dist_type, str):
            raise ValueError('Distance type must be a string.')
        if dist_type not in ['braycurtis', 'canberra', 'chebyshev', 'cityblock',
                             'correlation', 'cosine', 'dice', 'euclidean', 'hamming',
                             'jaccard', 'jensenshannon', 'kulsinski', 'mahalanobis',
                             'matching', 'minkowski', 'rogerstanimoto', 'russellrao',
                             'seuclidean', 'sokalmichener', 'sokalsneath', 'sqeuclidean', 'yule']:
            raise ValueError('Distance type unknown.')

        # Useful parameters
        N = len(signal)
        sigma = np.std(signal)
        templates_m = []
        templates_m_plus_one = []
        signal = np.squeeze(signal)

        for i in range(N - m + 1):
            templates_m.append(signal[i:i + m])

        B = np.sum(pdist(templates_m, metric=dist_type) <= sigma * r)
        if B == 0:
            value = math.inf
        else:
            m += 1
            for i in range(N - m + 1):
                templates_m_plus_one.append(signal[i:i + m])
            A = np.sum(pdist(templates_m_plus_one, metric=dist_type) <= sigma * r)

            if A == 0:
                value = math.inf
            else:
                A = A / len(templates_m_plus_one)
                B = B / len(templates_m)
                value = -np.log(A / B)

        # If A = 0 or B = 0, SampEn would return an infinite value.
        # However, the lowest non-zero conditional probability that SampEn
        # should report is A/B = 2/[(N-m-1)*(N-m)].
        if math.isinf(value):
            # Note: SampEn has the following limits:
            #   - Lower bound: 0
            #   - Upper bound: log(N-m) + log(N-m-1) - log(2)
            value = -np.log(2 / ((N - m - 1) * (N - m)))

        if result is not None:
            result[scale - 1] = value

        return value

    signal = np.random.rand(200)  # rand(200,1) in Matlab
    # Parameters: m = 1, r = 0.2


    Outputs:

    My implementation: 2.1812
    Implementation adapted: 2.1969
    NeuroKit2 entropy_sample function: 2.5316
    Your implementation: 2.2431
    Different implementation from GitHub: 1.0488

    invalid question 
    opened by dmarcos97 4
  • Speed up importing antropy

    Speed up importing antropy

    Create a file called import.py with the single line import antropy. On my machine (Linux VM), this takes at least 10 seconds to run.

    Using pyinstrument tells me that most of the time is spent importing numba. Is there any possibility of speeding this up? Seems like this is a known issue with numba, though: see e.g. https://github.com/numba/numba/issues/4927.

    $ pyinstrument import.py 
    
      _     ._   __/__   _ _  _  _ _/_   Recorded: 16:36:28  Samples:  7842
     /_//_/// /_\ / //_// / //_'/ //     Duration: 12.368    CPU time: 11.963
    /   _/                      v3.4.1
    
    Program: import.py
    
    12.368 <module>  import.py:1
    └─ 12.368 <module>  antropy/__init__.py:2
       ├─ 6.711 <module>  antropy/fractal.py:1
       │  └─ 6.711 wrapper  numba/core/decorators.py:191
       │        [14277 frames hidden]  numba, llvmlite, contextlib, pickle, ...
       ├─ 3.034 <module>  antropy/entropy.py:1
       │  ├─ 2.390 wrapper  numba/core/decorators.py:191
       │  │     [5009 frames hidden]  numba, abc, llvmlite, inspect, contex...
       │  └─ 0.522 <module>  sklearn/__init__.py:14
       │        [374 frames hidden]  sklearn, scipy, inspect, enum, numpy,...
       └─ 2.618 <module>  antropy/utils.py:1
          ├─ 1.584 wrapper  numba/core/decorators.py:191
          │     [5027 frames hidden]  numba, abc, functools, llvmlite, insp...
          ├─ 0.895 <module>  numba/__init__.py:3
          │     [1444 frames hidden]  numba, llvmlite, pkg_resources, warni...
          └─ 0.138 <module>  numpy/__init__.py:106
                [190 frames hidden]  numpy, pathlib, urllib, collections, ...
    
    To view this report with different options, run:
        pyinstrument --load-prev 2021-06-17T16-36-28 [options]
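
    One mitigation (a sketch, not AntroPy's current code) would be to avoid Numba's eager compilation: omitting explicit signatures defers compilation to the first call, and cache=True persists the compiled machine code to disk so later runs skip recompilation.

        from numba import jit

        # No explicit signature: compilation happens lazily on the first
        # call rather than at import time; cache=True reuses compiled
        # machine code across runs.
        @jit(nopython=True, cache=True)
        def square_sum(x):
            return (x * x).sum()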
    
    
    enhancement 
    opened by jftsang 4
  • Allow users to pass signal in frequency domain in spectral entropy

    Allow users to pass signal in frequency domain in spectral entropy

    Currently, antropy.spectral_entropy only allows x to be in time-domain. We should add freqs=None and psd=None as possible input if users want to calculate the spectral entropy of a pre-computed power spectrum. We should also add an example of how to calculate the spectral entropy from a multitaper power spectrum.
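
    For illustration, a hypothetical sketch of what such a psd input could compute internally, i.e. the normalized Shannon entropy of a pre-computed power spectrum (the helper name and normalization are assumptions):

        import numpy as np
        from scipy import signal

        def spectral_entropy_from_psd(psd, normalize=True):
            # Shannon entropy of the normalized power spectrum
            psd_norm = psd / psd.sum()
            se = -np.sum(psd_norm * np.log2(psd_norm + np.finfo(float).eps))
            if normalize:
                se /= np.log2(psd_norm.size)
            return se

        # e.g., from a Welch periodogram
        freqs, psd = signal.welch(np.random.rand(3000), fs=100)
        print(spectral_entropy_from_psd(psd))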

    enhancement 
    opened by raphaelvallat 0
Releases (v0.1.5)
  • v0.1.5 (Dec 17, 2022)

    This is a minor release.

    What's Changed

    • Handle the limit of p = 0 in p log2 p by @jftsang in https://github.com/raphaelvallat/antropy/pull/3
    • Correlation between entropy/FD metrics for data traces from Hodgkin-Huxley model by @antelk in https://github.com/raphaelvallat/antropy/pull/5
    • Fix docstrings and rerun by @antelk in https://github.com/raphaelvallat/antropy/pull/7
    • Improve performance in _xlog2x by @jftsang in https://github.com/raphaelvallat/antropy/pull/8
    • Prevent invalid operations in xlogx by @guiweber in https://github.com/raphaelvallat/antropy/pull/11
    • Allow readonly arrays for higuchi_fd by @jvdd in https://github.com/raphaelvallat/antropy/pull/13
    • modify the _embed function to fit the 2d input by @cheliu-computation in https://github.com/raphaelvallat/antropy/pull/15
    • Fixed division by zero in linear regression function (with test) by @Arritmic in https://github.com/raphaelvallat/antropy/pull/21
    • Add conda install instructions by @raphaelvallat in https://github.com/raphaelvallat/antropy/pull/19

    New Contributors

    • @jftsang made their first contribution in https://github.com/raphaelvallat/antropy/pull/3
    • @antelk made their first contribution in https://github.com/raphaelvallat/antropy/pull/5
    • @guiweber made their first contribution in https://github.com/raphaelvallat/antropy/pull/11
    • @jvdd made their first contribution in https://github.com/raphaelvallat/antropy/pull/13
    • @cheliu-computation made their first contribution in https://github.com/raphaelvallat/antropy/pull/15
    • @Arritmic made their first contribution in https://github.com/raphaelvallat/antropy/pull/21
    • @raphaelvallat made their first contribution in https://github.com/raphaelvallat/antropy/pull/19

    Full Changelog: https://github.com/raphaelvallat/antropy/compare/v0.1.4...v0.1.5

  • v0.1.4 (Apr 1, 2021)

Owner
Raphael Vallat
French research scientist specialized in sleep and dreaming | Strong interest in stats and signal processing | Python lover