xarray-einstats

Stats, linear algebra and einops for xarray

⚠️ Caution: This project is still in a very early development stage

Installation

To install, run

(.venv) $ pip install xarray-einstats
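
A conda-forge feedstock is mentioned in the release notes below; assuming the package is available on that channel, the conda equivalent would be:

$ conda install -c conda-forge xarray-einstats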

Overview

As stated on their website:

xarray makes working with multi-dimensional labeled arrays simple, efficient and fun!

The code is often more verbose, but that is generally because it is clearer, more intuitive, and thus less error prone. Here are some examples of this trade-off:

numpy                  xarray
a[2, 5]                da.sel(drug="paracetamol", subject=5)
a.mean(axis=(0, 1))    da.mean(dim=("chain", "draw"))

In some other cases, however, using xarray can result in overly verbose code that often also becomes less clear. xarray-einstats provides wrappers around some numpy and scipy functions (mostly numpy.linalg and scipy.stats) and around einops with an API and features adapted to xarray.

⚠️ Attention: A nicer rendering of the content below is available at our documentation

Data for examples

The examples in this overview page use the DataArrays from the Dataset below (stored as the ds variable) to illustrate xarray-einstats features:

<xarray.Dataset>
Dimensions:  (dim_plot: 50, chain: 4, draw: 500, team: 6)
Coordinates:
  * chain    (chain) int64 0 1 2 3
  * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 492 493 494 495 496 497 498 499
  * team     (team) object 'Wales' 'France' 'Ireland' ... 'Italy' 'England'
Dimensions without coordinates: dim_plot
Data variables:
    x_plot   (dim_plot) float64 0.0 0.2041 0.4082 0.6122 ... 9.592 9.796 10.0
    atts     (chain, draw, team) float64 0.1063 -0.01913 ... -0.2911 0.2029
    sd_att   (draw) float64 0.272 0.2685 0.2593 0.2612 ... 0.4112 0.2117 0.3401

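For readers who want to run the examples, here is a minimal sketch that builds a Dataset with the same structure from random draws (the elided team name and the distributions used are assumptions, not the original data):

import numpy as np
import xarray as xr

rng = np.random.default_rng(0)
# "Scotland" fills the team name elided in the repr above; it is a guess
teams = ["Wales", "France", "Ireland", "Scotland", "Italy", "England"]
ds = xr.Dataset(
    {
        "x_plot": (("dim_plot",), np.linspace(0, 10, 50)),
        "atts": (("chain", "draw", "team"), rng.normal(0, 0.3, (4, 500, 6))),
        "sd_att": (("draw",), np.abs(rng.normal(0.3, 0.1, 500))),
    },
    coords={"chain": np.arange(4), "draw": np.arange(500), "team": teams},
)
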
Stats

xarray-einstats provides two wrapper classes, {class}xarray_einstats.XrContinuousRV and {class}xarray_einstats.XrDiscreteRV, that can be used to wrap any distribution in {mod}scipy.stats so that it accepts {class}~xarray.DataArray objects as inputs.

In a couple of lines, we can evaluate the logpdf using inputs that wouldn't align if we were using raw numpy:

import scipy.stats
import xarray_einstats

norm_dist = xarray_einstats.XrContinuousRV(scipy.stats.norm)
norm_dist.logpdf(ds["x_plot"], ds["atts"], ds["sd_att"])

which returns:

array([[[[ 3.06470249e-01,  3.80373065e-01,  2.56575936e-01,
...
          -4.41658154e+02, -4.57599982e+02, -4.14709280e+02]]]])
Coordinates:
  * chain    (chain) int64 0 1 2 3
  * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 492 493 494 495 496 497 498 499
  * team     (team) object 'Wales' 'France' 'Ireland' ... 'Italy' 'England'
Dimensions without coordinates: dim_plot

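For comparison, here is a sketch of the same computation without the wrapper, going through {func}xarray.apply_ufunc directly (this relies on scipy.stats.norm.logpdf taking x, loc and scale positionally):

import scipy.stats
import xarray as xr

# apply_ufunc aligns and broadcasts the labeled inputs before calling scipy
manual_logpdf = xr.apply_ufunc(
    scipy.stats.norm.logpdf, ds["x_plot"], ds["atts"], ds["sd_att"]
)
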
einops

⚠️ Attention: Only rearrange is wrapped for now

einops uses a convenient notation inspired by Einstein notation to specify operations on multidimensional arrays. It uses spaces as delimiters between dimensions, parentheses to indicate splitting or stacking of dimensions, and -> to separate the input from the output dimension specification. xarray-einstats uses an adapted notation that it then translates to einops, calling {func}xarray.apply_ufunc under the hood.

Why change the notation? There are three main reasons, each concerning one of the elements above: ->, the space as delimiter, and the parentheses:

  • In xarray, dimensions are already labeled. In many cases, the left side of the einops notation is used only to label the dimensions; in fact, 5/7 examples in https://einops.rocks/api/rearrange/ fall in this category. This is not necessary when working with xarray objects.
  • In xarray, dimension names can be any {term}xarray:hashable. xarray-einstats only supports strings as dimension names, but even so the space can't be used as a delimiter, since dimension names themselves may contain spaces.
  • In xarray, dimensions are labeled and their order doesn't matter. This might seem the same as the first reason, but it is not. When splitting or stacking dimensions you need (and want) the names of both parent and child dimensions. In some cases, such as stacking, we can autogenerate a default name, but in general you'll want to name the new dimension yourself. After all, dimension order in xarray doesn't matter, and there isn't much to be done without knowing the dimension names.

xarray-einstats uses two separate arguments, one for the input pattern (optional) and another for the output pattern. Each is a list of dimensions (strings) or dimension operations (lists or dictionaries). Some examples:

We can combine the chain and draw dimensions and name the resulting dimension sample using a list with a single dictionary. The team dimension is not present in the pattern and is not modified.

from xarray_einstats.einops import rearrange
rearrange(ds.atts, [{"sample": ("chain", "draw")}])

Out:

array([[ 0.10632395,  0.1538294 ,  0.17806237, ...,  0.16744257,
         0.14927569,  0.21803568],
         ...,
       [ 0.30447644,  0.22650416,  0.25523419, ...,  0.28405435,
         0.29232681,  0.20286656]])
Coordinates:
  * team     (team) object 'Wales' 'France' 'Ireland' ... 'Italy' 'England'
Dimensions without coordinates: sample

Note that, following xarray convention, new dimensions and the dimensions on which we operated are moved to the end. This only matters when you access the underlying array with .values or .data, and you can always transpose using {meth}xarray.Dataset.transpose, but it can matter. You can change the pattern to enforce the output dimension order:

rearrange(ds.atts, [{"sample": ("chain", "draw")}, "team"])

Out:

array([[ 0.10632395, -0.01912607,  0.13671159, -0.06754783, -0.46083807,
         0.30447644],
       ...,
       [ 0.21803568, -0.11394285,  0.09447937, -0.11032643, -0.29111234,
         0.20286656]])
Coordinates:
  * team     (team) object 'Wales' 'France' 'Ireland' ... 'Italy' 'England'
Dimensions without coordinates: sample

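Equivalently, you could keep the default pattern and reorder afterwards with the standard {meth}xarray.DataArray.transpose method (a sketch):

rearrange(ds.atts, [{"sample": ("chain", "draw")}]).transpose("sample", "team")
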
Now to a more complicated pattern. We will split the chain and team dimensions, then recombine the resulting halves with each other.

rearrange(
    ds.atts,
    # combine split chain and team dims between them
    # here we don't use a dict so the new dimensions get a default name
    out_dims=[("chain1", "team1"), ("team2", "chain2")],
    # use dicts to specify which dimensions to split, here we *need* to use a dict
    in_dims=[{"chain": ("chain1", "chain2")}, {"team": ("team1", "team2")}],
    # set the lengths of split dimensions as kwargs
    chain1=2, chain2=2, team1=2, team2=3
)

Out:

array([[[ 1.06323952e-01,  2.47005252e-01, -1.91260714e-02,
         -2.55769582e-02,  1.36711590e-01,  1.23165119e-01],
...
        [-2.76616968e-02, -1.10326428e-01, -3.99582340e-01,
         -2.91112341e-01,  1.90714405e-01,  2.02866563e-01]]])
Coordinates:
  * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 492 493 494 495 496 497 498 499
Dimensions without coordinates: chain1,team1, team2,chain2

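As the output above shows, unnamed stacking operations get autogenerated names like "chain1,team1". If you want explicit names, dictionaries work in out_dims too; a sketch (the names row and col are arbitrary choices, not part of the original example):

rearrange(
    ds.atts,
    out_dims=[{"row": ("chain1", "team1")}, {"col": ("team2", "chain2")}],
    in_dims=[{"chain": ("chain1", "chain2")}, {"team": ("team1", "team2")}],
    chain1=2, chain2=2, team1=2, team2=3,
)
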
More einops examples at {ref}einops

Linear Algebra

⚠️ Attention: Some functionality is still missing in the package

There is no one-size-fits-all solution, but knowing which function we are wrapping, we can easily make the code more concise and clear. Without xarray-einstats, to invert a batch of matrices stored in a 4d array you have to do:

import numpy
import xarray

inv = xarray.apply_ufunc(   # output is a 4d labeled array
    numpy.linalg.inv,
    batch_of_matrices,      # input is a 4d labeled array
    input_core_dims=[["matrix_dim", "matrix_dim_bis"]],
    output_core_dims=[["matrix_dim", "matrix_dim_bis"]]
)

To calculate its norm instead, it becomes:

norm = xarray.apply_ufunc(  # output is a 2d labeled array
    numpy.linalg.norm,
    batch_of_matrices,      # input is a 4d labeled array
    input_core_dims=[["matrix_dim", "matrix_dim_bis"]],
)

With xarray-einstats, those operations become:

inv = xarray_einstats.inv(batch_of_matrices, dim=("matrix_dim", "matrix_dim_bis"))
norm = xarray_einstats.norm(batch_of_matrices, dim=("matrix_dim", "matrix_dim_bis"))
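
For context, batch_of_matrices in these snippets is any DataArray whose core dimensions are matrix_dim and matrix_dim_bis. A sketch of building one and sanity-checking the inverse computed above (the random data and the conditioning trick are assumptions for illustration):

import numpy as np
import xarray as xr

rng = np.random.default_rng(0)
# add a scaled identity so every matrix in the batch is well conditioned
batch_of_matrices = xr.DataArray(
    rng.normal(size=(4, 500, 3, 3)) + 3 * np.eye(3),
    dims=["chain", "draw", "matrix_dim", "matrix_dim_bis"],
)

# sanity check: multiplying by the inverse should recover the identity
prod = xr.apply_ufunc(
    np.matmul,
    batch_of_matrices,
    inv,
    input_core_dims=[["matrix_dim", "matrix_dim_bis"], ["matrix_dim", "matrix_dim_bis"]],
    output_core_dims=[["matrix_dim", "matrix_dim_bis"]],
)
np.testing.assert_allclose(prod.values, np.broadcast_to(np.eye(3), prod.shape), atol=1e-6)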

Similar projects

Here we list some similar projects we know of. Note that all of them are complementary and don't overlap.

Comments
  • distutils.errors.DistutilsOptionError: No configuration found for dynamic 'description'.

    Build fails on FreeBSD:

    /usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py:102: _ExperimentalProjectMetadata: Support for project metadata in `pyproject.toml` is still experimental and may be removed (or change) in future releases.
      warnings.warn(msg, _ExperimentalProjectMetadata)
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "setup.py", line 1, in <module>
        import setuptools; setuptools.setup()
      File "/usr/local/lib/python3.8/site-packages/setuptools/__init__.py", line 87, in setup
        return distutils.core.setup(**attrs)
      File "/usr/local/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 122, in setup
        dist.parse_config_files()
      File "/usr/local/lib/python3.8/site-packages/setuptools/dist.py", line 854, in parse_config_files
        pyprojecttoml.apply_configuration(self, filename, ignore_option_errors)
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 54, in apply_configuration
        config = read_configuration(filepath, True, ignore_option_errors, dist)
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 134, in read_configuration
        return expand_configuration(asdict, root_dir, ignore_option_errors, dist)
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 189, in expand_configuration
        return _ConfigExpander(config, root_dir, ignore_option_errors, dist).expand()
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 236, in expand
        self._expand_all_dynamic(dist, package_dir)
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 271, in _expand_all_dynamic
        obtained_dynamic = {
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 272, in <dictcomp>
        field: self._obtain(dist, field, package_dir)
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 309, in _obtain
        self._ensure_previously_set(dist, field)
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 295, in _ensure_previously_set
        raise OptionError(msg)
    distutils.errors.DistutilsOptionError: No configuration found for dynamic 'description'.
    Some dynamic fields need to be specified via `tool.setuptools.dynamic`
    others must be specified via the equivalent attribute in `setup.py`.
    *** Error code 1
    

    Version: 0.2.2 Python-3.8 FreeBSD 13.1

    opened by yurivict 6
  • test stats on datasets

    If using the right subset of dimensions, the summary stats already work on datasets. This PR adds tests to make sure this behaviour always works. This is very convenient for MCMC output, for example, to take the mad over the chain and draw dimensions of all the variables in a dataset at once.

    opened by OriolAbril 2
  • update readme, index and install pages

    Update the installation page to add conda, and separate the readme and the index page to have slightly different content (expecting different audiences on each page).


    :books: Documentation preview :books:: https://xarray-einstats--33.org.readthedocs.build/en/33/

    opened by OriolAbril 1
  • Use Read the Docs action v1

    The Read the Docs repository was renamed from readthedocs/readthedocs-preview to readthedocs/actions/. Now, the preview action is under readthedocs/actions/preview and is tagged as v1.


    :books: Documentation preview :books:: https://xarray-einstats--31.org.readthedocs.build/en/31/

    opened by humitos 1
  • Catch positive definite error

    Even though the docstring from https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.multivariate_normal.html says

    Symmetric positive (semi)definite covariance matrix of the distribution.

    It also accepts matrices whose determinant is close to 0 but negative, generally due to numerical issues. Users shouldn't be expected to solve those numerical issues themselves before passing the covariance matrix to the multivariate_normal class or methods. This adds a try/except to catch the error related to this issue and checks whether adding an identity matrix scaled by 1e-10 to the provided covariance matrix solves the issue.
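
    A sketch of that approach (a hypothetical helper illustrating the described fix, not the actual patch):

    import numpy as np
    import scipy.stats

    def mvnormal_with_jitter(mean, cov):
        # try the covariance as provided; on failure, retry with a tiny diagonal jitter
        try:
            return scipy.stats.multivariate_normal(mean, cov)
        except ValueError:
            return scipy.stats.multivariate_normal(mean, cov + 1e-10 * np.eye(cov.shape[-1]))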

    cc @symeneses

    opened by OriolAbril 1
  • Check behaviour on groupby objects

    I think that most functions in the stats module can be used on xarray groupby objects, like az.hdi as shown here. If that is the case, we should document that and add tests to prevent future changes from removing that feature.

    opened by OriolAbril 1
  • try preferred citation to add the doi in generated citation

    The "cite this repository" button generated by GitHub currently copies the following text to the clipboard:

    @software{Abril_Pla_xarray-einstats,
    author = {Abril Pla, Oriol},
    license = {Apache-2.0},
    title = {{xarray-einstats}},
    url = {https://github.com/arviz-devs/xarray-einstats}
    }
    

    which completely ignores the DOI even though it is provided as an identifier. There is also an option for "preferred citation" that can be used to point to an article instead. This PR tries to use that to still generate a software citation, but with the general Zenodo DOI for this repository.

    Note: Zenodo and releases are a cyclic dependency. The DOI is only generated once the release is crafted, so the released code can't include the right DOI.

    This branch currently generates this:

    @software{Abril-Pla_xarray-einstats_2022,
    author = {Abril-Pla, Oriol},
    doi = {10.5281/zenodo.5895451},
    license = {Apache-2.0},
    title = {{xarray-einstats}},
    url = {https://github.com/arviz-devs/xarray-einstats},
    year = {2022}
    }
    

    I will think about what should be present and update accordingly. Some info, like the publisher, is ignored (I guess for software-type citations) even if provided inside the preferred-citation section.

    opened by OriolAbril 1
  • Change the monkeypatch thing to using with contexts?

    It would be nice to be able to do things like

    with matrix_dims(["dim1", "dim3"]):
        chol = xe.linalg.cholesky(da)
        eig = xe.linalg.eig(da)
    

    instead of having to use the monkeypatch trick (currently documented) or needing to pass the dimensions every time.

    opened by OriolAbril 0
  • Tests fail: AttributeError: partially initialized module 'einops' has no attribute '_backends'

    collected 140 items / 2 errors                                                                                                                                                               
    
    =========================================================================================== ERRORS ===========================================================================================
    _________________________________________________________________ ERROR collecting src/xarray_einstats/tests/test_einops.py __________________________________________________________________
    tests/test_einops.py:6: in <module>
        from xarray_einstats.einops import raw_rearrange, raw_reduce, rearrange, reduce, translate_pattern
    einops.py:9: in <module>
        import einops
    einops.py:407: in <module>
        class DaskBackend(einops._backends.AbstractBackend):  # pylint: disable=protected-access
    E   AttributeError: partially initialized module 'einops' has no attribute '_backends' (most likely due to a circular import)
    __________________________________________________________________ ERROR collecting src/xarray_einstats/tests/test_numba.py __________________________________________________________________
    tests/test_numba.py:6: in <module>
        from xarray_einstats.numba import histogram
    numba.py:2: in <module>
        import numba
    numba.py:9: in <module>
        @numba.guvectorize(
    E   AttributeError: partially initialized module 'numba' has no attribute 'guvectorize' (most likely due to a circular import)
    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 2 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    ===================================================================================== 2 errors in 1.83s ======================================================================================
    *** Error code 2
    
    

    OS: FreeBSD 13.1

    opened by yurivict 3
  • Add logsumexp wrapper

    I think it is the only function in scipy.special worth wrapping, so it might be worth adding a misc module for this and other possible "loose ends" functions to wrap.
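
    A minimal sketch of what such a wrapper could look like (an assumption, not a merged implementation):

    import scipy.special
    import xarray as xr

    def logsumexp(da, dims):
        # apply_ufunc moves `dims` to the end as core dims; reduce over those axes
        if isinstance(dims, str):
            dims = [dims]
        return xr.apply_ufunc(
            lambda a: scipy.special.logsumexp(a, axis=tuple(range(-len(dims), 0))),
            da,
            input_core_dims=[dims],
        )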

    opened by OriolAbril 0
Releases (v0.4.0)
  • v0.4.0(Dec 9, 2022)

  • v0.3.0(Jun 19, 2022)

  • v0.2.2(Apr 2, 2022)

    Patch release to include the license and changelog files in the PyPI package, now using the PEP 621 metadata in pyproject.toml. Packaging the license is needed to add an xarray-einstats feedstock to conda-forge.

  • v0.2.1(Apr 2, 2022)

    Patch release to use a manifest file to include the license and changelog files in the PyPI package. Packaging the license is needed to add an xarray-einstats feedstock to conda-forge.

  • v0.2.0(Apr 1, 2022)

  • v0.1.1(Jan 24, 2022)

    Initial release of xarray_einstats.

    xarray_einstats extends array manipulation libraries for use with xarray. It starts with 4 modules:

    • linalg -> extends functionality from numpy.linalg module
    • stats -> extends functionality from scipy.stats module
    • einops -> extends einops library, which needs to be installed
    • numba -> miscellaneous extensions (numpy.histogram for now only) that need numba to accelerate and/or vectorize the functions. numba needs to be installed to use it.

    v0.1.1 indicates the second try at uploading to PyPI.
