
xarray-einstats

Stats, linear algebra and einops for xarray

⚠️ Caution: This project is still in a very early development stage

Installation

To install, run

(.venv) $ pip install xarray-einstats

Overview

As stated on the xarray website:

xarray makes working with multi-dimensional labeled arrays simple, efficient and fun!

The code is often more verbose, but that is generally because it is clearer, more intuitive, and thus less error prone. Here are some examples of this trade-off:

numpy                 xarray
a[2, 5]               da.sel(drug="paracetamol", subject=5)
a.mean(axis=(0, 1))   da.mean(dim=("chain", "draw"))

In some other cases, however, using xarray can result in overly verbose code that is also less clear. xarray-einstats provides wrappers around some numpy and scipy functions (mostly numpy.linalg and scipy.stats) and around einops, with an API and features adapted to xarray.

⚠️ Attention: A nicer rendering of the content below is available in our documentation

Data for examples

The examples in this overview page use the DataArrays from the Dataset below (stored in the ds variable) to illustrate xarray-einstats features:

<xarray.Dataset>
Dimensions:  (dim_plot: 50, chain: 4, draw: 500, team: 6)
Coordinates:
  * chain    (chain) int64 0 1 2 3
  * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 492 493 494 495 496 497 498 499
  * team     (team) object 'Wales' 'France' 'Ireland' ... 'Italy' 'England'
Dimensions without coordinates: dim_plot
Data variables:
    x_plot   (dim_plot) float64 0.0 0.2041 0.4082 0.6122 ... 9.592 9.796 10.0
    atts     (chain, draw, team) float64 0.1063 -0.01913 ... -0.2911 0.2029
    sd_att   (draw) float64 0.272 0.2685 0.2593 0.2612 ... 0.4112 0.2117 0.3401
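
If you want to run the examples yourself, here is a minimal sketch that builds a Dataset with the same structure (random values, so the outputs below won't match exactly; the team name elided in the repr above is assumed to be Scotland):

import numpy as np
import xarray as xr

rng = np.random.default_rng(3)
# "Scotland" is an assumption for the team hidden by the ... in the repr
teams = ["Wales", "France", "Ireland", "Scotland", "Italy", "England"]
ds = xr.Dataset(
    {
        "x_plot": ("dim_plot", np.linspace(0, 10, 50)),
        "atts": (("chain", "draw", "team"), rng.normal(0, 0.3, size=(4, 500, 6))),
        "sd_att": ("draw", rng.uniform(0.2, 0.5, size=500)),
    },
    coords={"chain": np.arange(4), "draw": np.arange(500), "team": teams},
)
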
Stats

xarray-einstats provides two wrapper classes {class}xarray_einstats.XrContinuousRV and {class}xarray_einstats.XrDiscreteRV that can be used to wrap any distribution in {mod}scipy.stats so they accept {class}~xarray.DataArray as inputs.

We can evaluate the logpdf using inputs that wouldn't align if we were using numpy, in a couple of lines:

import scipy.stats
import xarray_einstats

norm_dist = xarray_einstats.XrContinuousRV(scipy.stats.norm)
norm_dist.logpdf(ds["x_plot"], ds["atts"], ds["sd_att"])

which returns:

array([[[[ 3.06470249e-01,  3.80373065e-01,  2.56575936e-01,
...
          -4.41658154e+02, -4.57599982e+02, -4.14709280e+02]]]])
Coordinates:
  * chain    (chain) int64 0 1 2 3
  * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 492 493 494 495 496 497 498 499
  * team     (team) object 'Wales' 'France' 'Ireland' ... 'Italy' 'England'
Dimensions without coordinates: dim_plot
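
For comparison, here is a sketch of the manual alignment the wrapper saves us from when working with the raw numpy arrays (the axis order has to be tracked by hand and the labels are lost):

import scipy.stats

# insert size-1 axes so the arrays broadcast as (chain, draw, team, dim_plot)
x = ds["x_plot"].values[None, None, None, :]   # (1, 1, 1, 50)
mu = ds["atts"].values[..., None]              # (4, 500, 6, 1)
sd = ds["sd_att"].values[None, :, None, None]  # (1, 500, 1, 1)
logpdf = scipy.stats.norm.logpdf(x, mu, sd)    # (4, 500, 6, 50), unlabeled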

einops

⚠️ Caution: Only rearrange is wrapped for now

einops uses a convenient notation inspired by Einstein notation to specify operations on multidimensional arrays. It uses spaces as delimiters between dimensions, parentheses to indicate splitting or stacking of dimensions, and -> to separate the input from the output dimension specification. xarray-einstats uses an adapted notation, then translates it to einops and calls {func}xarray.apply_ufunc under the hood.

Why change the notation? There are three main reasons, each concerning one of the elements above: ->, the space as delimiter, and the parentheses:

  • In xarray, dimensions are already labeled. In many cases, the left side of the einops notation is only used to label the dimensions. In fact, 5 of the 7 examples at https://einops.rocks/api/rearrange/ fall in this category. This is not necessary when working with xarray objects.
  • In xarray, dimension names can be any {term}xarray:hashable. xarray-einstats only supports strings as dimension names, but even strings can contain spaces, so the space can't be used as a delimiter.
  • In xarray, dimensions are labeled and their order doesn't matter. This might seem the same as the first reason, but it is not. When splitting or stacking dimensions you need (and want) the names of both the parent and the child dimensions. In some cases, for example stacking, we can autogenerate a default name, but in general you'll want to give a name to the new dimension. After all, dimension order in xarray doesn't matter, and there isn't much to be done without knowing the dimension names.

xarray-einstats uses two separate arguments, one for the input pattern (optional) and another for the output pattern. Each is a list of dimensions (strings) or dimension operations (lists or dictionaries). Some examples:

We can combine the chain and draw dimensions and name the resulting dimension sample using a list with a single dictionary. The team dimension is not present in the pattern and is not modified.

from xarray_einstats.einops import rearrange

rearrange(ds.atts, [{"sample": ("chain", "draw")}])

Out:


array([[ 0.10632395,  0.1538294 ,  0.17806237, ...,  0.16744257,
         0.14927569,  0.21803568],
         ...,
       [ 0.30447644,  0.22650416,  0.25523419, ...,  0.28405435,
         0.29232681,  0.20286656]])
Coordinates:
  * team     (team) object 'Wales' 'France' 'Ireland' ... 'Italy' 'England'
Dimensions without coordinates: sample

Note that, following xarray convention, new dimensions and dimensions that were operated on are moved to the end. This generally only matters when you access the underlying array with .values or .data, and you can always transpose using {meth}xarray.Dataset.transpose. You can also change the pattern to enforce the output dimension order:

rearrange(ds.atts, [{"sample": ("chain", "draw")}, "team"])

Out:


array([[ 0.10632395, -0.01912607,  0.13671159, -0.06754783, -0.46083807,
         0.30447644],
       ...,
       [ 0.21803568, -0.11394285,  0.09447937, -0.11032643, -0.29111234,
         0.20286656]])
Coordinates:
  * team     (team) object 'Wales' 'France' 'Ireland' ... 'Italy' 'England'
Dimensions without coordinates: sample
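
For this particular case, xarray's own {meth}xarray.DataArray.stack gets close (note it creates a MultiIndex and always places the new dimension at the end, whereas rearrange returns a regular dimension):

ds.atts.stack(sample=("chain", "draw"))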

Now to a more complicated pattern. We will split the chain and team dimensions, then combine the newly created dimensions with one another.

rearrange(
    ds.atts,
    # combine split chain and team dims between them
    # here we don't use a dict so the new dimensions get a default name
    out_dims=[("chain1", "team1"), ("team2", "chain2")],
    # use dicts to specify which dimensions to split, here we *need* to use a dict
    in_dims=[{"chain": ("chain1", "chain2")}, {"team": ("team1", "team2")}],
    # set the lengths of split dimensions as kwargs
    chain1=2, chain2=2, team1=2, team2=3
)

Out:


array([[[ 1.06323952e-01,  2.47005252e-01, -1.91260714e-02,
         -2.55769582e-02,  1.36711590e-01,  1.23165119e-01],
...
        [-2.76616968e-02, -1.10326428e-01, -3.99582340e-01,
         -2.91112341e-01,  1.90714405e-01,  2.02866563e-01]]])
Coordinates:
  * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 492 493 494 495 496 497 498 499
Dimensions without coordinates: chain1,team1, team2,chain2
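
For reference, a sketch of the raw einops call this roughly translates to on the underlying array, assuming its axis order is (chain, draw, team); with xarray-einstats that order doesn't need to be known:

import einops

arr = einops.rearrange(
    ds.atts.values,
    "(chain1 chain2) draw (team1 team2) -> draw (chain1 team1) (team2 chain2)",
    chain1=2, chain2=2, team1=2, team2=3,
)
arr.shape  # (500, 4, 6)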

More einops examples at {ref}einops

Linear Algebra

⚠️ Caution: Still missing in the package

There is no one-size-fits-all solution, but knowing the function we are wrapping, we can easily make the code more concise and clear. Without xarray-einstats, to invert a batch of matrices stored in a 4d array you have to do:

inv = xarray.apply_ufunc(   # output is a 4d labeled array
    numpy.linalg.inv,
    batch_of_matrices,      # input is a 4d labeled array
    input_core_dims=[["matrix_dim", "matrix_dim_bis"]],
    output_core_dims=[["matrix_dim", "matrix_dim_bis"]]
)

To calculate its norm instead, it becomes:

norm = xarray.apply_ufunc(  # output is a 2d labeled array
    numpy.linalg.norm,
    batch_of_matrices,      # input is a 4d labeled array
    input_core_dims=[["matrix_dim", "matrix_dim_bis"]],
)

With xarray-einstats, those operations become:

inv = xarray_einstats.inv(batch_of_matrices, dim=("matrix_dim", "matrix_dim_bis"))
norm = xarray_einstats.norm(batch_of_matrices, dim=("matrix_dim", "matrix_dim_bis"))
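
The snippets above assume batch_of_matrices is a labeled 4d DataArray; here is a minimal sketch to build one (names and sizes are illustrative):

import numpy as np
import xarray as xr

rng = np.random.default_rng(0)
# two batch dimensions plus the two matrix dimensions
batch_of_matrices = xr.DataArray(
    rng.normal(size=(4, 500, 3, 3)),
    dims=("chain", "draw", "matrix_dim", "matrix_dim_bis"),
)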

Similar projects

Here we list some similar projects we know of. Note that they are all complementary to xarray-einstats and don't overlap with it:

Comments
  • distutils.errors.DistutilsOptionError: No configuration found for dynamic 'description'.

    Build fails on FreeBSD:

    /usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py:102: _ExperimentalProjectMetadata: Support for project metadata in `pyproject.toml` is still experimental and may be removed (or change) in future releases.
      warnings.warn(msg, _ExperimentalProjectMetadata)
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "setup.py", line 1, in <module>
        import setuptools; setuptools.setup()
      File "/usr/local/lib/python3.8/site-packages/setuptools/__init__.py", line 87, in setup
        return distutils.core.setup(**attrs)
      File "/usr/local/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 122, in setup
        dist.parse_config_files()
      File "/usr/local/lib/python3.8/site-packages/setuptools/dist.py", line 854, in parse_config_files
        pyprojecttoml.apply_configuration(self, filename, ignore_option_errors)
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 54, in apply_configuration
        config = read_configuration(filepath, True, ignore_option_errors, dist)
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 134, in read_configuration
        return expand_configuration(asdict, root_dir, ignore_option_errors, dist)
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 189, in expand_configuration
        return _ConfigExpander(config, root_dir, ignore_option_errors, dist).expand()
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 236, in expand
        self._expand_all_dynamic(dist, package_dir)
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 271, in _expand_all_dynamic
        obtained_dynamic = {
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 272, in <dictcomp>
        field: self._obtain(dist, field, package_dir)
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 309, in _obtain
        self._ensure_previously_set(dist, field)
      File "/usr/local/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py", line 295, in _ensure_previously_set
        raise OptionError(msg)
    distutils.errors.DistutilsOptionError: No configuration found for dynamic 'description'.
    Some dynamic fields need to be specified via `tool.setuptools.dynamic`
    others must be specified via the equivalent attribute in `setup.py`.
    *** Error code 1
    

    Version: 0.2.2 Python-3.8 FreeBSD 13.1

    opened by yurivict 6
  • test stats on datasets

    If using the right subset of dimensions, the summary stats already work on datasets. This PR adds tests to make sure this behaviour keeps working. This is very convenient for MCMC output, for example taking the mad over the chain and draw dimensions of all the variables in a dataset at once.
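
    A sketch of the kind of call being tested (the exact function name and its dims argument are assumptions based on this description and the package's other scipy.stats wrappers):

    from xarray_einstats.stats import median_abs_deviation  # assumed import path

    # reduce the chain and draw dims of every variable in the dataset at once
    median_abs_deviation(ds, dims=("chain", "draw"))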

    opened by OriolAbril 2
  • update readme, index and install pages

    Update the installation page to add conda, and separate the readme and the index page to have slightly different content (expecting different audiences in each page)


    📚 Documentation preview 📚: https://xarray-einstats--33.org.readthedocs.build/en/33/

    opened by OriolAbril 1
  • Use Read the Docs action v1

    The Read the Docs repository was renamed from readthedocs/readthedocs-preview to readthedocs/actions. Now the preview action is under readthedocs/actions/preview and is tagged as v1.


    📚 Documentation preview 📚: https://xarray-einstats--31.org.readthedocs.build/en/31/

    opened by humitos 1
  • Catch positive definite error

    Even though the docstring from https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.multivariate_normal.html says

    Symmetric positive (semi)definite covariance matrix of the distribution.

    It also accepts matrices whose determinant is close to 0 but negative, generally due to numerical issues. We shouldn't expect users to solve those numerical issues themselves before passing the covariance matrix to the multivariate_normal class or methods. This PR adds a try/except to catch the error related to this issue and checks whether adding an identity matrix scaled by 1e-10 to the provided covariance matrix solves it.
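
    A minimal sketch of the jitter idea described here, using plain scipy rather than the PR's actual implementation:

    import numpy as np
    from scipy import stats

    def mvn_with_jitter(mean, cov, jitter=1e-10):
        """Build a frozen multivariate normal; if cov fails scipy's positive
        (semi)definite check, retry with a small identity jitter added."""
        try:
            return stats.multivariate_normal(mean, cov)
        except ValueError:
            cov = np.asarray(cov) + jitter * np.eye(len(mean))
            return stats.multivariate_normal(mean, cov)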

    cc @symeneses

    opened by OriolAbril 1
  • Check behaviour on groupby objects

    I think that most functions in the stats module can be used on xarray groupby objects, like az.hdi as shown here. If that is the case we should document that and add tests to prevent future changes from removing that feature.

    opened by OriolAbril 1
  • try preferred citation to add the doi in generated citation

    The cite this repository button generated by GitHub currently copies the following text to the clipboard:

    @software{Abril_Pla_xarray-einstats,
    author = {Abril Pla, Oriol},
    license = {Apache-2.0},
    title = {{xarray-einstats}},
    url = {https://github.com/arviz-devs/xarray-einstats}
    }
    

    which completely ignores the DOI even though it is provided as an identifier. There is also an option for "preferred citation" that can be used to point to an article instead. This PR tries to use it to still generate a software citation, but with the general Zenodo DOI for this repository.

    Note: Zenodo and releases are a cyclic dependency: the DOI is only generated once the release is crafted, so the released code can't include the right DOI.

    This branch currently generates this:

    @software{Abril-Pla_xarray-einstats_2022,
    author = {Abril-Pla, Oriol},
    doi = {10.5281/zenodo.5895451},
    license = {Apache-2.0},
    title = {{xarray-einstats}},
    url = {https://github.com/arviz-devs/xarray-einstats},
    year = {2022}
    }
    

    I will think about what should be present and update accordingly. Some info, like the publisher, is ignored (I guess for software-type citations) even if provided inside the preferred-citation section.

    opened by OriolAbril 1
  • Change the monkeypatch thing to using with contexts?

    It would be nice to be able to do things like

    with matrix_dims(["dim1", "dim3"]):
        chol = xe.linalg.cholesky(da)
        eig = xe.linalg.eig(da)
    

    instead of having to use the monkeypatch trick (currently documented) or needing to pass the dimensions every time.
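
    A sketch of how such a context manager could work (hypothetical: it assumes the linalg wrappers would read their default dims from a module-level variable):

    import contextlib

    _default_dims = None  # hypothetical module-level default read by the wrappers

    @contextlib.contextmanager
    def matrix_dims(dims):
        """Temporarily override the default matrix dimensions."""
        global _default_dims
        old = _default_dims
        _default_dims = dims
        try:
            yield
        finally:
            _default_dims = old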

    opened by OriolAbril 0
  • Tests fail: AttributeError: partially initialized module 'einops' has no attribute '_backends'

    collected 140 items / 2 errors                                                                                                                                                               
    
    =========================================================================================== ERRORS ===========================================================================================
    _________________________________________________________________ ERROR collecting src/xarray_einstats/tests/test_einops.py __________________________________________________________________
    tests/test_einops.py:6: in <module>
        from xarray_einstats.einops import raw_rearrange, raw_reduce, rearrange, reduce, translate_pattern
    einops.py:9: in <module>
        import einops
    einops.py:407: in <module>
        class DaskBackend(einops._backends.AbstractBackend):  # pylint: disable=protected-access
    E   AttributeError: partially initialized module 'einops' has no attribute '_backends' (most likely due to a circular import)
    __________________________________________________________________ ERROR collecting src/xarray_einstats/tests/test_numba.py __________________________________________________________________
    tests/test_numba.py:6: in <module>
        from xarray_einstats.numba import histogram
    numba.py:2: in <module>
        import numba
    numba.py:9: in <module>
        @numba.guvectorize(
    E   AttributeError: partially initialized module 'numba' has no attribute 'guvectorize' (most likely due to a circular import)
    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 2 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    ===================================================================================== 2 errors in 1.83s ======================================================================================
    *** Error code 2
    
    

    OS: FreeBSD 13.1

    opened by yurivict 3
  • Add logsumexp wrapper

    I think it is the only function in scipy.special worth wrapping, so it might be worth adding a misc module for this and other possible "loose ends" functions to wrap.
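
    A sketch of what such a wrapper could look like (a hypothetical helper built on {func}xarray.apply_ufunc, not the implementation that was merged):

    import xarray as xr
    from scipy.special import logsumexp

    def xr_logsumexp(da, dims):
        """Apply scipy.special.logsumexp over the given dims of a DataArray."""
        dims = [dims] if isinstance(dims, str) else list(dims)
        return xr.apply_ufunc(
            logsumexp,
            da,
            input_core_dims=[dims],
            kwargs={"axis": tuple(range(-len(dims), 0))},
        )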

    opened by OriolAbril 0
Releases (v0.4.0)
  • v0.4.0(Dec 9, 2022)

  • v0.3.0(Jun 19, 2022)

  • v0.2.2(Apr 2, 2022)

    Patch release to include the license and changelog files in the PyPI package, now using the PEP 621 metadata in pyproject.toml. Packaging the license is needed to add an xarray-einstats feedstock to conda-forge.

  • v0.2.1(Apr 2, 2022)

    Patch release to use a manifest file to include the license and changelog files in the PyPI package. Packaging the license is needed to add an xarray-einstats feedstock to conda-forge.

  • v0.2.0(Apr 1, 2022)

  • v0.1.1(Jan 24, 2022)

    Initial release of xarray_einstats.

    xarray_einstats extends array manipulation libraries for use with xarray. It starts with 4 modules:

    • linalg -> extends functionality from numpy.linalg module
    • stats -> extends functionality from scipy.stats module
    • einops -> extends einops library, which needs to be installed
    • numba -> miscellaneous extensions (only numpy.histogram for now) that need numba to accelerate and/or vectorize the functions. numba needs to be installed to use it

    v0.1.1 indicates the second try at uploading to PyPI.
