Multiple Pairwise Comparisons (Post Hoc) Tests in Python

Overview

scikit-posthocs is a Python package providing post hoc tests for pairwise multiple comparisons, which are typically performed in statistical data analysis to assess the differences between group levels after a statistically significant result has been obtained in an ANOVA test.

scikit-posthocs is tightly integrated with Pandas DataFrames and NumPy arrays to ensure fast computations and convenient data import and storage.

This package will be useful for statisticians, data analysts, and researchers who use Python in their work.

Background

The Python statistical ecosystem comprises multiple packages. However, it still has numerous gaps and is surpassed by R and its packages in many areas.

SciPy (version 1.2.0) offers Student, Wilcoxon, and Mann-Whitney tests, but they are not adapted to multiple pairwise comparisons. Statsmodels (version 0.9.0) features a TukeyHSD test that requires some extra work to integrate into a data analysis pipeline. Statsmodels also has useful helper methods: allpairtest (adapts an external function, such as scipy.stats.ttest_ind, to multiple pairwise comparisons) and multipletests (adjusts p values to minimize type I and II errors). PMCMRplus is a very good R package that has no rivals in Python, as it offers more than 40 tests (including post hoc tests) for factorial and block design data. PMCMRplus was an inspiration and a reference for scikit-posthocs.
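
As an illustration of those statsmodels helpers, here is a minimal sketch that combines scipy.stats.ttest_ind with multipletests (the groups and values are made up for illustration):

import itertools as it
import numpy as np
import scipy.stats as ss
from statsmodels.stats.multitest import multipletests

# Three made-up groups
rng = np.random.default_rng(0)
groups = {'a': rng.normal(0.0, 1.0, 30),
          'b': rng.normal(0.5, 1.0, 30),
          'c': rng.normal(1.0, 1.0, 30)}

# Raw pairwise p values from the two-sample t-test
pairs = list(it.combinations(groups, 2))
raw_p = [ss.ttest_ind(groups[i], groups[j]).pvalue for i, j in pairs]

# Holm adjustment to control the family-wise error rate
reject, adj_p, _, _ = multipletests(raw_p, method='holm')
for (i, j), p in zip(pairs, adj_p):
    print(i, j, p)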

scikit-posthocs attempts to improve Python's statistical capabilities by offering many parametric and nonparametric post hoc tests, along with outlier detection and basic plotting methods.

Features

Tests Flowchart

  • Omnibus tests:
    • Durbin test (for BIBD).
  • Parametric pairwise multiple comparisons tests:
    • Scheffe test.
    • Student T test.
    • Tamhane T2 test.
    • TukeyHSD test.
  • Non-parametric tests for factorial design:
    • Conover test.
    • Dunn test.
    • Dwass, Steel, Critchlow, and Fligner test.
    • Mann-Whitney test.
    • Nashimoto and Wright (NPM) test.
    • Nemenyi test.
    • van der Waerden test.
    • Wilcoxon test.
  • Non-parametric tests for block design:
    • Conover test.
    • Durbin and Conover test.
    • Miller test.
    • Nemenyi test.
    • Quade test.
    • Siegel test.
  • Other tests:
    • Anderson-Darling test.
    • Mack-Wolfe test.
    • Hayter (OSRT) test.
  • Outlier detection tests:
    • Simple test based on interquartile range (IQR).
    • Grubbs test.
    • Tietjen-Moore test.
    • Generalized Extreme Studentized Deviate test (ESD test).
  • Plotting functionality (e.g. significance plots).

All post hoc tests support p-value adjustments for multiple pairwise comparisons.
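
The outlier detection functions can also be used on their own. A minimal sketch with made-up data (by default these functions return the filtered data; see the API reference for other return options):

import numpy as np
import scikit_posthocs as sp

x = np.array([2.1, 2.4, 2.3, 2.2, 2.5, 9.7])

print(sp.outliers_iqr(x))               # IQR-based filtering
print(sp.outliers_grubbs(x))            # Grubbs test
print(sp.outliers_gesd(x, outliers=2))  # generalized ESD test, up to 2 outliers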

Dependencies

  • NumPy and SciPy
  • Statsmodels
  • Pandas
  • Matplotlib
  • Seaborn

Compatibility

The package is compatible with Python 3 only.

Install

You can install the package from PyPI:

$ pip install scikit-posthocs
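
If you use conda, the package should also be installable from the conda-forge channel:

$ conda install -c conda-forge scikit-posthocs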

Examples

Parametric ANOVA with post hoc tests

Here is a simple example of a one-way analysis of variance (ANOVA) with post hoc tests used to compare sepal width means of three groups (three iris species) in the iris dataset.

To begin, we will import the dataset using the statsmodels get_rdataset() function.

>>> import statsmodels.api as sa
>>> import statsmodels.formula.api as sfa
>>> import scikit_posthocs as sp
>>> df = sa.datasets.get_rdataset('iris').data
>>> df.columns = df.columns.str.replace('.', '', regex=False)
>>> df.head()
    SepalLength   SepalWidth   PetalLength   PetalWidth Species
0           5.1          3.5           1.4          0.2  setosa
1           4.9          3.0           1.4          0.2  setosa
2           4.7          3.2           1.3          0.2  setosa
3           4.6          3.1           1.5          0.2  setosa
4           5.0          3.6           1.4          0.2  setosa

Now we will build a model and run ANOVA using the statsmodels ols() and anova_lm() functions. The Species and SepalWidth columns contain the independent (predictor) and dependent (response) variable values, respectively.

>>> lm = sfa.ols('SepalWidth ~ C(Species)', data=df).fit()
>>> anova = sa.stats.anova_lm(lm)
>>> print(anova)
               df     sum_sq   mean_sq         F        PR(>F)
C(Species)    2.0  11.344933  5.672467  49.16004  4.492017e-17
Residual    147.0  16.962000  0.115388       NaN           NaN

The results tell us that there is a significant difference between group means (p = 4.49e-17), but they do not tell us which exact group pairs differ in their means. To obtain pairwise group differences, we will carry out a posteriori (post hoc) analysis using the scikit-posthocs package. A Student t-test applied pairwise gives the following p values:

>>> sp.posthoc_ttest(df, val_col='SepalWidth', group_col='Species', p_adjust='holm')
                  setosa    versicolor     virginica
setosa     -1.000000e+00  5.535780e-15  8.492711e-09
versicolor  5.535780e-15 -1.000000e+00  1.819100e-03
virginica   8.492711e-09  1.819100e-03 -1.000000e+00

Remember to use a FWER-controlling procedure, such as the Holm procedure, when making multiple comparisons. As seen from this table, significant differences in group means are found for all group pairs.
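
Since the post hoc functions return a symmetric DataFrame of p values, significant pairs can be extracted with ordinary pandas operations. A minimal sketch (re-running the test above and keeping its result; the 0.05 threshold is only for illustration):

>>> import numpy as np
>>> pc = sp.posthoc_ttest(df, val_col='SepalWidth', group_col='Species', p_adjust='holm')
>>> mask = np.triu(np.ones(pc.shape, dtype=bool), k=1)  # each pair once, no diagonal
>>> pairs = pc.where(mask).stack()
>>> pairs[pairs < 0.05]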

Non-parametric ANOVA with post hoc tests

If normality and other assumptions are violated, one can use the non-parametric Kruskal-Wallis H test (one-way non-parametric ANOVA) to test whether samples came from the same distribution.

Let's use the same dataset to demonstrate the procedure. The Kruskal-Wallis test is implemented in the SciPy package: the scipy.stats.kruskal function accepts array-like structures, but not DataFrames.

>>> import scipy.stats as ss
>>> import statsmodels.api as sa
>>> import scikit_posthocs as sp
>>> df = sa.datasets.get_rdataset('iris').data
>>> df.columns = df.columns.str.replace('.', '', regex=False)
>>> data = [df.loc[ids, 'SepalWidth'].values for ids in df.groupby('Species').groups.values()]

data is a list of one-dimensional arrays containing sepal width values, one array per species. Now we can run the Kruskal-Wallis analysis of variance.

>>> H, p = ss.kruskal(*data)
>>> p
1.5692820940316782e-14

The p value tells us we may reject the null hypothesis that the population medians of all the groups are equal. To learn which groups (species) differ in their medians, we need to run post hoc tests. scikit-posthocs provides many of the non-parametric tests mentioned above. Let's choose Conover's test.

>>> sp.posthoc_conover(df, val_col='SepalWidth', group_col='Species', p_adjust = 'holm')
                  setosa    versicolor     virginica
setosa     -1.000000e+00  2.278515e-18  1.293888e-10
versicolor  2.278515e-18 -1.000000e+00  1.881294e-03
virginica   1.293888e-10  1.881294e-03 -1.000000e+00

Pairwise comparisons show that we may reject the null hypothesis (p < 0.01) for each pair of species and conclude that all groups (species) differ in their sepal widths.

Block design

In the block design case, we have a primary factor (e.g. treatment) and a blocking factor (e.g. age or gender). A blocking factor is also called a nuisance factor, and it is usually a source of variability that needs to be accounted for.

An example scenario is testing the effect of four fertilizers on crop yield in four cornfields. We can represent the results with a matrix in which rows correspond to the blocking factor (field) and columns correspond to the primary factor (fertilizer).

The following dataset is artificial and created just for demonstration of the procedure:

>>> import numpy as np
>>> data = np.array([[ 8.82, 11.8 , 10.37, 12.08],
                     [ 8.92,  9.58, 10.59, 11.89],
                     [ 8.27, 11.46, 10.24, 11.6 ],
                     [ 8.83, 13.25,  8.33, 11.51]])

First, we need to perform an omnibus test, the Friedman rank sum test, which is implemented in the scipy.stats subpackage:

>>> import scipy.stats as ss
>>> ss.friedmanchisquare(*data.T)
FriedmanchisquareResult(statistic=8.700000000000003, pvalue=0.03355726870553798)

We can reject the null hypothesis that our treatments have the same distribution, because the p value is less than 0.05. A number of post hoc tests are available in the scikit-posthocs package for unreplicated block design data. In the following example, Nemenyi's test is used:

>>> import scikit_posthocs as sp
>>> sp.posthoc_nemenyi_friedman(data)
          0         1         2         3
0 -1.000000  0.220908  0.823993  0.031375
1  0.220908 -1.000000  0.670273  0.823993
2  0.823993  0.670273 -1.000000  0.220908
3  0.031375  0.823993  0.220908 -1.000000

This function returns a DataFrame of p values obtained from pairwise comparisons between all treatments. One can also pass a DataFrame and specify the names of the columns containing the dependent variable values and the blocking and primary factor values. The following code creates a DataFrame with the same data:

>>> import pandas as pd
>>> data = pd.DataFrame.from_dict(
        {'blocks': {0: 0, 1: 1, 2: 2, 3: 3, 4: 0, 5: 1, 6: 2, 7: 3,
                    8: 0, 9: 1, 10: 2, 11: 3, 12: 0, 13: 1, 14: 2, 15: 3},
         'groups': {0: 0, 1: 0, 2: 0, 3: 0, 4: 1, 5: 1, 6: 1, 7: 1,
                    8: 2, 9: 2, 10: 2, 11: 2, 12: 3, 13: 3, 14: 3, 15: 3},
         'y': {0: 8.82, 1: 8.92, 2: 8.27, 3: 8.83, 4: 11.8, 5: 9.58,
               6: 11.46, 7: 13.25, 8: 10.37, 9: 10.59, 10: 10.24,
               11: 8.33, 12: 12.08, 13: 11.89, 14: 11.6, 15: 11.51}})
>>> data
    blocks  groups      y
0        0       0   8.82
1        1       0   8.92
2        2       0   8.27
3        3       0   8.83
4        0       1  11.80
5        1       1   9.58
6        2       1  11.46
7        3       1  13.25
8        0       2  10.37
9        1       2  10.59
10       2       2  10.24
11       3       2   8.33
12       0       3  12.08
13       1       3  11.89
14       2       3  11.60
15       3       3  11.51

This is a melted, ready-to-use DataFrame. Do not forget to pass the melted argument:

>>> sp.posthoc_nemenyi_friedman(data, y_col='y', block_col='blocks', group_col='groups', melted=True)
          0         1         2         3
0 -1.000000  0.220908  0.823993  0.031375
1  0.220908 -1.000000  0.670273  0.823993
2  0.823993  0.670273 -1.000000  0.220908
3  0.031375  0.823993  0.220908 -1.000000
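
A wide (unmelted) DataFrame, with blocks as rows and groups as columns, can also be passed directly, just as with the ndarray above. For instance, pivoting the melted frame from this example back into wide form should give the same result:

>>> wide = data.pivot(index='blocks', columns='groups', values='y')
>>> sp.posthoc_nemenyi_friedman(wide)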

Data types

Internally, scikit-posthocs uses NumPy ndarrays and pandas DataFrames to store and process data. Python lists, NumPy ndarrays, and pandas DataFrames are supported as input data types. Below are usage examples of various input data structures.

Lists and arrays

>>> import numpy as np
>>> import scikit_posthocs as sp
>>> x = [[1,2,1,3,1,4], [12,3,11,9,3,8,1], [10,22,12,9,8,3]]
>>> # or
>>> x = np.array([[1,2,1,3,1,4], [12,3,11,9,3,8,1], [10,22,12,9,8,3]])
>>> sp.posthoc_conover(x, p_adjust='holm')
          1         2         3
1 -1.000000  0.057606  0.007888
2  0.057606 -1.000000  0.215761
3  0.007888  0.215761 -1.000000

You can check how the input is processed with the internal function __convert_to_df():

>>> sp.__convert_to_df(x)
(    vals  groups
 0      1       1
 1      2       1
 2      1       1
 3      3       1
 4      1       1
 5      4       1
 6     12       2
 7      3       2
 8     11       2
 9      9       2
 10     3       2
 11     8       2
 12     1       2
 13    10       3
 14    22       3
 15    12       3
 16     9       3
 17     8       3
 18     3       3, 'vals', 'groups')

It returns a tuple of a DataFrame representation and the names of the columns containing the dependent (vals) and independent (groups) variable values.
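
Equivalently, you can build such a long-format DataFrame yourself and pass the column names explicitly. A sketch based on the list x defined above (the column names are arbitrary):

>>> import pandas as pd
>>> long = pd.DataFrame({'vals': np.concatenate(x),
                         'groups': np.repeat([1, 2, 3], [len(g) for g in x])})
>>> sp.posthoc_conover(long, val_col='vals', group_col='groups', p_adjust='holm')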

A block design matrix passed as a NumPy ndarray is processed with the internal __convert_to_block_df() function:

>>> data = np.array([[ 8.82, 11.8 , 10.37, 12.08],
                     [ 8.92,  9.58, 10.59, 11.89],
                     [ 8.27, 11.46, 10.24, 11.6 ],
                     [ 8.83, 13.25,  8.33, 11.51]])
>>> sp.__convert_to_block_df(data)
(    blocks groups      y
 0        0      0   8.82
 1        1      0   8.92
 2        2      0   8.27
 3        3      0   8.83
 4        0      1  11.80
 5        1      1   9.58
 6        2      1  11.46
 7        3      1  13.25
 8        0      2  10.37
 9        1      2  10.59
 10       2      2  10.24
 11       3      2   8.33
 12       0      3  12.08
 13       1      3  11.89
 14       2      3  11.60
 15       3      3  11.51, 'y', 'groups', 'blocks')

DataFrames

If you are using DataFrames, you need to pass the names of the columns containing the variable values to a post hoc function:

>>> import statsmodels.api as sa
>>> import scikit_posthocs as sp
>>> df = sa.datasets.get_rdataset('iris').data
>>> df.columns = df.columns.str.replace('.', '', regex=False)
>>> sp.posthoc_conover(df, val_col='SepalWidth', group_col='Species', p_adjust='holm')

The val_col and group_col arguments specify the names of the columns containing the dependent (response) and independent (grouping) variable values.

Significance plots

P values can be plotted using a heatmap:

>>> pc = sp.posthoc_conover(x, val_col='values', group_col='groups')
>>> heatmap_args = {'linewidths': 0.25, 'linecolor': '0.5', 'clip_on': False, 'square': True, 'cbar_ax_bbox': [0.80, 0.35, 0.04, 0.3]}
>>> sp.sign_plot(pc, **heatmap_args)

images/plot-conover.png

A custom colormap can be applied to the plot:

>>> pc = sp.posthoc_conover(x, val_col='values', group_col='groups')
>>> # Format: diagonal, non-significant, p<0.001, p<0.01, p<0.05
>>> cmap = ['1', '#fb6a4a',  '#08306b',  '#4292c6', '#c6dbef']
>>> heatmap_args = {'cmap': cmap, 'linewidths': 0.25, 'linecolor': '0.5', 'clip_on': False, 'square': True, 'cbar_ax_bbox': [0.80, 0.35, 0.04, 0.3]}
>>> sp.sign_plot(pc, **heatmap_args)

images/plot-conover-custom-cmap.png
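
sign_plot draws with matplotlib, so the figure can be saved like any other matplotlib figure. A sketch (the file name is arbitrary):

>>> import matplotlib.pyplot as plt
>>> sp.sign_plot(pc, **heatmap_args)
>>> plt.savefig('sign_plot.png', dpi=200, bbox_inches='tight')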

Citing

If you want to cite scikit-posthocs, please refer to the publication in the Journal of Open Source Software:

Terpilowski, M. (2019). scikit-posthocs: Pairwise multiple comparison tests in Python. Journal of Open Source Software, 4(36), 1169, https://doi.org/10.21105/joss.01169

@ARTICLE{Terpilowski2019,
  title    = {scikit-posthocs: Pairwise multiple comparison tests in Python},
  author   = {Terpilowski, Maksim},
  journal  = {The Journal of Open Source Software},
  volume   = {4},
  number   = {36},
  pages    = {1169},
  year     = {2019},
  doi      = {10.21105/joss.01169}
}

Acknowledgement

Thorsten Pohlert, PMCMR author and maintainer

Comments
  • Added norm=colors.NoNorm() to ColorbarBase for Significance Plots

    A quick fix for Issue #51 is adding norm=colors.NoNorm() to ColorbarBase. This should probably be double-checked to ensure the behaviour is correct in all cases.

    import matplotlib.pyplot as plt
    import scikit_posthocs as sp
    import statsmodels.api as sa
    
    x = sa.datasets.get_rdataset('iris').data
    x.columns = x.columns.str.replace('.', '', regex=False)
    pc = sp.posthoc_ttest(x, val_col='SepalWidth', group_col='Species', p_adjust='holm')
    heatmap_args = {'linewidths': 0.25, 'linecolor': '0.5', 'clip_on': False, 'square': True,
                    'cbar_ax_bbox': [0.82, 0.35, 0.04, 0.3]}
    sp.sign_plot(pc, **heatmap_args)
    plt.show()
    

    [attached screenshot: expected]

    opened by MattyB95 6
  • function outliers_gesd has a bug when outliers > 1

    Describe the bug: outliers_gesd has a bug. When the number of outliers increases, the size of the abs_d NumPy array decreases.

    In file '_outliers.py':

        # Masked values
        lms = ms[-1] if len(ms) > 0 else []
        ms.append(lms + [np.argmax(abs_d)])
    

    abs_d is no longer the same size as data, so np.argmax(abs_d) is not the true outlier index in the data NumPy array.

    bug 
    opened by zhoul14 6
  • Results differ slightly from PMCMR

    First off, congrats on the great idea of porting this to Python. I just ran it on R's InsectSprays data (with no p_adjust method) and I'm getting slightly different p-values than those from PMCMR for a couple of the group comparisons (B & C and F & C). Do you know why that might be? Thanks!

    question 
    opened by dyballa 6
  • posthoc_dscf Calculation

    In the definition of posthoc_dscf, ni and nj have been swapped. Current:

    def posthoc_dscf(a, val_col=None, group_col=None, sort=False):
        ...
        def compare(i, j):
            ...
            u = np.array([nj * ni + (nj * (nj + 1) / 2),
                          nj * ni + (ni * (ni + 1) / 2)]) - r
            ...

    Correct:

     u = np.array([nj * ni + (ni * (ni + 1) / 2),
                   nj * ni + (nj * (nj + 1) / 2)]) - r
    
    bug 
    opened by Tsuchihashi-ryo 5
  • sign_plot significance order

    Hi there!

    Thanks for the nice library!

    A small suggestion for the sign_plot method: I believe it's better to put the legend in the order ['NS', 'p<0.05', 'p<0.01', 'p<0.001'] rather than the current version with 'NS' at the end, because 'NS' is the situation where p>0.05, and it's more logical to sort the colormap in either descending or ascending order.

    Best

    enhancement 
    opened by tarikaltuncu 5
  • Post-hocs test for dataframes with different group / block / y column names break

    Hi,

    I cannot use post hoc tests for dataframes with melted=True and group_col != 'groups', block_col != 'blocks', and y_col != 'y'. Basically, anything which deviates from the example

    sp.posthoc_nemenyi_friedman(data, y_col='y', block_col='blocks', group_col='groups', melted=True)
    

    breaks the code. The error is likely due to __convert_to_block_df (https://github.com/maximtrp/scikit-posthocs/blob/master/scikit_posthocs/_posthocs.py), which returns the old y_col, group_col, block_col values but assigns the column names "groups" / "blocks" / "y":

    def __convert_to_block_df(a, y_col=None, group_col=None, block_col=None, melted=False):
        # ...
        elif isinstance(a, DataFrame) and melted:
            x = DataFrame.from_dict({'groups': a[group_col],
                                     'blocks': a[block_col],
                                     'y': a[y_col]})
        # ...
        return x, y_col, group_col, block_col
    

    On a somewhat related note: I wanted to implement/use these tests to plot CD diagrams as suggested in "J. Demsar (2006), Statistical comparisons of classifiers over multiple data sets, Journal of Machine Learning Research, 7, 1-30," which you also cite in the documentation. However, I have a difficult time understanding what "blocks", "groups", and "y" mean in this context. More specifically, are blocks (or groups?) different classifiers or datasets, and is y the ranks or the accuracies? You don't happen to have some example code and/or an explanation of how to plot CD diagrams?

    Thanks

    bug 
    opened by sbuschjaeger 4
  • Invalid Syntax

    Describe the bug: A clear and concise description of what the bug is.

    Dataset: Please provide a link to the dataset you get the bug with.

    To Reproduce: Steps to reproduce the behavior:

    1. Go to '...'
    2. Click on '....'
    3. Scroll down to '....'
    4. See error

    Expected behavior: A clear and concise description of what you expected to happen.

    System and package information (please complete the following information):

    • OS: (e.g. Linux 4.20.0-arch1-1-ARCH x86_64 GNU/Linux)
    • Package version: (e.g. 0.4.0)

    Additional context: Add any other context about the problem here.

    bug 
    opened by aliyurtsevenn 4
  • Error when running Friedman Conover test.

    Hi

    When I try to run a Conover posthoc on my pandas dataframe I get the following error: /opt/conda/lib/python3.6/site-packages/scikit_posthocs/_posthocs.py:705: RuntimeWarning: invalid value encountered in sqrt tval = dif / np.sqrt(A / B)

    I also suspect the Nemenyi test to be affected, as I am getting values of exactly 0.001 for every condition.

    Best,

    bug 
    opened by Leicas 4
  • Comparing to a control?

    Hi there,

    Thanks for the package! I am using friedman tests paired with the Nemenyi test, and this works very nicely for looking at all pairwise comparisons.

    I was wondering if there is a way to compare against a control method using, say, the Holm method? I can see there is a Holm option with both conover and siegel, but I do not believe these compare against a control (correct me if I'm wrong).

    Thank you

    question 
    opened by benjaminpatrickevans 4
  • No results for Likert-type scale items

    I'm really excited to use the new posthocs package, and have been trying to run the Dunn test (posthoc_dunn) on results from a survey over the last few days (I have about 1100 respondents). I have no problem when I run it on results that represent the difference between two feeling thermometers (a variable that ranges from -100 to 100). But every time I try to run it on a Likert-type scale item that takes values of 1 through 3 or 1 through 5, it returns a table full of null results (NaN in all cells except the diagonal). This comes along with a series of warnings as follows:

    1. _posthocs.py:191: RuntimeWarning: invalid value encountered in sqrt: z_value = diff / np.sqrt((A - x_ties) * B)
    2. multitest.py:176: RuntimeWarning: invalid value encountered in greater notreject = pvals > alphaf / np.arange(ntests, 0, -1)
    3. multitest.py:251: RuntimeWarning: invalid value encountered in greater pvals_corrected[pvals_corrected>1] = 1

    I am not a programming expert, but my impression is that what is happening here is that the compare_dunn function (lines 187-193 in posthoc.py) is not returning valid p-values, and I am guessing that this is because (A - x_ties) is negative for some reason and so the np.sqrt function isn't computing a value for the z_value.

    I played around with some groups of small arrays involving combinations of values ranging from 1 to 3 and 1 to 5, on the same scale as my data. Sometimes these had no problem returning valid results and other times they yielded the same NaNs that I get with my full dataset. I'm wondering if the issue has something to do with the total number or overall proportion of ties in the data. Obviously with Likert-type scale items there are a lot of ties. I'd love your thoughts on whether it's something that can be fixed to make analysis on this type of data possible. Thanks!!

    bug 
    opened by andreaeverett 4
  • fail to import scikit_posthocs

    Hi,

    I have been using scikit_posthocs (version 0.6.6) in Python 3.7.1 for a while. Today, when I tried to import scikit_posthocs, I got an error: ModuleNotFoundError: No module named 'statsmodels.stats.libqsturng'

    I checked with pip show statsmodels and version 0.12.0 was shown. I reinstalled with 'pip install statsmodels' but got the same error message. I reinstalled with 'pip install scikit-posthocs' but the problem remained. I am running out of ideas. What could be the reason?

    Thanks!

    opened by yiexu 3
  • Use posthoc tests to plot critical difference diagram

    It seems there are many post hoc tests implemented, and it would be really good if there were an easy interface to plot critical difference diagrams with any chosen pairwise tests.

    In the Orange framework, only a few types of tests are implemented, and the interface isn't very intuitive.

    enhancement 
    opened by bangxiangyong 1
  • Solving ValueError; 'All numbers are identical in mannwhitneyu'

    Hi,

    I often use your posthoc_mannwhitney and I get the ValueError 'All numbers are identical in mannwhitneyu' when two groups are composed of identical numbers. But I thought we should adjust p-values including the p-value (= 1.0) from those comparisons, so I modified the code in _posthocs.py like this.

    def _posthoc_mannwhitney(
            a: Union[list, np.ndarray, DataFrame],
            val_col: str = None,
            group_col: str = None,
            use_continuity: bool = True,
            alternative: str = 'two-sided',
            p_adjust: str = None,
            sort: bool = True) -> DataFrame:
        '''Pairwise comparisons with Mann-Whitney rank test.
    
        Parameters
        ----------
        a : array_like or pandas DataFrame object
            An array, any object exposing the array interface or a pandas
            DataFrame. Array must be two-dimensional.
    
        val_col : str, optional
            Name of a DataFrame column that contains dependent variable values (test
            or response variable). Values should have a non-nominal scale. Must be
            specified if `a` is a pandas DataFrame object.
    
        group_col : str, optional
            Name of a DataFrame column that contains independent variable values
            (grouping or predictor variable). Values should have a nominal scale
            (categorical). Must be specified if `a` is a pandas DataFrame object.
    
        use_continuity : bool, optional
            Whether a continuity correction (1/2.) should be taken into account.
            Default is True.
    
        alternative : ['two-sided', 'less', or 'greater'], optional
            Whether to get the p-value for the one-sided hypothesis
            ('less' or 'greater') or for the two-sided hypothesis ('two-sided').
            Defaults to 'two-sided'.
    
        p_adjust : str, optional
            Method for adjusting p values.
            See statsmodels.sandbox.stats.multicomp for details.
            Available methods are:
            'bonferroni' : one-step correction
            'sidak' : one-step correction
            'holm-sidak' : step-down method using Sidak adjustments
            'holm' : step-down method using Bonferroni adjustments
            'simes-hochberg' : step-up method  (independent)
            'hommel' : closed method based on Simes tests (non-negative)
            'fdr_bh' : Benjamini/Hochberg  (non-negative)
            'fdr_by' : Benjamini/Yekutieli (negative)
            'fdr_tsbh' : two stage fdr correction (non-negative)
            'fdr_tsbky' : two stage fdr correction (non-negative)
    
        sort : bool, optional
            Specifies whether to sort DataFrame by group_col or not. Recommended
            unless you sort your data manually.
    
        Returns
        -------
        result : pandas.DataFrame
            P values.
    
        Notes
        -----
        Refer to `scipy.stats.mannwhitneyu` reference page for further details.
    
        Examples
        --------
        >>> x = [[1,2,3,4,5], [35,31,75,40,21], [10,6,9,6,1]]
        >>> sp.posthoc_mannwhitney(x, p_adjust = 'holm')
        '''
        x, _val_col, _group_col = __convert_to_df(a, val_col, group_col)
        x = x.sort_values(by=[_group_col, _val_col], ascending=True) if sort else x
    
        groups = x[_group_col].unique()
        x_len = groups.size
        vs = np.zeros((x_len, x_len))
        xg = x.groupby(_group_col)[_val_col]
        tri_upper = np.triu_indices(vs.shape[0], 1)
        tri_lower = np.tril_indices(vs.shape[0], -1)
        vs[:, :] = 0
    
        combs = it.combinations(range(x_len), 2)
    
        for i, j in combs: ##I modified this section##
            try:
                vs[i, j] = ss.mannwhitneyu(
                    xg.get_group(groups[i]),
                    xg.get_group(groups[j]),
                    use_continuity=use_continuity,
                    alternative=alternative)[1]
            except ValueError as e:
                if str(e) == "All numbers are identical in mannwhitneyu":
                    vs[i, j] = 1.0
                else:
                    raise e
    
        if p_adjust:
            vs[tri_upper] = multipletests(vs[tri_upper], method=p_adjust)[1]
    
        vs[tri_lower] = np.transpose(vs)[tri_lower]
        np.fill_diagonal(vs, 1)
        return DataFrame(vs, index=groups, columns=groups)
    

    Is this the right solution?

    I'm not sure, but this error may not occur with other versions of scipy.stats.

    bug check needed 
    opened by fMizki 2
  • Question: Post-hoc dunn return non-significant

    Hi! Thanks a lot for creating this analysis tool.

    I would like to check whether it is normal that a post hoc analysis using the Dunn test, after Kruskal-Wallis, returns no significant results at all for the pairwise comparisons.

    Another question: does the Dunn test require multiple comparison correction? Either way (with or without correction), I don't get any significant results even though the Kruskal-Wallis test rejects the null hypothesis.

    question check needed 
    opened by darrencl 3
  • Results grouping after post-hoc test

    Hi, I was wondering if there is any chance to include a feature where post hoc results are grouped according to their relationship. I know that in R there are the packages multcompLetters and multcompView, which offer such a feature. I could find some people looking for a feature like this, but no feasible solution was found.

    Example: https://stackoverflow.com/questions/48841650/python-algorithm-on-letter-based-representation-of-all-pairwise-comparisons

    There are solution attempts in these topics, but I could not reproduce them: https://stackoverflow.com/questions/43987651/tukey-test-grouping-and-plotting-in-scipy https://stackoverflow.com/questions/49963138/label-groups-from-tuekys-test-results-according-to-significant

    It looks like there is a paper describing the algorithm for implementing this: Hans-Peter Piepho (2004) An Algorithm for a Letter-Based Representation of All-Pairwise Comparisons, Journal of Computational and Graphical Statistics, 13:2, 456-466, DOI: 10.1198/1061860043515

    By the way, thanks for this project, it is awesome!

    enhancement 
    opened by helenocampos 3
  • Structure for the tests

    Hi Maksim!

    Nice package, thanks for sharing! I am trying to use it and hoping to contribute.

    Before I use it, I'd like to understand how it is tested. For example, if you look at https://github.com/maximtrp/scikit-posthocs/blob/4709cf2821ce98dffef542b9b916f7c4a5f00ff4/tests/test_posthocs.py#L27

    or any other tests, you seem to have pre-selected the outputs/results to compare. Where do they come from? From the PMCMR R package? I tried to look into the PMCMR package to look at their tests - I can't see them in the distributed package. Do you know if it is tested?

    What are the other principles behind the tests you wrote?

    enhancement help wanted 
    opened by raamana 10