The Non-parametric Friedman Test
The Friedman test is a non-parametric test for differences between groups when the dependent variable is at least ordinal (it may be continuous). It is the non-parametric alternative to the one-way ANOVA with repeated measures (or the complete block design, and a special case of the Durbin test). When the data depart significantly from a normal distribution, the Friedman test becomes preferable to an ANOVA.
The test procedure ranks the values within each row (block), then considers the rank values by column. The data are organized into a matrix with B rows (blocks) and T columns (treatments), with a single observation in each cell of the matrix.
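The within-block ranking step can be sketched with a small hypothetical data set (the values below are made up for illustration):

```python
import numpy as np
from scipy.stats import rankdata

# Hypothetical data: B = 4 blocks (rows) by T = 3 treatments (columns),
# a single observation in each cell.
data = np.array([
    [7.0, 9.0, 8.0],
    [6.0, 5.0, 4.0],
    [9.0, 7.0, 6.0],
    [5.0, 8.0, 7.0],
])

# Rank within each block (row); ties would receive average ranks.
ranks = np.apply_along_axis(rankdata, 1, data)
print(ranks)
```

Each row of the result contains the ranks 1 through T, so the analysis that follows compares treatments only through their positions within each block.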
As with nearly any statistical test, there are assumptions to consider. Here are four elements to keep in mind:
- There is one group of test subjects that are measured on three or more different occasions.
- The group is a random sample from the population.
- The dependent variable is at least ordinal or continuous (Likert scales, time, intelligence, percentage correct, etc.).
- The samples need not be normally distributed.
Setting up the Hypotheses
The null hypothesis is that the median treatment effects of the population are all the same. In short, the treatments have no effect.
The alternative hypothesis is that the effects are not all the same, indicating there is a discernible difference in treatment effects.
The data we’re dealing with reflect the situation where we want to compare T treatments across B subjects (blocks). The subjects are assigned randomly to the various groups, and the comparison is within each block, not between blocks.
The Test Statistic
The comparison is of the ranked results of the ordinal or continuous data, assigning rank values 1, 2, …, T within each of the B rows (blocks).
Since the null hypothesis is that the treatments have no effect, the sums of the ranks for each column (treatment) should all be roughly equal.
The total sum of ranks is BT(T+1)/2, thus each treatment’s sum of ranks, if equal, should be relatively close to B(T+1)/2. Therefore the test statistic is a function of the sum of squares of deviations between treatment rank sums (R1, R2, …, RT) and the expected B(T+1)/2 value.
The test statistic, S, is

S = [12 / (B T (T + 1))] × Σ ( R_j − B(T + 1)/2 )²

where the sum runs over the T treatment rank sums R_1, R_2, …, R_T.
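The statistic can be computed directly from the rank sums. A minimal sketch, reusing the hypothetical 4-block, 3-treatment data from above:

```python
import numpy as np
from scipy.stats import rankdata

# Same hypothetical data: B = 4 blocks (rows), T = 3 treatments (columns).
data = np.array([
    [7.0, 9.0, 8.0],
    [6.0, 5.0, 4.0],
    [9.0, 7.0, 6.0],
    [5.0, 8.0, 7.0],
])
B, T = data.shape

ranks = np.apply_along_axis(rankdata, 1, data)  # rank within each block
R = ranks.sum(axis=0)                           # rank sum per treatment

# S = 12 / (B*T*(T+1)) * sum over treatments of (R_j - B*(T+1)/2)^2
S = 12.0 / (B * T * (T + 1)) * np.sum((R - B * (T + 1) / 2.0) ** 2)
print(R, S)
```

Note the rank sums here total B·T·(T+1)/2 = 24, and each is compared against the expected value B(T+1)/2 = 8 under the null.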
The Critical Value
Now we need to compare the test statistic to the critical value to determine whether the deviations are large enough to conclude that the treatments are not all equal. Here software comes in handy, like Minitab, R, or some other package that has the tables built in.
Here is an excerpted table for three or four treatments. If your experiment has more treatments or a larger sample size, you can approximate the critical value using a chi-squared distribution (more on that another time).
If the test statistic value, S, is larger than the critical value found in the table, then we reject the null hypothesis and conclude there is convincing evidence that the treatments are different.
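For larger samples, where the chi-squared approximation applies, SciPy's `friedmanchisquare` carries out the whole test, taking each treatment's measurements as a separate argument. A sketch with the same hypothetical data (note that B = 4 blocks is really too small for the approximation to be reliable; it is used here only to illustrate the call):

```python
from scipy.stats import friedmanchisquare

# Same hypothetical data, passed column-wise: each list is one
# treatment's measurements across the B = 4 blocks (subjects).
t1 = [7.0, 6.0, 9.0, 5.0]
t2 = [9.0, 5.0, 7.0, 8.0]
t3 = [8.0, 4.0, 6.0, 7.0]

stat, p = friedmanchisquare(t1, t2, t3)
if p < 0.05:
    print(f"S = {stat:.2f}, p = {p:.3f}: reject the null")
else:
    print(f"S = {stat:.2f}, p = {p:.3f}: fail to reject the null")
```

With no ties, the statistic returned matches the S computed from the rank-sum formula above.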
Originally published at Accendo Reliability.