The Importance of Due Diligence

Lisa Lacouette
Published in Human Systems Data
Apr 5, 2017

The class blog assignment for this week is to explore analysis of variance (ANOVA). I started my research by stepping through the tutorial that was assigned, and then did what I have done many times this semester: I opened my SPSS textbook from last semester (Field, 2013). This textbook has some great (detailed!) information about the most commonly used statistical methods. I reread the chapter on ANOVA and was surprised by the insight I gained after all that I have learned this semester. Field (2013) described the similarities (and differences) between ANOVA and multiple regression, and he discussed the historical reason why the two are thought of as separate tests. Two distinct branches of social science research, correlational and experimental, each developed their own tests: ANOVA was used more widely by researchers interested in controlled experiments, while multiple regression was favored by those looking for relationships in real-world data. This information gave me a different perspective on, and a bit more understanding of, the two tests.

ANOVA, like multiple regression, involves fitting a predictive model to the data. The F statistic tests how well that model fits the observed data: it compares the variance the model explains to the variance it leaves unexplained. ANOVA is typically used to compare more than two means, but it can compare two means just as effectively as an independent t-test.
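
To see this for myself, here is a quick sketch in R with made-up two-group data (not the States data): a pooled-variance independent t-test and a one-way ANOVA on the same two groups give the same answer, because the F ratio is just the t statistic squared.

```
# Made-up two-group data; any two-group comparison behaves the same way.
set.seed(123)
scores <- data.frame(
  group = factor(rep(c("A", "B"), each = 20)),
  value = c(rnorm(20, mean = 50, sd = 10), rnorm(20, mean = 58, sd = 10))
)

# Pooled-variance independent t-test and one-way ANOVA on the same data.
t_fit   <- t.test(value ~ group, data = scores, var.equal = TRUE)
aov_fit <- summary(aov(value ~ group, data = scores))

t_fit$statistic^2             # t squared ...
aov_fit[[1]][["F value"]][1]  # ... equals the ANOVA F ratio
```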

I ran an ANOVA on the States data in the car package. This data set contains academic and financial information about all 50 states and Washington, D.C. I completed a one-way ANOVA and determined that region of the country was related to math SAT scores. That may or may not surprise some people. I confirmed my values in SPSS. The two-way factorial design (image 1) was next. This ANOVA revealed that region and teacher salary were both significantly related to math SAT scores (again, likely no surprise).

(image 1: two-way factorial design)
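
The models behind these results look something like the following. This is a sketch rather than my exact session: I'm using aov() as in the Quick-R tutorial and assuming SATM (average math SAT score) as the outcome, with region for the one-way model and region plus pay (the teacher-salary variable) for the model in image 1.

```
library(car)   # the States data set ships with the car package
data(States)

# One-way ANOVA: do mean math SAT scores differ by region?
fit_region <- aov(SATM ~ region, data = States)
summary(fit_region)

# The model behind image 1: region and teacher salary (pay) together.
# pay is a continuous variable in States, so this is really an ANOVA
# with a covariate rather than a textbook two-way factorial design.
fit_both <- aov(SATM ~ region + pay, data = States)
summary(fit_both)
```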

Next, I decided to tackle the assumptions, because I have always been unclear on them.

The homogeneity of variance assumption requires the variance of the outcome to be roughly the same in every group (ANOVA is reasonably robust to violations when the group sizes are equal, but not otherwise), and ANOVA statistics are not reliable if the assumptions are not met. A Levene’s test can be run to determine whether the group variances differ significantly. The other important assumption is normality. This is satisfied when the distribution is neither too tall/narrow nor too shallow/wide (problems with kurtosis) and is not positively or negatively skewed. Either problem can arise if the population being sampled is not normally distributed or if outliers are distorting the distribution. The Bartlett test, Fligner-Killeen test, and Levene’s test were run to check homogeneity on the States data (image 2). Levene’s and Fligner-Killeen both returned p-values greater than 0.05, so neither gives evidence that the assumption has been violated. The Bartlett test, however, returned a p-value of 0.002, which does suggest a violation. The Engineering Statistics Handbook’s page on Bartlett’s test (n.d.) gives a possible explanation for the disagreement between the Bartlett and Levene’s results: “Bartlett’s test is sensitive to departures from normality. That is, if your samples come from non-normal distributions, then Bartlett’s test may simply be testing for non-normality.”

(image 2: homogeneity of variance tests)
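
The calls for those three tests look roughly like this (again a sketch, assuming the same SATM-by-region grouping as the one-way model):

```
library(car)   # leveneTest() comes from car; the other two are in base R

# Bartlett's test: powerful, but sensitive to departures from normality.
bartlett.test(SATM ~ region, data = States)

# Fligner-Killeen test: rank-based, more robust to non-normal data.
fligner.test(SATM ~ region, data = States)

# Levene's test (car's version centers on the median by default).
leveneTest(SATM ~ region, data = States)

# For each test, a p-value below 0.05 is evidence that the group variances
# differ, i.e. that the homogeneity of variance assumption is violated.
```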

In light of the conflicting homogeneity test results shown in image 2, I can't be confident that the assumptions have been met, so the ANOVA statistics should not be taken at face value even though they make sense. I guess that would be my take-away from this exercise: even if the statistics come back as significant and the results make logical sense, it is still important to verify that the assumptions are met.

References

Engineering Statistics Handbook: Bartlett’s test. (n.d.). Retrieved from http://www.itl.nist.gov/div898/handbook/eda/section3/eda357.htm

Engineering Statistics Handbook: Levene’s test. (n.d.). Retrieved from http://www.itl.nist.gov/div898/handbook/eda/section3/eda35a.htm

Field, A. (2013). Discovering statistics using IBM SPSS statistics. Sage.

Kabacoff, R. I. (2017). Quick-R: ANOVA. Retrieved from http://www.statmethods.net/stats/anova.html

Kabacoff, R. I. (2017). Quick-R: Assessing classical test assumptions. Retrieved from http://www.statmethods.net/stats/anovaAssumptions.html
