# Anscombe’s quartet in R

Anscombe’s quartet comprises four data sets that have nearly identical simple descriptive statistics, yet have very different distributions and appear very different when graphed. Each dataset consists of eleven (x,y) points. They were constructed in 1973 by the statistician Francis Anscombe to demonstrate both the importance of graphing data before analyzing it and the effect of outliers and other influential observations on statistical properties. He described the article as being intended to counter the impression among statisticians that “numerical calculations are exact, but graphs are rough.”

The objectives of this problem set are to orient you to a number of activities in R, and to conduct a thoughtful exercise in appreciating the importance of data visualization. For each question, create a code chunk or text response that completes the activity or answers the question.

Anscombe’s quartet is a set of four x,y datasets published by Francis Anscombe in his 1973 paper *Graphs in Statistical Analysis*. For this first question, load the anscombe data that is part of `library(datasets)` in R, and assign that data to a new object called `data`.

```r
# install.packages("fBasics")
library(fBasics)
library(ggplot2)
library(grid)
library(gridExtra)
library(datasets)

data <- anscombe
```

```r
library(xtable)
xtable(head(anscombe, 10))
```

```r
summary(anscombe)
```

```
       x1             x2             x3             x4           y1        
 Min.   : 4.0   Min.   : 4.0   Min.   : 4.0   Min.   : 8   Min.   : 4.260  
 1st Qu.: 6.5   1st Qu.: 6.5   1st Qu.: 6.5   1st Qu.: 8   1st Qu.: 6.315  
 Median : 9.0   Median : 9.0   Median : 9.0   Median : 8   Median : 7.580  
 Mean   : 9.0   Mean   : 9.0   Mean   : 9.0   Mean   : 9   Mean   : 7.501  
 3rd Qu.:11.5   3rd Qu.:11.5   3rd Qu.:11.5   3rd Qu.: 8   3rd Qu.: 8.570  
 Max.   :14.0   Max.   :14.0   Max.   :14.0   Max.   :19   Max.   :10.840  
       y2              y3              y4        
 Min.   :3.100   Min.   : 5.39   Min.   : 5.250  
 1st Qu.:6.695   1st Qu.: 6.25   1st Qu.: 6.170  
 Median :8.140   Median : 7.11   Median : 7.040  
 Mean   :7.501   Mean   : 7.50   Mean   : 7.501  
 3rd Qu.:8.950   3rd Qu.: 7.98   3rd Qu.: 8.190  
 Max.   :9.260   Max.   :12.74   Max.   :12.500  
```

```r
str(anscombe)
```

```
'data.frame':	11 obs. of  8 variables:
 $ x1: num  10 8 13 9 11 14 6 4 12 7 ...
 $ x2: num  10 8 13 9 11 14 6 4 12 7 ...
 $ x3: num  10 8 13 9 11 14 6 4 12 7 ...
 $ x4: num  8 8 8 8 8 8 8 19 8 8 ...
 $ y1: num  8.04 6.95 7.58 8.81 8.33 ...
 $ y2: num  9.14 8.14 8.74 8.77 9.26 8.1 6.13 3.1 9.13 7.26 ...
 $ y3: num  7.46 6.77 12.74 7.11 7.81 ...
 $ y4: num  6.58 5.76 7.71 8.84 8.47 7.04 5.25 12.5 5.56 7.91 ...
```

```r
funModeling::df_status(anscombe)
```

```
  variable q_zeros p_zeros q_na p_na q_inf p_inf    type unique
1       x1       0       0    0    0     0     0 numeric     11
2       x2       0       0    0    0     0     0 numeric     11
3       x3       0       0    0    0     0     0 numeric     11
4       x4       0       0    0    0     0     0 numeric      2
5       y1       0       0    0    0     0     0 numeric     11
6       y2       0       0    0    0     0     0 numeric     11
7       y3       0       0    0    0     0     0 numeric     11
8       y4       0       0    0    0     0     0 numeric     11
```

Summarise the data by calculating the mean and variance of each column, and the correlation between each pair (e.g. x1 and y1, x2 and y2, etc.). (Hint: use the fBasics package!)

```r
fBasics::basicStats(anscombe)
```
```r
# Means
sapply(1:8, function(x) mean(anscombe[, x]))
```

```
[1] 9.000000 9.000000 9.000000 9.000000 7.500909 7.500909 7.500000 7.500909
```
```r
# Variances
sapply(1:8, function(x) var(anscombe[, x]))
```

```
[1] 11.000000 11.000000 11.000000 11.000000  4.127269  4.127629  4.122620  4.123249
```
```r
# Correlations
sapply(1:4, function(x) cor(anscombe[, x], anscombe[, x + 4]))
```

```
[1] 0.8164205 0.8162365 0.8162867 0.8165214
```

Create scatter plots for each x,y pair of data.

```r
p1 <- ggplot(anscombe) +
  geom_point(aes(x1, y1), color = "darkorange", size = 1.5) +
  scale_x_continuous(breaks = seq(0, 20, 2)) +
  scale_y_continuous(breaks = seq(0, 12, 2)) +
  expand_limits(x = 0, y = 0) +
  labs(x = "x1", y = "y1", title = "Dataset 1") +
  theme_bw()
p1
```

```r
p2 <- ggplot(anscombe) +
  geom_point(aes(x2, y2), color = "darkorange", size = 1.5) +
  scale_x_continuous(breaks = seq(0, 20, 2)) +
  scale_y_continuous(breaks = seq(0, 12, 2)) +
  expand_limits(x = 0, y = 0) +
  labs(x = "x2", y = "y2", title = "Dataset 2") +
  theme_bw()
p2
```

```r
p3 <- ggplot(anscombe) +
  geom_point(aes(x3, y3), color = "darkorange", size = 1.5) +
  scale_x_continuous(breaks = seq(0, 20, 2)) +
  scale_y_continuous(breaks = seq(0, 12, 2)) +
  expand_limits(x = 0, y = 0) +
  labs(x = "x3", y = "y3", title = "Dataset 3") +
  theme_bw()
p3
```

```r
p4 <- ggplot(anscombe) +
  geom_point(aes(x4, y4), color = "darkorange", size = 1.5) +
  scale_x_continuous(breaks = seq(0, 20, 2)) +
  scale_y_continuous(breaks = seq(0, 12, 2)) +
  expand_limits(x = 0, y = 0) +
  labs(x = "x4", y = "y4", title = "Dataset 4") +
  theme_bw()
p4
```

Now change the symbols on the scatter plots to solid circles and plot them together as a four-panel graphic.

```r
grid.arrange(grobs = list(p1, p2, p3, p4),
             ncol = 2,
             top = "Anscombe's Quartet")
```
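Note that the chunk above only arranges the panels; `geom_point()` in ggplot2 already draws a solid circle by default, and `shape = 16` makes the filled-circle choice explicit. As a minimal base-R sketch of the same four-panel figure (the axis limits and panel titles here are my own choices, not from the original), `pch = 16` is R's solid-circle plotting symbol and `par(mfrow = ...)` sets the 2×2 layout:

```r
# Base-R sketch of the four-panel figure with solid circles.
# pch = 16 is the filled-circle plotting symbol.
op <- par(mfrow = c(2, 2))  # 2 x 2 panel layout
for (i in 1:4) {
  x <- anscombe[[paste0("x", i)]]
  y <- anscombe[[paste0("y", i)]]
  plot(x, y, pch = 16, col = "darkorange",
       xlim = c(0, 20), ylim = c(0, 13),
       xlab = paste0("x", i), ylab = paste0("y", i),
       main = paste("Dataset", i))
}
par(op)  # restore the previous layout
```

The ggplot2 equivalent is simply `geom_point(..., shape = 16)` in each of the plots above.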

Now fit a linear model to each data set using the lm() function.

```r
lm1 <- lm(y1 ~ x1, data = anscombe)
lm1
```

```
Call:
lm(formula = y1 ~ x1, data = anscombe)

Coefficients:
(Intercept)           x1  
     3.0001       0.5001  
```

```r
lm2 <- lm(y2 ~ x2, data = anscombe)
lm2
```

```
Call:
lm(formula = y2 ~ x2, data = anscombe)

Coefficients:
(Intercept)           x2  
      3.001        0.500  
```

```r
lm3 <- lm(y3 ~ x3, data = anscombe)
lm3
```

```
Call:
lm(formula = y3 ~ x3, data = anscombe)

Coefficients:
(Intercept)           x3  
     3.0025       0.4997  
```

```r
lm4 <- lm(y4 ~ x4, data = anscombe)
lm4
```

```
Call:
lm(formula = y4 ~ x4, data = anscombe)

Coefficients:
(Intercept)           x4  
     3.0017       0.4999  
```

Now combine the last two tasks. Create a four panel scatter plot matrix that has both the data points and the regression lines. (hint: the model objects will carry over chunks!)

```r
# Pull the intercept and slope from each fitted model with coef()
# rather than retyping them by hand (hand-copied values are easy to mistype).
p1_fitted <- p1 + geom_abline(intercept = coef(lm1)[1], slope = coef(lm1)[2], color = "blue")
p2_fitted <- p2 + geom_abline(intercept = coef(lm2)[1], slope = coef(lm2)[2], color = "blue")
p3_fitted <- p3 + geom_abline(intercept = coef(lm3)[1], slope = coef(lm3)[2], color = "blue")
p4_fitted <- p4 + geom_abline(intercept = coef(lm4)[1], slope = coef(lm4)[2], color = "blue")

grid.arrange(grobs = list(p1_fitted, p2_fitted, p3_fitted, p4_fitted),
             ncol = 2,
             top = "Anscombe's Quartet")
```

Based on the figure for Dataset 1, the linear regression model fits the data quite closely. For Dataset 2, however, the relationship appears curvilinear, possibly quadratic, so a straight-line fit is inappropriate. Dataset 3 is nearly perfectly linear except for a single outlier, which pulls the fitted line away from the other ten points. In Dataset 4, all but one of the points share the same x value, and the lone outlier single-handedly determines the slope; one should first check the validity of that data point. If the data are accurate, the linear fit can be reported, but with the caveat that a single observation has played a critical role in determining the fit.
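The contrast described above can be made concrete: refitting the four models in a loop (a small sketch using only base R's `lm()` and `summary()`; the variable names `models` and `fit_stats` are mine) shows that the numerical fit summaries are almost indistinguishable even though the plots are not.

```r
# Fit y_i ~ x_i for i = 1..4 and compare goodness-of-fit numbers.
models <- lapply(1:4, function(i) {
  lm(as.formula(paste0("y", i, " ~ x", i)), data = anscombe)
})

# R-squared and residual standard error for each model
fit_stats <- sapply(models, function(m) {
  s <- summary(m)
  c(r.squared = s$r.squared, sigma = s$sigma)
})
colnames(fit_stats) <- paste("Dataset", 1:4)
round(fit_stats, 3)
```

All four R² values come out around 0.67 with nearly identical residual standard errors, so nothing in these summaries hints at the curvature of Dataset 2 or the outliers in Datasets 3 and 4.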

In text, summarize the lesson of Anscombe’s Quartet and what it says about the value of data visualization. Anscombe’s quartet provides a vivid illustration of the idea that the visual dimension can reveal a story that simple numerical analysis appears to deny.

I hope you like it.

No matter what books, blogs, courses, or videos one learns from, when it comes to implementation everything can look like it is “outside the curriculum.”

The best way to learn is by doing! The best way to learn is by teaching what you have learned!

See you on LinkedIn!

Master in Data Science. Passionate about learning new skills. Former branch risk analyst. https://www.linkedin.com/in/oscar-rojo-martin/. www.oscarrojo.es
