# Inspired by teaching the Intro to Stats: A short reading list on Statistical Inference (in an extended sense) and a bit more

Finding the “right books” on a given topic takes considerable time and can be quite painful. Obviously, this is a personal matter: what works for one person may not fit another. I wrote the short list below as a follow-up for my students who took L24 Math 1011: Introduction to Statistics, which I taught in the summer school of University College at WashU.

This course is introductory. Although it has just “Statistics” in its name, the principal idea, in the end, was to give my students a taste of different perspectives on Data Analysis in a broader sense. Hence, a better name for it would have been “Intro to Data Analysis”. The average level of the students’ Math skills was very high (for an Intro course). Hence, here and there I tried to teach them small bits of the “real stuff” instead of offering a heavily adapted version that would be of little use later (most of them were future economics, mathematics, and computer science majors). Even for people who do not intend to go into heavy Math, it is important to get some tools for examining actual studies in their disciplines, so that they do not easily “buy” any research design and always think critically first… and, importantly, are not scared of formulas and common statistical jargon.

In brief, the course starts with Statistics and Probability, then proceeds to Experimental Design/Causal Inference, and wraps up with some Machine Learning.

The list below is very personal to me, in the sense that I arrived at it through a “massive” trial-and-error process. I looked for something that I could use as a textbook to learn from and then be able to return to later for reference and advice. These books “worked” for me. Hopefully, they will work for somebody else as well. Enjoy!

**1. Statistics/Probability/Econometrics**

This volume provides a very clear and concise overview of all basic-level graduate Econometrics (the best I could find), together with the necessary background (Hypothesis Testing, Probability, Linear Algebra). I especially love the order of the material in the 4th edition, which is why I recommend it. It requires knowledge of Linear Algebra and may be relatively advanced in parts. However, the main “flow” of the book is logical and accessible enough.

**2. Experimental Design/Causal Inference**

The theoretical study of causal inference is not for everybody. Meanwhile, experiments and observational studies are something you will come across in almost any discipline. That is why having some basic literacy regarding the core concepts (the difference between experiments and observational studies, what internal and external validity are, what the major threats to validity can be) may be useful… almost regardless of your major.

Shadish et al. (2012) is a beautiful intellectual journey through all possible notions of research design, covering both experimental and quasi-experimental perspectives. Not the latest, but still the most fundamental and comprehensive to my taste, this book was a huge revelation for me six years ago during my master’s in PolSci at Central European University in Budapest. It hugely inspired and affected my master’s thesis on the pitfalls of natural experiments, a short version of which was summarized in this article.

…Meanwhile, Gerber & Green (2014) is what you are looking for if you are interested in the most recent textbook on experimental design, or *field experiment* design, to be more precise. More focused and “mathematical” than Shadish et al. (2012), it addresses the major problems of running experiments in “real conditions”. Another good point about this book is that it uses the almost canonical (though slightly modified) Neyman–Rubin causal model notation. Hence, it takes little time to learn the notation and start formulating your potential design problems in a clear and precise “language”.
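For readers who have not met that notation yet, its core can be sketched as follows (this is the standard textbook formulation, not the book’s exact presentation):

```latex
% Each unit $i$ has two potential outcomes: $Y_i(1)$ if treated, $Y_i(0)$ if not.
% The (unobservable) unit-level treatment effect is their difference:
\tau_i = Y_i(1) - Y_i(0)
% The average treatment effect (ATE) over the population:
\mathrm{ATE} = \mathbb{E}\!\left[ Y_i(1) - Y_i(0) \right]
% Under random assignment of the treatment indicator $D_i$, the ATE is
% identified by a simple difference in observed group means:
\mathrm{ATE} = \mathbb{E}\!\left[ Y_i \mid D_i = 1 \right] - \mathbb{E}\!\left[ Y_i \mid D_i = 0 \right]
```

The catch, often called the fundamental problem of causal inference, is that only one of $Y_i(1)$ and $Y_i(0)$ is ever observed for any given unit, and that is precisely what makes randomization so valuable.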

**3. Machine Learning**

Remark: this part was updated on 7/3/2019.

A few students in my class were looking forward to this particular part of the course, *Machine Learning*. This is understandable: this area is booming and *sexy* now, and there are still more jobs than people who can do them.

It is an important question whether you need to understand “the gears” behind the algorithms you apply (teaser: yes, you should)… The funny thing about *Machine Learning* is that the entry costs are now very low: you can easily run a simple script and predict something. And here comes the problem: the interpretation of your results. If you are not doing it out of mere curiosity, you soon understand that to interpret what you get, you need to understand the Math behind it… at least to some extent. Indeed, even if you know the Math, the results are not always easy to interpret substantively. Without a doubt, interpretation of the results is an Art. However, to do the Art, you need to know the “gears”.

Murphy (2012) is a book that aims to cover everything in ML. I love this about the volume, and I think that as a reference, and even as a textbook (for a prepared reader), the author does a great job. It is probably not the best pick for sitting down and learning all of ML from scratch: in some places, the author moves too quickly. However, once you have some preliminary training (for example, this course), this book is an excellent choice as a primary reference volume. It describes the Math and the main intuition behind the core models. Unfortunately, it has already “aged” a bit and does not cover some newer inventions, such as Decision Jungles, but overall it describes pretty much everything.

I love this small textbook by Abu-Mostafa et al. (2012) for exactly the opposite reasons to why I love Murphy (2012). The latter is huge in all senses: it covers everything briefly, sometimes giving the impression of being too short. Meanwhile, the former is a small and beautifully detailed textbook. It does not cover a lot of topics, but it covers them well. Be careful: it is Math-heavy, though it has its cute examples and bits of intuition that help deal with the scary Math. This book can be proposed as a starting textbook for a mathematically prepared student. It starts simply, with a description of the perceptron. However, you soon see that even the perceptron is not that obvious when you think about how you would code it.
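To see what I mean, here is a minimal sketch of the classic perceptron learning rule (my own illustration, not code from the book; it assumes labels in {-1, +1} and linearly separable data, otherwise the loop just stops after `max_epochs`):

```python
import numpy as np

def train_perceptron(X, y, max_epochs=100):
    """Classic perceptron learning algorithm; y must be in {-1, +1}.

    Returns a weight vector w, with the bias folded in as w[0].
    """
    # Prepend a constant 1 to each input so the bias is simply w[0].
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(max_epochs):
        misclassified = False
        for xi, yi in zip(Xb, y):
            if yi * np.dot(w, xi) <= 0:  # wrong (or undecided) sign
                w += yi * xi             # nudge w toward/away from xi
                misclassified = True
        if not misclassified:            # converged: every point correct
            break
    return w

# A linearly separable toy problem: the class is the sign of x1 + x2 - 1.
X = np.array([[0.0, 0.0], [0.0, 2.0], [2.0, 0.0], [2.0, 2.0]])
y = np.array([-1, 1, 1, 1])
w = train_perceptron(X, y)
preds = np.sign(np.hstack([np.ones((len(X), 1)), X]) @ w)
```

Even in this toy version, small decisions (how to handle the bias, what to do with a point exactly on the boundary, when to stop) already require you to understand what the algorithm is actually doing.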

UPD: I currently have a new favorite on ML:

This is a book that walks you through the whole history of Statistical Inference. It may provide a totally different perspective on numerous seemingly separate topics you have studied… and the final stop is so-called Machine Learning. The volume does require some background training in (Mathematical) Statistics and Econometrics, and I would not normally recommend it to people familiar only with the most recent part of the story, Machine Learning. However, for those brave enough (even without much background in Stats), it may be the start of a more comprehensive perception of modern inferential techniques, since it introduces them as part of a long story.

I was hugely inspired by this book when I was drafting my brief course on (Extended) ML as a part of the Zurich site of the Summer Institute in Computational Social Science. One of my next blog posts will be dedicated to this topic as well. Stay tuned :-).

Last but not least: a short book on the interpretability of ML.

The problem faced by most people who dig into ML is the lack of interpretability of the results you get from cutting-edge approaches such as powerful Deep Learning. They are too much of a “black box”, and if you want to go beyond the prediction task, you may feel stuck. Luckily, some people have already started to think about this, though there is still a lot of room for improvement. This book will be especially interesting for people working at the intersection of ML and Causal Inference.