Why sample size and effect size increase the power of a statistical test

Yeonjoo Yoo
Published in The Startup · 4 min read · Oct 11, 2019


Power analysis is important in experimental design. It determines the sample size required to detect an effect size, a measure of the change or difference being tested, with a given degree of confidence. In other words, the power (1 − the type II error) of a statistical test depends on the sample size, the type I error, and the effect size. In my previous article, I explained how type I and type II errors are related: as the type I error (α) increases, the corresponding type II error (β) decreases, and thus the power increases. In this article, I will explain how the sample size and the effect size are related to the power of a statistical test.
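
As a rough illustration of that relationship (not part of the original article), here is a minimal Python sketch that computes the power of a two-sided one-sample z-test from the sample size, α, and a standardized effect size; the function name and the example numbers are my own.

```python
from scipy.stats import norm

def ztest_power(n, alpha, effect_size):
    """Approximate power of a two-sided one-sample z-test.

    effect_size is the standardized difference (mu_a - mu_0) / sigma.
    """
    z_crit = norm.ppf(1 - alpha / 2)   # critical value for a two-sided test
    shift = effect_size * n ** 0.5     # distance of the alternative mean, in standard-error units
    # Probability that the test statistic lands in either rejection region
    return norm.cdf(-z_crit + shift) + norm.cdf(-z_crit - shift)

# Larger n, larger alpha, or a larger effect size all increase the power
print(ztest_power(n=30, alpha=0.05, effect_size=0.4))   # ~0.59
print(ztest_power(n=60, alpha=0.05, effect_size=0.4))   # ~0.87
print(ztest_power(n=30, alpha=0.10, effect_size=0.4))   # ~0.71
```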

Sample size and power of a statistical test

Let’s consider the simplest example, a one-sample z-test.

Example: we have a sample of people’s weights whose mean and standard deviation are 168 lbs and 7.2 lbs. We want to test whether the mean of the population from which this sample is taken is 165 lbs.

H₀: μ = 165

Hₐ : μ ≠ 165

We test this hypothesis by calculating the test statistic and its p-value, then comparing the p-value with α.

z = (168 − 165) ÷ (7.2/√n)

= (168 − 165) × √n / 7.2

where n is the sample size.
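
As a quick numeric check, here is a short sketch assuming a hypothetical sample size of n = 36 (the article does not state n); the two-sided p-value comes from the standard normal distribution.

```python
from math import sqrt
from scipy.stats import norm

n = 36                              # assumed sample size; not given in the article
x_bar, mu_0, sigma = 168, 165, 7.2  # sample mean, hypothesized mean, standard deviation (lbs)

z = (x_bar - mu_0) / (sigma / sqrt(n))    # test statistic
p_value = 2 * (1 - norm.cdf(abs(z)))      # two-sided p-value

print(z, p_value)   # z = 2.5, p ≈ 0.0124
```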

As the sample size gets larger, the z value increases; therefore we will be more likely to reject the null hypothesis and less likely to…
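
To see this numerically, the loop below reuses the same numbers and sweeps over a few hypothetical sample sizes; as n grows, the z statistic grows and the two-sided p-value shrinks, so the test rejects H₀ more easily.

```python
from math import sqrt
from scipy.stats import norm

x_bar, mu_0, sigma = 168, 165, 7.2   # sample mean, hypothesized mean, standard deviation (lbs)

for n in (9, 16, 25, 36, 49):        # hypothetical sample sizes
    z = (x_bar - mu_0) / (sigma / sqrt(n))
    p = 2 * (1 - norm.cdf(abs(z)))
    print(f"n={n:2d}  z={z:.2f}  p={p:.4f}")
# n grows -> z grows -> p shrinks -> H0 is rejected more often (higher power)
```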
