# Cracking the Code: Unraveling Hypotheses with Powerful Statistical Tests

Decisions are often made based on the results of analyzing a specific case, problem, or challenge. To validate those findings, a statistical test is essential to ensure the reliability and validity of the conclusions drawn from the data.

Statistical tests are employed to test hypotheses and determine whether the observed data provide substantial evidence to support or reject a hypothesis.

The purpose of this article is to discuss commonly used statistical tests in the industry and demonstrate their implementation in Python.

# Statistical Tests and Hypothesis Testing

There are several statistical tests used in the industry to reveal information or insight and make inferences about a population, such as the t-test, z-test, ANOVA, and ANCOVA. Here is a brief explanation of each test:

- T-test: A t-test is used to determine whether there is a significant difference between the means of two groups. It is commonly used when the sample size is small (typically less than 30) and the population standard deviation is unknown.
- Z-test: A z-test is similar to a t-test but is used when the sample size is large (typically greater than 30) and the population standard deviation is known. It is based on the standard normal distribution and compares the difference between a sample mean and a known population mean.
- ANOVA (Analysis of Variance): ANOVA is a statistical test used to determine whether there are any statistically significant differences between the means of three or more groups. It analyzes the variance between groups and within groups to make this determination.
- ANCOVA (Analysis of Covariance): ANCOVA is an extension of ANOVA that incorporates one or more continuous variables, called covariates, into the analysis. It examines whether there are significant differences in the means of multiple groups while controlling for the influence of the covariates. ANCOVA combines elements of both ANOVA and regression analysis.
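The rules of thumb above can be sketched as a small decision helper. Note that `choose_test` is a hypothetical illustration written for this article, not a library function:

```python
def choose_test(n_groups: int, sample_size: int, sigma_known: bool, has_covariate: bool) -> str:
    """Hypothetical helper mapping the rules of thumb above to a test name."""
    if n_groups >= 3:
        # Three or more groups: ANOVA, or ANCOVA when a covariate must be controlled for
        return "ANCOVA" if has_covariate else "ANOVA"
    # Two groups: z-test for large samples with known population std dev, t-test otherwise
    if sample_size >= 30 and sigma_known:
        return "z-test"
    return "t-test"

print(choose_test(n_groups=2, sample_size=20, sigma_known=False, has_covariate=False))  # t-test
```

This is only a mnemonic for the definitions listed above; in practice the choice also depends on the assumptions (normality, equal variances) each test makes.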

# T-test and Z-test Overview

For both the t-test and the z-test, there are commonly two types of tests: **one-tailed** (or one-sided) and **two-tailed** (or two-sided). The difference lies in the directionality of the hypothesis.

A one-tailed hypothesis test is used to determine whether **one group differs from another in a particular direction**; it specifies a directional effect, either a greater-than or a less-than relationship. There are two types of one-tailed hypothesis tests: **left-tailed** and **right-tailed**.

For example, suppose a product manager builds a feature on the website that captures user leads through one of two channels (a form or WhatsApp).

## Left-tailed hypothesis

A test is left-tailed if one group is expected to have a lower effect than another. The left-tailed hypotheses could be:

- Null hypothesis (H0): WhatsApp brings at least as many leads as the existing method (form).
- Alternative hypothesis (Ha): WhatsApp brings fewer leads than the existing method (form).

## Right-tailed hypothesis

A test is right-tailed if one group is expected to have a greater effect than another. The right-tailed hypotheses could be:

- Null hypothesis (H0): WhatsApp brings at most as many leads as the existing method (form).
- Alternative hypothesis (Ha): WhatsApp brings more leads than the existing method (form).

## Two-tailed hypothesis

Two-tailed hypothesis tests are also known as non-directional: we test for a difference or relationship without specifying its direction. The two-tailed hypotheses could be:

- Null hypothesis (H0): The mean lead generation of the form and WhatsApp is equal.
- Alternative hypothesis (Ha): The mean lead generation using the form is significantly different from that using WhatsApp.

## Implementation of t-test and z-test in Python

The Python code for the t-test and the z-test is quite similar; the main difference is the number of samples used.

**For the t-test:**

```python
import scipy.stats as stats
import numpy as np

# Generate sample data for two groups
group1 = np.array([4, 5, 6, 7, 8, 6, 4, 5, 5, 7])
group2 = np.array([2, 3, 4, 5, 6, 7, 6, 6, 5, 4])

# One-tailed t-test
t_statistic_one_tailed, p_value_one_tailed = stats.ttest_ind(group1, group2, alternative='greater')
print("One-tailed t-test p-value:", p_value_one_tailed)

# Two-tailed t-test
t_statistic_two_tailed, p_value_two_tailed = stats.ttest_ind(group1, group2)
print("Two-tailed t-test p-value:", p_value_two_tailed)
```

The `alternative` argument accepts the following options (default is 'two-sided'):

- 'two-sided': the means of the distributions underlying the samples are unequal.
- 'less': the mean of the distribution underlying the first sample is less than the mean of the distribution underlying the second sample.
- 'greater': the mean of the distribution underlying the first sample is greater than the mean of the distribution underlying the second sample.

**For the z-test:**

```python
import numpy as np
from statsmodels.stats.weightstats import ztest

# Generate sample data for two groups (n >= 30)
group1 = np.random.choice(np.arange(1, 31, 1), size=30, replace=False)
group2 = np.random.choice(np.arange(1, 31, 1), size=30, replace=False)

# One-tailed z-test
z_statistic_one_tailed, p_value_one_tailed = ztest(group1, group2, alternative='larger')
print("One-tailed z-test p-value:", p_value_one_tailed)

# Two-tailed z-test
z_statistic_two_tailed, p_value_two_tailed = ztest(group1, group2)
print("Two-tailed z-test p-value:", p_value_two_tailed)
```

Note that `scipy.stats` does not provide a two-sample z-test; the `ztest` function comes from `statsmodels.stats.weightstats`.

The `alternative` argument accepts the following options (default is 'two-sided'):

- ‘two-sided’: the means of the distribution underlying the samples are unequal.
- ‘smaller’: the mean of the distribution underlying the first sample is less than the mean of the distribution underlying the second sample.
- ‘larger’: the mean of the distribution underlying the first sample is greater than the mean of the distribution underlying the second sample.
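Once a p-value is obtained from either test, the decision rule is the same: compare it against a chosen significance level (alpha, commonly 0.05). A minimal sketch reusing the t-test data from above:

```python
import numpy as np
import scipy.stats as stats

group1 = np.array([4, 5, 6, 7, 8, 6, 4, 5, 5, 7])
group2 = np.array([2, 3, 4, 5, 6, 7, 6, 6, 5, 4])

alpha = 0.05  # significance level
t_statistic, p_value = stats.ttest_ind(group1, group2)

# Reject H0 only when the p-value falls below alpha
if p_value < alpha:
    print("Reject H0: the group means differ significantly")
else:
    print("Fail to reject H0: no significant difference detected")
```

For this small sample the two-tailed p-value is well above 0.05, so we fail to reject the null hypothesis.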

# ANOVA Overview

By the definition above, ANOVA compares the means of three or more groups to determine whether there are any significant differences among them. It analyzes the variation between group means and the variation within groups to assess the statistical significance of group differences.

For example, suppose we want to determine whether there is a significant difference in revenue across different numbers of marketing communications sent to users. The hypotheses could be:

- Null Hypothesis (H0): There is no significant difference between the mean revenues across the numbers of marketing communications.
- Alternative Hypothesis (Ha): There is a significant difference in mean revenue across the numbers of marketing communications.

## Implementation of ANOVA in Python

```python
import statsmodels.api as sm
from statsmodels.formula.api import ols
import pandas as pd

# Create a sample dataset with multiple groups
data = {'NumberCommunication': [1, 1, 1, 2, 2, 2, 3, 3, 3],
        'Value': [10000, 12000, 14000, 9000, 11000, 13000, 8000, 10000, 12000]}
df = pd.DataFrame(data)

# Fit the ANOVA model; C() treats the numeric NumberCommunication column
# as a categorical grouping factor rather than a continuous predictor
model = ols('Value ~ C(NumberCommunication)', data=df).fit()
anova_table = sm.stats.anova_lm(model)

# Print the ANOVA table
print("ANOVA results:")
print(anova_table)
```

In this code, we create a sample dataset with three different numbers of communications and their corresponding values. We then use the `ols` function to specify the model, which includes the dependent variable `Value` and the independent variable `NumberCommunication`. Next, we fit the model using `fit()` to estimate the model parameters.

The `anova_lm` function performs the ANOVA analysis and generates an ANOVA table. It calculates the sum of squares, degrees of freedom, F-statistic, and p-value.

# ANCOVA Overview

By the definition above, ANCOVA assesses the difference in group means while controlling for the effects of one or more continuous covariates (variables that influence the outcome variable but are not the primary variables of interest).

For example, researchers want to evaluate the effect of different teaching methods (independent variable) on students' test scores (dependent variable), while controlling for the students' pre-test scores (covariate). The hypotheses could be:

- Null Hypothesis (H0): There is no significant difference between the means of test scores among the teaching methods after controlling for pre-test scores.
- Alternative Hypothesis (Ha): There is a significant difference in test scores among the teaching methods after controlling for pre-test scores.

## Implementation of ANCOVA in Python

```python
import pandas as pd
import scipy.stats as stats
import statsmodels.api as sm

# Test scores for each teaching method
scores_method_A = [85, 92, 88, 79, 95]
scores_method_B = [75, 82, 80, 81, 78]
scores_method_C = [90, 88, 85, 87, 92]

# Covariate variable (prior knowledge)
prior_knowledge = [70, 65, 75, 72, 68, 73, 68, 70, 71, 69, 74, 72, 66, 68, 70]

# Combine the data into a dataframe
df = pd.DataFrame({'Scores': scores_method_A + scores_method_B + scores_method_C,
                   'Teaching_Method': ['A'] * 5 + ['B'] * 5 + ['C'] * 5,
                   'Prior_Knowledge': prior_knowledge})

# Step 1: Formulate hypotheses
# H0: The mean scores of all teaching methods are equal after controlling for the covariate.
# Ha: At least one teaching method has a different mean score after controlling for the covariate.

# Step 2: Set significance level (alpha)
alpha = 0.05

# Step 3: Fit the ANCOVA model
model = sm.formula.ols('Scores ~ Teaching_Method + Prior_Knowledge', data=df).fit()
ancova_table = sm.stats.anova_lm(model)

# Step 4: Determine the critical value and make a decision
# Alternatively, you can use the p-value instead of comparing with the critical value.
# dfn = number of groups - 1; dfd = total sample size - number of groups - number of covariates
critical_value = stats.f.ppf(1 - alpha, dfn=2, dfd=11)
f_value = ancova_table.loc['Teaching_Method', 'F']
p_value = ancova_table.loc['Teaching_Method', 'PR(>F)']

print(ancova_table)
```

In the code above, we create a dataset that contains teaching methods, scores, and a pre-test (prior knowledge), then use the `ols` function to specify the ANCOVA model formula. The `anova_lm` function generates an ANCOVA table containing information about the sources of variation, degrees of freedom, sums of squares, mean squares, F-statistic, and p-value associated with the teaching methods and the covariate.

# Conclusion

Statistical tests play a crucial role in data analysis and research. They provide a framework for making evidence-based decisions, drawing valid conclusions, and generalizing findings to populations. Statistical tests enable hypothesis testing, allowing analysts or researchers to evaluate the strength of evidence for or against a specific hypothesis.

Overall, statistical tests contribute to scientific rigor, validity, and reliable research findings across diverse fields of study. Have you applied statistical tests in your analysis?

Feel free to connect with me on LinkedIn. I would be delighted to share my experiences in the tech industry, data science, marketing, and product.