Predicting Sepsis Occurrence with Machine Learning Classification Models

Alidu Abubakari
Published in AI Science · Jun 17, 2023 · 15 min read

Check the EDA part here: https://medium.com/@alidu143/uncovering-sepsis-occurrence-secrets-through-exploratory-data-analysis-8cffeee7e760

Introduction

Sepsis is a life-threatening condition that arises when the body’s response to an infection damages its own tissues and organs. It is a major healthcare challenge worldwide, with high mortality rates if not detected and treated promptly. Early detection of sepsis is crucial for improving patient outcomes and reducing the burden on healthcare systems.

Predicting sepsis occurrence using machine learning classification models offers a promising approach to identify individuals at risk and enable targeted interventions. By leveraging patient data and advanced algorithms, these models can help healthcare professionals identify patterns and indicators that may precede the onset of sepsis. Early detection allows for timely interventions, such as initiating appropriate antibiotic therapy, fluid resuscitation, and monitoring vital signs, which can significantly improve patient outcomes and potentially save lives.

Traditionally, sepsis diagnosis has relied on clinical judgment and standardized criteria based on vital signs, laboratory values, and clinical indicators. However, these criteria may not capture subtle changes in patient condition or account for individual variability. Machine learning models have the potential to overcome these limitations by incorporating a wide range of variables and learning complex patterns from large datasets.

Moreover, predicting sepsis occurrence can help optimize healthcare resource allocation. By identifying patients at higher risk, healthcare providers can allocate resources more efficiently, such as prioritizing intensive care unit beds, initiating early consultations with specialists, and ensuring adequate staffing levels.

In this article, we will explore the complete process of predicting sepsis occurrence using machine learning classification models. From data preprocessing to model evaluation and selection, we will delve into each step to demonstrate the potential of these models in enabling early detection and targeted interventions for sepsis. By shedding light on this critical topic, we aim to contribute to the ongoing efforts to improve sepsis management and ultimately save lives.

Data Preprocessing

Data Imbalance Check

Data imbalance is a common challenge in machine learning, where the classes or categories in a dataset are not represented equally. This can lead to models that are biased towards the majority class and struggle to accurately predict the minority class.

In the context of sepsis prediction, data imbalance means that sepsis-positive cases occur far less often than negative cases, as the exploratory analysis in the previous article showed. This is because sepsis is a relatively rare condition compared to the overall population.

To address the data imbalance, we will be using the following techniques:

  • Resampling: This involves oversampling the minority class or undersampling the majority class to balance the dataset.
  • Using evaluation metrics that account for imbalanced data: These metrics, such as F1 score or area under the precision-recall curve, are more sensitive to the minority class than accuracy.

By assessing the class distribution within the dataset and using appropriate techniques to address imbalance, we can improve the performance of machine learning models on imbalanced data.
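Neither resampling step is shown in the original notebook, but a minimal sketch of oversampling with scikit-learn's resample utility might look like the following, assuming the target column is named 'Sepssis' with the raw labels 'Positive' and 'Negative':

import pandas as pd
from sklearn.utils import resample

# Inspect the class distribution first
print(train['Sepssis'].value_counts(normalize=True))

# Oversample the minority (Positive) class to match the majority class size
majority = train[train['Sepssis'] == 'Negative']
minority = train[train['Sepssis'] == 'Positive']
minority_upsampled = resample(
    minority,
    replace=True,              # sample with replacement
    n_samples=len(majority),   # match the majority class count
    random_state=42,
)
train_balanced = pd.concat([majority, minority_upsampled])

Note that any resampling should be applied to the training split only, after the train/validation split, so that duplicated minority rows never leak into the validation set.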

Drop Duplicates:

Removing duplicate records from the dataset is important to ensure the integrity and quality of the data. Duplicate records can introduce bias and inaccuracies in the analysis, leading to misleading results.

To identify and drop duplicates, we can use the duplicated() method in pandas. This method returns a boolean series indicating whether each row is a duplicate or not. We can then use this series to filter the dataset and drop the duplicate records using the drop_duplicates() method. By doing so, we ensure that each observation in the dataset is unique, eliminating any redundancy and improving the reliability of our analysis.

def check_duplicate_rows(data):
    duplicate_rows = data.duplicated()
    num_duplicates = duplicate_rows.sum()
    print("Number of duplicate rows:", num_duplicates)

# Check duplicate rows in train data
check_duplicate_rows(train)

# Check duplicate rows in test data
check_duplicate_rows(test)
Number of duplicate rows: 0
Number of duplicate rows: 0

Check Missing Values:

Handling missing values in the dataset is also crucial as they can affect the accuracy and reliability of the analysis. Missing values can lead to biased results and hinder the performance of machine learning models.

To identify and assess missing values in the sepsis dataset, we use the isna().sum() method in pandas.

def check_missing_values(data):
    missing_values = data.isna().sum()
    print("Missing values:\n", missing_values)

# Check missing values in train data
check_missing_values(train)

# Check missing values in test data
check_missing_values(test)
Missing values:
ID 0
PRG 0
PL 0
PR 0
SK 0
TS 0
M11 0
BD2 0
Age 0
Insurance 0
Sepssis 0
dtype: int64

Missing values:
ID 0
PRG 0
PL 0
PR 0
SK 0
TS 0
M11 0
BD2 0
Age 0
Insurance 0
dtype: int64

This method returns the count of missing values for each column in the dataset. By examining the missing value counts, we can determine which columns have missing values and the extent of the ‘missingness’. This information helps us decide on appropriate strategies for handling missing values, such as imputation or deletion, to ensure the integrity of the data for further analysis.

Feature Encoding

In machine learning models, categorical features need to be encoded into numerical values as most algorithms work with numerical data. Label encoding is one of the techniques used to encode categorical features.

Label encoding converts categorical values into numerical labels, assigning each unique category a unique integer. Applied to input features this can impose a spurious ordinal relationship between categories, but for a binary target it is harmless. In the sepsis data, the target has the categories "negative" and "positive", which label encoding maps to the numerical labels 0 and 1 respectively (scikit-learn's LabelEncoder assigns integers to classes in alphabetical order).

import pandas as pd
from sklearn.preprocessing import LabelEncoder

def encode_target_variable(data, target_variable):
    # Encode the target variable using LabelEncoder
    label_encoder = LabelEncoder()
    encoded_target = label_encoder.fit_transform(data[target_variable])
    target_encoded = pd.DataFrame(encoded_target, columns=[target_variable])

    # Combine the features and the encoded target variable
    data_encoded = pd.concat([data.iloc[:, :-1], target_encoded], axis=1)
    data_encoded.drop('ID', axis=1, inplace=True)

    return data_encoded

# Encode target variable in train data
train_encoded = encode_target_variable(train, 'Sepssis')

The code snippet provided demonstrates the use of LabelEncoder from scikit-learn to encode the target variable. It converts the target variable into encoded labels and creates a new DataFrame with the encoded target variable. Finally, the encoded target variable is combined with the original features, excluding the 'ID' column, to create the encoded dataset. Label encoding is a simple and effective way to prepare a categorical target for machine learning models.
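Because LabelEncoder assigns integers alphabetically, it is worth confirming which label maps to which integer. A quick, illustrative check:

from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
le.fit(train['Sepssis'])
# Prints each original category with its integer label, e.g. {'Negative': 0, 'Positive': 1}
print(dict(zip(le.classes_, le.transform(le.classes_))))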

Data Splitting

The process of splitting the dataset into training and testing subsets involves dividing the data into two separate sets to evaluate the performance of a machine learning model.

from sklearn.model_selection import train_test_split

def split_data(X, y, test_size, random_state=42, stratify=None):
    # Split the data into train and validation sets
    X_train, X_eval, y_train, y_eval = train_test_split(
        X, y, test_size=test_size, random_state=random_state, stratify=stratify
    )
    return X_train, X_eval, y_train, y_eval

# Split the data into train and validation sets for both X and y
X_train, X_eval, y_train, y_eval = split_data(
    train_encoded.iloc[:, :-1], train_encoded.iloc[:, -1:],
    test_size=0.2, random_state=42, stratify=train_encoded.iloc[:, -1:]
)

In the provided code snippet, the split_data function takes in the features (X) and the target variable (y), along with the desired test size, random state, and optional stratification parameter. It utilizes the train_test_split function from scikit-learn to split the data into training and validation sets.

The split is performed with consideration for maintaining the class distribution in the split, indicated by the stratify parameter. This ensures that the proportion of different classes remains balanced in both the training and validation subsets.

By splitting the data, we can train the model on the training set and assess its performance on the unseen validation set. This helps in estimating how well the model will generalize to new, unseen data.
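A quick way to confirm that stratification worked is to compare the class proportions in the two splits; with stratify set, they should be nearly identical:

# Class proportions in the training and validation splits
print(y_train['Sepssis'].value_counts(normalize=True))
print(y_eval['Sepssis'].value_counts(normalize=True))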

Imputing Missing Values:

Imputing missing values is an important step in data preprocessing to handle incomplete data. In the provided code snippet, the SimpleImputer class from scikit-learn is used for imputation.

from sklearn.impute import SimpleImputer

# Creating imputer variables
numerical_imputer = SimpleImputer(strategy="mean")

numerical_imputer.fit(X_train)

X_train_imputed = numerical_imputer.transform(X_train)
X_eval_imputed = numerical_imputer.transform(X_eval)
# The raw test set still contains the 'ID' column, which was dropped from the
# training features, so drop it here as well before imputing
X_test_imputed = numerical_imputer.transform(test.drop('ID', axis=1))

The selected imputation technique is mean strategy, indicated by strategy="mean". This strategy replaces missing values with the mean value of the respective feature/column.

The fit method of the SimpleImputer is called on the training set (X_train) to calculate the mean value of each feature. This step allows the imputer to learn the mean values for each feature.

Then, the transform method is used to replace the missing values in the training set (X_train_imputed), validation set (X_eval_imputed), and test set (X_test_imputed) with the learned mean values.

The rationale behind using the mean strategy is that it provides a simple and effective approach for imputing missing numerical values. By using the mean value of a feature, we can preserve the overall distribution and central tendency of the data. Imputing missing values ensures that the data is complete and ready for further analysis or model training, as many machine learning algorithms cannot handle missing values in the input data. Note that the earlier check found no missing values in this dataset, so the imputer changes nothing here; it serves as a safeguard for future data that may arrive incomplete.

Feature Scaling

Feature scaling is an essential step in preparing data for machine learning models. It involves transforming the numerical features to a common scale, ensuring that no particular feature dominates the learning process due to its larger values. In the code snippet below, the StandardScaler from scikit-learn is used for feature scaling.

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
scaler.fit(X_train_imputed)

columns = ['PRG', 'PL', 'PR', 'SK', 'TS', 'M11', 'BD2', 'Age', 'Insurance']

def scale_data(data, scaler, columns):
    scaled_data = scaler.transform(data)
    scaled_df = pd.DataFrame(scaled_data, columns=columns)
    return scaled_df

# Scale the data
X_train_df = scale_data(X_train_imputed, scaler, columns)
X_eval_df = scale_data(X_eval_imputed, scaler, columns)
X_test = scale_data(X_test_imputed, scaler, columns)

The StandardScaler scales the features by subtracting the mean and dividing by the standard deviation, resulting in a distribution with a mean of 0 and a standard deviation of 1.

The fit method of the StandardScaler is called on the training set (X_train_imputed) to calculate the mean and standard deviation for each feature. This step allows the scaler to learn the scaling parameters.

Then, the transform method is used to scale the features in the training set (X_train_df), validation set (X_eval_df), and test set (X_test) based on the learned parameters.

By scaling the data, we ensure that all features contribute equally to the learning process. This is particularly important for algorithms that rely on distance calculations or gradient-based optimization, such as k-nearest neighbors, support vector machines, and neural networks.

In the provided code, the specified columns (['PRG', 'PL', 'PR', 'SK', 'TS', 'M11', 'BD2', 'Age', 'Insurance']) are scaled using the StandardScaler. The resulting scaled data is returned as a DataFrame with the same column names.
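As a design note, the imputer and scaler can also be chained in a scikit-learn Pipeline. This is not how the original notebook is structured, but it enforces the same fit-on-train, transform-everywhere discipline automatically and avoids the kind of preprocessed-versus-raw feature mismatch discussed later in this article:

from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler

preprocess = Pipeline(steps=[
    ('imputer', SimpleImputer(strategy='mean')),
    ('scaler', StandardScaler()),
])

# Fit on training data only, then apply the same transforms everywhere
X_train_pre = preprocess.fit_transform(X_train)
X_eval_pre = preprocess.transform(X_eval)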

Machine Learning Modeling

In this project, several classification models were evaluated: Logistic Regression, Decision Tree, Random Forest, XGBoost, Naive Bayes, and a Stochastic Gradient Descent (SGD) classifier. The models were compared on their F1 score and Area Under the Curve (AUC) score. The general process for implementing classification models can be summarized as follows:

  1. Instantiate the classifier: Create an instance of the desired classification algorithm, such as Logistic Regression, Decision Tree, Naive Bayes, SVM, or Random Forest.
  2. Fit the model: Train the classifier on the training data by calling the fit method and providing the feature matrix (X_train) and the corresponding target variable (y_train).
  3. Make predictions: Use the trained model to make predictions on the evaluation or test data by calling the predict method and passing the feature matrix (X_eval or X_test).
  4. Evaluate performance: Assess the performance of the model using appropriate evaluation metrics. For classification tasks, common metrics include F1 score, accuracy, precision, recall, and AUC score. These metrics can be calculated using functions such as f1_score, accuracy_score, precision_score, recall_score, and roc_auc_score from the scikit-learn library.
  5. Optionally, analyze the model’s predictions: You can further analyze the model’s predictions by examining metrics like the ROC curve, confusion matrix, or feature importance, depending on the specific requirements of your project.

By following this general process, you can implement and evaluate different classification algorithms, comparing their performance based on the chosen evaluation metrics.
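The helper functions called below (logistic_regression_model, evaluate_decision_tree, calculate_f1_score, and so on) are not reproduced in this article. A minimal sketch of the five steps above, using logistic regression as the example, might look like this:

from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score, roc_curve

# 1. Instantiate and 2. fit the classifier
model = LogisticRegression(random_state=42)
model.fit(X_train_df, y_train.values.ravel())

# 3. Make predictions on the evaluation set
preds = model.predict(X_eval_df)
probs = model.predict_proba(X_eval_df)[:, 1]

# 4. Evaluate with imbalance-aware metrics
print("F1 Score:", f1_score(y_eval.values.ravel(), preds))
print("AUC Score:", roc_auc_score(y_eval.values.ravel(), probs))

# 5. Optionally inspect the ROC curve points
fpr, tpr, thresholds = roc_curve(y_eval.values.ravel(), probs)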

Logistic Regression

# Call the function and get the outputs
lr_model, lr_preds, lr_f1_score, fpr, tpr, thresholds, lr_auc_score = logistic_regression_model(X_train_df, y_train, X_eval_df, y_eval)

print("F1 Score:", lr_f1_score)
print("AUC Score:", lr_auc_score)
F1 Score: 0.6265060240963854
AUC Score: 0.7134

Checking for overfitting

# Calculate F1 scores for training and evaluation sets
lr_train_f1_score = calculate_f1_score(lr_model, X_train_df, y_train)
lr_eval_f1_score = calculate_f1_score(lr_model, X_eval_df, y_eval)

# Print the F1 scores
print("F1 Score on Training Set:", lr_train_f1_score)
print("F1 Score on Evaluation Set:", lr_eval_f1_score)
F1 Score on Training Set: 0.6486486486486487
F1 Score on Evaluation Set: 0.6265060240963854

The close F1 scores on the training and evaluation sets indicate that the logistic regression model generalizes reasonably well, with little sign of overfitting.
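The calculate_f1_score helper used for these overfitting checks is also not shown in the article; a version consistent with how it is called would be:

import numpy as np
from sklearn.metrics import f1_score

def calculate_f1_score(model, X, y):
    # F1 of the model's predictions against the true labels on this split
    return f1_score(np.ravel(y), model.predict(X))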

Decision Tree Model

# Call the function with your train and evaluation data
dt_model, dt_pred, dt_f1_score, dt_auc_score = evaluate_decision_tree(X_train_df, y_train, X_eval_df, y_eval)

print("F1 Score:", dt_f1_score)
print("AUC Score:", dt_auc_score)
F1 Score: 0.6024096385542169
AUC Score: 0.6950549450549451

Checking for overfitting

# Calculate F1 scores for training and evaluation sets
dt_train_f1_score = calculate_f1_score(dt_model, X_train_df, y_train)
dt_eval_f1_score = calculate_f1_score(dt_model, X_eval_df, y_eval)

# Print the F1 scores
print("F1 Score on Training Set:", dt_train_f1_score)
print("F1 Score on Evaluation Set:", dt_eval_f1_score)
F1 Score on Training Set: 1.0
F1 Score on Evaluation Set: 0.6024096385542169

Based on these results, it appears that the model is overfitting the training data, resulting in poor generalization to the evaluation set.
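A common remedy, not applied in the run above, is to constrain the tree so it cannot memorize the training set. The values below are illustrative starting points rather than tuned settings:

from sklearn.tree import DecisionTreeClassifier

# Limiting depth and requiring a minimum number of samples per leaf
# trades some training-set fit for better generalization
dt_regularized = DecisionTreeClassifier(
    max_depth=4,
    min_samples_leaf=20,
    random_state=42,
)
dt_regularized.fit(X_train_df, y_train.values.ravel())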

Random Forest Model

rf_model, rf_preds, rf_f1_score, fpr, tpr, thresholds, rf_auc_score = random_forest_model(X_train, y_train, X_eval, y_eval)

print("F1 Score:", rf_f1_score)
print("AUC Score:", rf_auc_score)
F1 Score: 0.5783132530120483
AUC Score: 0.6767399267399268

Checking for overfitting

# Calculate F1 scores for training and evaluation sets
rf_train_f1_score = calculate_f1_score(rf_model, X_train_df, y_train)
rf_eval_f1_score = calculate_f1_score(rf_model, X_eval_df, y_eval)

# Print the F1 scores
print("F1 Score on Training Set based on Random Forest:", rf_train_f1_score)
print("F1 Score on Evaluation Set based on Random Forest:", rf_eval_f1_score)
F1 Score on Training Set based on Random Forest: 0.0
F1 Score on Evaluation Set based on Random Forest: 0.0

These 0.0 scores are a red flag rather than a genuine measure of fit. The model above was trained on the raw X_train and X_eval, while the overfitting check evaluates it on the scaled X_train_df and X_eval_df; with mismatched inputs the model collapses to predicting essentially one class, driving the F1 score to zero. The check should be rerun with the same preprocessed frames used for training.

XGBoost Classifier

xgb_model, xgb_preds, xgb_f1_score, fpr, tpr, thresholds, xgb_auc_score = xgboost_model(X_train_df, y_train, X_eval_df, y_eval)

# Print the F1 score and AUC score
print("F1 Score on Evaluation Set based on XGBoost:", xgb_f1_score)
print("AUC Score on Evaluation Set based on XGBoost:", xgb_auc_score)
F1 Score on Evaluation Set based on XGBoost: 0.5365853658536585
AUC Score on Evaluation Set based on XGBoost: 0.6465201465201464

Checking for overfitting

# Calculate F1 scores for training and evaluation sets
xgb_train_f1_score = calculate_f1_score(xgb_model, X_train_df, y_train)
xgb_eval_f1_score = calculate_f1_score(xgb_model, X_eval_df, y_eval)

# Print the F1 scores
print("F1 Score on Training Set based on XGboost:", xgb_train_f1_score)
print("F1 Score on Evaluation Set based on XGboost:", xgb_eval_f1_score)

Although the training-set score is not reproduced here, the gap between the training and evaluation F1 scores indicated that the XGBoost model is overfitting the training data, resulting in poor generalization to the evaluation set.
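If the overfitting is confirmed, XGBoost exposes several levers for it. The settings below are illustrative starting points, not tuned values:

from xgboost import XGBClassifier

xgb_regularized = XGBClassifier(
    n_estimators=200,
    max_depth=3,         # shallower trees
    learning_rate=0.05,  # smaller, more conservative boosting steps
    subsample=0.8,       # row subsampling adds regularization
    reg_lambda=1.0,      # L2 penalty on leaf weights
    random_state=42,
)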

Naive Bayes model

nb_model, nb_preds, nb_f1_score, fpr, tpr, thresholds, nb_auc_score = naive_bayes_model(X_train_df, y_train, X_eval_df, y_eval)

# Print the F1 score and AUC score
print("F1 Score on Evaluation Set based on Naive Bayes:", nb_f1_score)
print("AUC Score on Evaluation Set based on Naive Bayes:", nb_auc_score)
F1 Score on Evaluation Set based on Naive Bayes: 0.574712643678161
AUC Score on Evaluation Set based on Naive Bayes: 0.6694139194139194

Checking for overfitting

# Calculate F1 scores for training and evaluation sets
nb_train_f1_score = calculate_f1_score(nb_model, X_train_df, y_train)
nb_eval_f1_score = calculate_f1_score(nb_model, X_eval_df, y_eval)

# Print the F1 scores
print("F1 Score on Training Set based on Naive Bayes:", nb_train_f1_score)
print("F1 Score on Evaluation Set based on Naive Bayes:", nb_eval_f1_score)
F1 Score on Training Set based on Naive Bayes: 0.6730769230769231
F1 Score on Evaluation Set based on Naive Bayes: 0.574712643678161

Based on the provided F1 scores, the Naive Bayes model shows some level of overfitting.

Stochastic Gradient Descent

sgd_model, sgd_preds, sgd_f1_score, fpr, tpr, thresholds, sgd_auc_score = sgd_model_func(X_train_df, y_train, X_eval_df, y_eval)

# Print the F1 score and AUC score
print("F1 Score on Evaluation Set based on SGD:", sgd_f1_score)
print("AUC Score on Evaluation Set based on SGD:", sgd_auc_score)
F1 Score on Evaluation Set based on SGD: 0.4782608695652174
AUC Score on Evaluation Set based on SGD: 0.5824175824175823

Checking for overfitting

sgd_train_f1_score = calculate_f1_score(sgd_model, X_train_df, y_train)
sgd_eval_f1_score = calculate_f1_score(sgd_model, X_eval_df, y_eval)

# Print the F1 scores
print("F1 Score on Training Set based on SGDClassifier:", sgd_train_f1_score)
print("F1 Score on Evaluation Set based on SGDClassifier:", sgd_eval_f1_score)
F1 Score on Training Set based on SGDClassifier: 0.5485714285714285
F1 Score on Evaluation Set based on SGDClassifier: 0.4782608695652174

Based on the F1 scores above, the SGDClassifier model shows signs of underfitting: it scores poorly on the training set itself, not just on the evaluation set.
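For an underfitting SGDClassifier, typical first steps are allowing more passes over the data, tuning the regularization strength, and choosing a probabilistic loss. The values below are illustrative:

from sklearn.linear_model import SGDClassifier

sgd_tuned = SGDClassifier(
    loss='log_loss',   # logistic loss ('log' in older scikit-learn versions)
    alpha=1e-4,        # regularization strength worth tuning
    max_iter=2000,     # allow more epochs before stopping
    random_state=42,
)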

Model Comparison

Based on the provided F1 scores and AUC scores, here is an analysis of the different models:

Logistic Regression

  • F1 Score: 0.6265
  • AUC Score: 0.7134

The logistic regression model shows the highest F1 score and AUC score among the models evaluated. It demonstrates decent performance in terms of both precision and recall, as well as a good ability to distinguish between positive and negative instances.

Decision Tree

  • F1 Score: 0.6024
  • AUC Score: 0.6951

The decision tree model achieves a reasonably high F1 score and AUC score. It indicates that the model captures some of the underlying patterns in the data and is effective in making predictions. However, it may not be as strong as logistic regression in terms of overall performance.

Random Forest

  • F1 Score: 0.5783
  • AUC Score: 0.6767

The random forest model shows a slightly lower F1 score and AUC score compared to logistic regression and decision tree. Random forest is an ensemble model that combines multiple decision trees, and its performance is influenced by the number of trees and other hyperparameters. Adjusting these parameters could potentially improve the model’s performance.

Naive Bayes

  • F1 Score: 0.5747
  • AUC Score: 0.6694

The Naive Bayes model achieves a moderate F1 score and AUC score. Naive Bayes is a probabilistic classifier based on Bayes’ theorem and assumes independence among features. It may not capture complex relationships as effectively as other models, but it can still provide useful predictions in certain scenarios.

XGBoost

  • F1 Score: 0.5366
  • AUC Score: 0.6465

The XGBoost model shows a relatively lower F1 score and AUC score compared to other models. XGBoost is a powerful gradient boosting algorithm, but its performance depends heavily on hyperparameter tuning and feature engineering. Adjusting these aspects may help improve the model’s effectiveness.

SGD Classifier

  • F1 Score: 0.4783
  • AUC Score: 0.5824

The SGD classifier exhibits the lowest F1 score and AUC score among all the models, suggesting it is underperforming and struggling to capture the patterns in the data. Revisiting the model configuration, the features, or other algorithms could improve performance.

Based on these results, the logistic regression model performs relatively well compared to other models.

Best Model Selection

The selection of the best model for sepsis prediction depends on various factors such as the specific goals of the prediction task, the importance of different evaluation metrics, interpretability requirements, computational efficiency, and the available resources.

Based on the provided F1 scores and AUC scores, the logistic regression model stands out as the top-performing model among the options listed. It demonstrates a relatively higher F1 score (0.6265) and AUC score (0.7134) compared to the other models.

The F1 score is the harmonic mean of precision and recall, which indicates the balance between correctly identifying positive instances (sepsis cases) and minimizing false positives and false negatives. The logistic regression model achieves a decent F1 score, implying a good trade-off between precision and recall for sepsis prediction.

The AUC score measures the model’s ability to distinguish between positive and negative instances. With an AUC score of 0.7134, the logistic regression model shows a relatively good discriminative ability in identifying sepsis cases.

Furthermore, logistic regression is a well-established and interpretable model that provides insights into the importance of features and the direction of their influence on predictions. This interpretability can be valuable in the medical domain, where understanding the underlying factors contributing to sepsis can aid in decision-making and patient care.
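Assuming the fitted model object exposes scikit-learn's standard coef_ attribute, the direction and relative size of each feature's influence can be read directly from the coefficients on the scaled features:

import pandas as pd

# Positive coefficients push predictions toward the positive (sepsis) class
coef_series = pd.Series(lr_model.coef_[0], index=columns).sort_values()
print(coef_series)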

Considering the overall performance, interpretability, and the importance of balancing precision and recall in sepsis prediction, the logistic regression model appears to be the most suitable choice based on the provided evaluation metrics.

Conclusion

The process of predicting sepsis occurrence using machine learning classification models involved several key steps to achieve accurate predictions. These steps include data preprocessing, model evaluation, and hyperparameter tuning.

Data preprocessing played a crucial role in preparing the dataset for analysis. This involved checking for duplicates, handling missing values, and transforming variables as necessary. Techniques such as mean imputation, feature scaling, and label encoding were applied to ensure that the data was in a suitable format for the machine learning models.

Model evaluation was an essential step in assessing the performance of the different classification models. Evaluation metrics such as F1 score and AUC score were used to measure the models’ precision, recall, and discriminative ability. By comparing the performance of various models, we gained insights into their strengths and weaknesses in predicting sepsis occurrence.

Hyperparameter tuning is another important aspect of the process. It involves adjusting the settings of the machine learning algorithms to optimize their performance, and it is the natural next step for addressing the overfitting and underfitting observed above: fine-tuning hyperparameters helps find the configuration that maximizes each model's predictive power for sepsis detection.

Overall, machine learning models demonstrated great potential in aiding healthcare professionals in sepsis detection and intervention. Accurate prediction of sepsis occurrence can help healthcare providers identify patients at risk in a timely manner, enabling early intervention and potentially saving lives. By leveraging machine learning techniques, healthcare professionals can benefit from advanced data analysis capabilities, allowing for more accurate and efficient sepsis detection and treatment.

Check out my GitHub for more information.

Read the previous article here:

Uncovering Sepsis Occurrence Secrets through Exploratory Data Analysis | by Alidu Abubakari | Jun, 2023 | Medium.
