Cardiopulmonary exercise testing (CPET) is a valuable tool for evaluating an individual’s capacity to respond to increasing exercise intensity. During a typical test the following variables are measured: oxygen uptake (VO2), exhaled carbon dioxide (VCO2), ventilation (VE), respiratory frequency (Rf), end-tidal oxygen (PetO2) and carbon dioxide (PetCO2), and heart rate (HR).
The pattern of variation of the CPET variables during a test can be used to identify important physiological markers such as the first and second exercise thresholds, although the process of correctly identifying these breakpoints can be complex and lead to disagreement among experts.
Numerous automatic and visual methods have been developed to aid in exercise threshold identification, but the accuracy of these methods is difficult to evaluate due to the unknown true location of the thresholds. Simulated data can provide a solution to this issue, typically involving the generation of continuous profiles of CPET variables using multi-linear models, followed by the addition of Gaussian noise to simulate the act of breathing in a face mask. However, current simulated data do not closely resemble real data.
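To make the classic approach concrete, here is a minimal sketch of that simulation recipe: a piecewise-linear VO2 profile with breakpoints at known threshold times, plus Gaussian noise. All slopes, breakpoints, and noise levels below are illustrative choices, not values used by any published simulator.

```python
import numpy as np

# Piecewise-linear VO2 ramp with two breakpoints (the "true" thresholds),
# plus Gaussian noise to mimic breath-by-breath variability.
rng = np.random.default_rng(0)
t = np.arange(600)                           # 10-minute test, 1-s samples
vt1, vt2 = 240, 420                          # assumed threshold times (s)
slope = np.where(t < vt1, 3.0, np.where(t < vt2, 4.0, 5.0))
vo2_true = 500.0 + np.cumsum(slope)          # resting VO2 + ramp (ml/min)
vo2_noisy = vo2_true + rng.normal(0.0, 60.0, size=t.size)
```

The appeal of this recipe is that the threshold locations are known exactly, so any detection method can be scored against them; its weakness, as noted above, is that the result does not closely resemble real breathing data.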
Recently, a generative model capable of producing an unlimited number of realistic-but-fake CPETs was developed, offering a potential solution to the limitations of simulated data. In a previous story, I talked about how deep learning is changing the game when it comes to automatically interpreting CPET. In this story, I’m going to dive deeper into the problem of creating fake-but-realistic CPET data using the Python package pyoxynet.
To provide further detail, pyoxynet utilises a conditional generative adversarial neural network (cGAN) to produce synthetic data that corresponds to a specific label (hence its conditional nature). The data generated pertains to a CPET (which includes VO2, VCO2, and the other parameters listed previously), while the label represents the reference exercise intensity range. During a CPET, exercise thresholds define the boundaries between different intensity domains, which means that it’s not sufficient to generate just any CPET data; instead, you must generate data that aligns with a custom-defined exercise intensity range.
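To illustrate what "conditional" means in practice, here is a tiny numpy sketch of how a cGAN generator input is typically assembled: a latent noise vector concatenated with a label encoding (here, a one-hot of the exercise-intensity domain). This mirrors only the general cGAN idea; pyoxynet's actual architecture and label encoding may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
n_domains = 3        # e.g. moderate / heavy / severe intensity (illustrative)
latent_dim = 16      # size of the latent noise vector (illustrative)

def generator_input(domain, batch=4):
    """Build a conditional generator input: noise plus one-hot label."""
    z = rng.normal(size=(batch, latent_dim))   # latent noise
    label = np.zeros((batch, n_domains))
    label[:, domain] = 1.0                     # one-hot condition
    return np.concatenate([z, label], axis=1)

x = generator_input(domain=2)
```

The one-hot block is what ties each generated sample to a requested intensity range: the generator learns to produce data consistent with whatever label it is fed.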
In a quick example, we are going to generate a fake CPET. The first step is installing the pyoxynet Python package. There are several ways to do that; the simplest is the following terminal command. Please note that pyoxynet requires Python 3.8.
pip install pyoxynet
Now, in the actual Python script, we import the package, load the trained generator, and generate a test:
from pyoxynet import *
generator = load_tf_generator()              # load the trained cGAN generator
df_gen, dict_gen = generate_CPET(generator)  # generate one fake CPET
What you get printed out is something like:
Data generated for a FEMALE individual with MEDIUM fitness capacity.
Weight: 75 kg
Height: 1.87 m
Age: 20 y
Noise factor: 2.0
So you can read the most important characteristics of the CPET you just generated. The function generate_CPET returns a Pandas data frame and a dictionary. The data frame contains the CPET data interpolated at one second, and it is better suited for saving the data to a csv file or for further processing with Pandas; the dictionary contains the data breath-by-breath, and it is better suited for dumping the data to json directly. You will notice that the thresholds (abbreviated VT1 and VT2) are given both in terms of time elapsed from the beginning of the test and in terms of VO2 (mlO2/min). In the dictionary you can also read other info, such as:
Most of these are self-explanatory, but particular emphasis should be placed on VT1 and VT2 (the times from the beginning of the test at which the exercise thresholds occur) and their relationship with the maximal oxygen consumption (VO2max). Please notice that the pyoxynet function also allows you to define these thresholds, the duration of the test, or the fitness level of the person you are generating the CPET for.
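As noted above, the data frame maps naturally to csv and the dictionary to json. A minimal sketch of the json side, using a toy stand-in dictionary (only VO2VT1 and VO2VT2 appear in the code shown in this story; the other keys are assumed for illustration):

```python
import json

# Toy stand-in for the dictionary returned by generate_CPET;
# keys other than VO2VT1/VO2VT2 are illustrative assumptions.
dict_gen = {"VT1": 320, "VT2": 540, "VO2VT1": 1895, "VO2VT2": 2634}

# The breath-by-breath dictionary dumps straight to json...
with open("cpet.json", "w") as f:
    json.dump(dict_gen, f, indent=2)

# ...while the 1-s data frame would go through df_gen.to_csv("cpet.csv").
with open("cpet.json") as f:
    loaded = json.load(f)
```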
Please pay attention: garbage in ⇒ garbage out. So if you ask for someone with unreasonable exercise thresholds, the function won’t provide any reasonable/usable result. If you are not sure where to place the thresholds, but you just want to define the level of fitness of the person, then you can just type:
# fitness_group (int): fitness level: low (1), medium (2), high (3).
# Defaults to random.
df_gen, dict_gen = generate_CPET(generator, fitness_group=1, noise_factor=None)
Now, you can easily plot the data and check if they make sense to you. Let’s start from the VCO2 vs VO2 plot, which is commonly used by experts to detect VT1 (there should be a breakpoint in the VCO2 vs VO2 graph, can you see it?):
import matplotlib.pyplot as plt
# Scatter VCO2 vs VO2 (column names assumed from the generated data frame)
plt.plot(df_gen['VO2'], df_gen['VCO2'], 'k.')
plt.vlines(int(dict_gen['VO2VT1']), 500, 3000)
plt.vlines(int(dict_gen['VO2VT2']), 1000, 4000, linestyles='dashed')
plt.show()
The vertical lines are placed for VT1 and VT2 in terms of VO2 (oxygen uptake). Then you can plot VE vs VO2, which is commonly used by experts to detect VT2 (there should be a breakpoint in the VE vs VO2 graph, demarcating a disproportionate increase of VE vs VO2, can you see it?):
The vertical lines are used for the exercise thresholds (VT2 is the dashed line). Other graphs such as PetO2 vs VO2 and PetCO2 vs VO2 can help you to identify VT1 and VT2. Check them out!
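If eyeballing these breakpoints feels hard, a toy numerical search shows the underlying idea: fit a straight line to each side of every candidate split of a VE-vs-VO2 curve, and keep the split with the smallest total squared error. This is a sketch on synthetic data, not the method pyoxynet (or any expert) uses on real tests:

```python
import numpy as np

# Synthetic VE-vs-VO2 curve with a known slope change at 2600 ml/min.
rng = np.random.default_rng(2)
vo2 = np.linspace(1000, 4000, 200)          # ml/min
true_break = 2600.0
ve = np.where(vo2 < true_break,
              0.02 * vo2,
              0.02 * true_break + 0.05 * (vo2 - true_break))
ve += rng.normal(0.0, 1.0, vo2.size)        # measurement noise

def sse(x, y):
    """Sum of squared errors of the best straight-line fit."""
    a, b = np.polyfit(x, y, 1)
    return float(np.sum((y - (a * x + b)) ** 2))

# Two-segment fit: pick the split minimising the combined error.
splits = range(10, len(vo2) - 10)
best = min(splits, key=lambda i: sse(vo2[:i], ve[:i]) + sse(vo2[i:], ve[i:]))
vt2_estimate = vo2[best]
```

The disproportionate increase of VE vs VO2 mentioned above is exactly this slope change, which is why the two-segment fit locks onto it.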
Congrats! You just generated a fake-but-realistic example of a CPET conducted on a 20-year-old woman, 1.87 m tall, weighing 75 kg, of medium aerobic fitness, with a VO2max of 3010 mlO2/min (40.1 mlO2/min/kg), VO2 at VT1 of 1895 mlO2/min, and VO2 at VT2 of 2634 mlO2/min.
Considerations on ethics
When it comes to analysing CPET data, deep learning algorithms can change the way we make decisions. However, this line of research also amplifies ethical concerns. In the past, deep learning algorithms were proposed solely as decision-making tools; now, with the development of cGAN technology, they have become generative, a shift that has been hotly debated in the field of natural language processing. The potential for harm is greater than ever: synthetic data could be generated to skew outcomes in clinical vital-sign tests or to cover up faulty equipment.
The first goal of this research was to solve the problem of disagreement in the detection and interpretation of physiological markers from cardiopulmonary test data, a disagreement that could have serious implications for an individual's health. However, it's clear that these ethical considerations need to be addressed sooner rather than later. What do you think?
- Pyoxynet is part of the Oxynet project, which aims to develop machine learning models for the automatic interpretation of CPET data. You can find additional information in the project repository.
- Principles of Exercise Testing and Interpretation: Including Pathophysiology and Clinical Applications. A must-have book. An infinite source of wisdom.
- Keir et al. 2022 is a must-read for anyone involved in CPET. The paper also presents the Exercise Thresholds App, a tool to learn, practice, and analyse exercise thresholds.
- 🧙‍♂️ Follow the wizard Felipe @felipe_mattioni, who designed the Exercise Thresholds App.
- Published survey for the use of CPET in clinical practice.
- A huge thank you to Jason of Machine Learning Mastery for writing the tutorial on cGAN.
- A complete contribution published in Sensors can give you an idea of the different patterns of the CPET variables.
- Wiki page on GAN and on Ian Goodfellow, the father of GANs.