Your Future Echography Could Look Like This Thanks to Computer Vision and AI

Efficient Fetus Anatomy Detection in Ultrasound, a Glimpse into the Future

Sandra G
ILLUMINATION
12 min read · Apr 16, 2024


The rapid advancement of technology is no secret, as its uses extend far beyond the realm of computers and increasingly impact various aspects of our lives. Now, this technology is poised to make an impact not only during our lifetimes but even before birth.

Computer vision, a technique powered by artificial intelligence and machine learning, enables the digital interpretation of real-world scenes. This technology can now be used during pregnancy to enhance ultrasound procedures, making them more efficient and rapid. It can aid in the detection of fetal anomalies and diseases, and even predict movements or the development of abnormalities.

This application holds significant benefits: it provides healthcare professionals with a supportive system, enabling them to offer better advice to their patients and make more informed decisions. It may also help identify overlooked issues or streamline waiting lists. With this intelligent technological support, professionals with less specialized training could conduct these ultrasounds, thereby expediting processes; in cases where abnormalities are detected, they could then refer the case to a more experienced professional.

Will we eventually reach a point where these ultrasound procedures can be conducted at home by the mother whenever she wants? Could individuals then visit a doctor only when artificial intelligence computer vision recommends it upon detecting something unusual in the fetus? While we’re currently far from this hypothetical scenario, it could potentially become a reality in the future.

Detection of fetal anatomy in standard ultrasonographic sections based on real-time target detection network

https://www.mdpi.com/biomimetics/biomimetics-08-00519/article_deploy/html/images/biomimetics-08-00519-g006.png

In the process of prenatal ultrasound diagnosis, the accurate recognition of fetal facial ultrasound standard plane is crucial for facial malformation detection and disease screening. — Gynecology and Obstetrics Research

Prenatal ultrasound plays a crucial role in pregnancy by assessing fetal growth, detecting defects, and aiding diagnosis. It facilitates quick intervention in disease progression. However, accuracy can be influenced by factors like fetal mobility and abdominal wall thickness.

On the one hand, Deep Learning (DL) stands out in medical imaging due to its ability to extract features from vast data efficiently, and it is exceptionally effective at image classification, target detection, and image segmentation. So, it is no surprise that it has become strongly popular among medical technology advancements. On the other hand, Deep Convolutional Neural Networks (DCNNs) excel in image analysis, including ultrasound images; thanks to their feature representation capabilities, they can distinguish similar ultrasonic views without any manually designed features.

Plane detection accuracy has been improved by combining a compound neural network with multi-task learning, which addresses domain differences and dataset limitations and achieves high accuracy in fetal brain standard plane recognition. Hence, numerous studies have indicated that combining AI with prenatal ultrasound can significantly:

  • Improve the efficacy and accuracy of plane recognition.
  • Reduce the variance between different operators.
  • Confirm the consistency and repeatability of plane adoption.

However, limitations exist, among them, we can find:

  • Most of the studies only include healthy cases.
  • The lack of pathological samples hampers model development and clinical applications.

Large-scale, diversified, and high-quality clinicopathological databases must be built and incorporated into the future training and verification of AI algorithms.

Head Circumference Measurement (HC)

Photo by Kelly Sikkema on Unsplash

For instance, head circumference (HC) is vital for assessing fetal growth and development, estimating gestational age (GA) and weight, and identifying abnormalities. However, manual HC measurement is time-consuming and prone to errors, and ultrasound images also suffer from low contrast and artifacts. Consequently, even highly experienced sonographers find manual measurement of fetal HC challenging.

Automated quantification, utilizing transformers and CNNs, offers efficiency and accuracy, promising clinical applications without human-computer interaction.
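To make the measurement concrete: HC is conventionally reported as the perimeter of an ellipse fitted to the skull contour. Assuming a segmentation step has already produced the ellipse's semi-axes and the scan's pixel spacing (both hypothetical inputs here), a minimal sketch could look like this:

```python
import math

def ellipse_circumference(a: float, b: float) -> float:
    """Perimeter of an ellipse with semi-axes a and b
    (Ramanujan's first approximation)."""
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))

def head_circumference_mm(a_px: float, b_px: float, mm_per_px: float) -> float:
    """HC in millimetres from the semi-axes (in pixels) of an ellipse
    fitted to the skull contour, given the probe's pixel spacing."""
    return ellipse_circumference(a_px, b_px) * mm_per_px

# Example: a 60 x 48 px fitted ellipse at 0.5 mm/px
print(round(head_circumference_mm(60, 48, 0.5), 1))  # ≈170 mm
```

In a real pipeline, the semi-axes would come from an ellipse fit (e.g. least squares) on the segmentation mask produced by the network.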

Emerging studies have attempted the automated measurement of other biometric parameters such as fetal biparietal diameter (BPD), cerebellar transverse diameter, and occipital frontal diameter.

In addition to two-dimensional (2D) ultrasound popular in clinical practice, three-dimensional (3D) ultrasound has also been adopted to present cubic anatomical structures, providing richer spatial information and quantitative biometric parameters in combination with the hybrid attention scheme (HAS) for the whole fetal head segmentation that are more representative and comprehensive — MDPI research

So, the combination of conventional HC measurements in ultrasound with AI:

  • Reduces examination time
  • Reduces inter-clinician variability
  • Increases diagnostic accuracy

The current direction is to incorporate more and better-quality datasets and design enhanced network structures to improve performance.

For instance, smart plane software can automatically measure HC and BPD in 3D ultrasound with good reproducibility, which has been put into clinical use.

Fetal Abdominal Circumference (AC)

AC serves as a key parameter for estimating fetal weight, holding significant clinical relevance in evaluating fetal growth and identifying potential issues like intrauterine growth restriction. Improving the accuracy of AC measurement is critical for reducing fetal morbidity and mortality associated with these conditions.

Manually locating the standard abdominal plane presents challenges due to factors like:

  • Fetal posture variability
  • Oligohydramnios
  • Variations in pregnant women’s abdominal wall thickness

Addressing this need, researchers have explored automated image segmentation techniques for AC measurement.

As we saw previously, Convolutional Neural Networks (CNNs) have shown promise in medical image classification. However, challenges arise when there’s insufficient amniotic fluid, affecting algorithm accuracy.

The response to those challenges could be the combination of multiple CNNs and U-Net for multi-task learning. This approach can accurately identify the fetal abdominal region, leveraging positional information of fetal ribs and spine to mitigate the impact of amniotic fluid deficiency and image artifacts.

In addition, attention mechanisms such as the attention gate (AG), incorporated into network architectures, have enhanced segmentation accuracy.

Studies combining multi-scale feature pyramid networks and U-Net with AG achieved remarkable Dice Similarity Coefficient (DSC) values, indicating high segmentation accuracy. Automated multi-parameter measurements, including AC, HC, BPD, and femur length, correlate strongly with manual methods, eliminating the need for additional user intervention. — MDPI research
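The Dice Similarity Coefficient mentioned above is simple to state: twice the overlap between predicted and ground-truth masks, divided by their combined size. A minimal sketch, representing each binary mask as a set of foreground pixel coordinates:

```python
def dice_coefficient(pred, truth):
    """Dice Similarity Coefficient between two binary masks, given as
    sets of (row, col) foreground pixel coordinates.
    DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    if not pred and not truth:
        return 1.0
    return 2 * len(pred & truth) / (len(pred) + len(truth))

# Two toy 'segmentations' sharing 3 of their 4 pixels each
a = {(0, 0), (0, 1), (1, 0), (1, 1)}
b = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(dice_coefficient(a, b))  # 2*3 / (4+4) = 0.75
```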

Challenges of Abdominal Ultrasound Images:

  • Low contrast
  • Irregular shapes
  • Scanning variability
  • Blurred edges

AI-driven automated AC measurement can:

  • Streamline workflow
  • Reduce operator dependence
  • Handle image artifacts intelligently

For instance, deep neural networks can estimate shadow intensities in ultrasound images, aiding in image pre-processing to filter out low-quality images.
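As a toy illustration of such pre-filtering (not the actual shadow-estimation network, which is a deep model), one could flag frames whose grey-level spread is too low to contain useful anatomy; the threshold below is an arbitrary assumption:

```python
import statistics

def is_low_quality(pixels, min_std=10.0):
    """Toy pre-filter: flag a frame as low quality when its grey-level
    standard deviation (a crude contrast proxy) falls below min_std.
    `pixels` is a flat list of 0-255 intensities."""
    return statistics.pstdev(pixels) < min_std

flat_frame = [80] * 64        # near-uniform: likely shadowed or empty
textured_frame = [0, 255] * 32  # high contrast
print(is_low_quality(flat_frame))      # True
print(is_low_quality(textured_frame))  # False
```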

Deep-Learning Computer Vision Identification of Increased Nuchal Translucency (NT) in the First Trimester of Pregnancy

Photo by 🇸🇮 Janko Ferlič on Unsplash

NT corresponds to the fluid-filled area beneath the fetal neck skin and serves as a crucial indicator in prenatal screening.

  • Thickened NT: can signal potential chromosomal abnormalities and poor pregnancy outcomes, including Down syndrome.

Precise measurement of NT thickness in the standard sagittal plane aids in the early detection of fetal structural abnormalities and genetic defects.

However, acquiring the standard plane and accurately measuring NT poses significant challenges due to factors such as:

  • Low ultrasound image quality
  • Short fetal length
  • Fetal mobility in early gestation.

AI here offers assistance by automatically identifying the neck region in ultrasound images and measuring NT. Tools like SonoNT have been integrated into commercial ultrasound equipment, enabling semi-automatic NT measurement in clinical settings. Fully automated tools are anticipated to further enhance clinician efficiency and examination accuracy.
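To illustrate what an automated NT measurement boils down to, here is a toy sketch: given a hypothetical binary mask of the nuchal fluid band and the scan's pixel spacing, NT thickness can be taken as the longest vertical run of fluid pixels in any column, converted to millimetres. Real tools like SonoNT are far more sophisticated; this only shows the core idea:

```python
def nt_thickness_mm(fluid_mask, mm_per_px):
    """Toy NT measurement: `fluid_mask` is a list of rows (1 = fluid).
    Return the maximum vertical run of fluid pixels in any column,
    converted to millimetres via the pixel spacing."""
    n_cols = len(fluid_mask[0])
    best = 0
    for col in range(n_cols):
        run = longest = 0
        for row in fluid_mask:
            run = run + 1 if row[col] else 0
            longest = max(longest, run)
        best = max(best, longest)
    return best * mm_per_px

mask = [
    [0, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 0],
]
print(round(nt_thickness_mm(mask, 0.1), 2))  # thickest column spans 3 px
```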

The benefits of automatic measurement of fetal biological parameters by AI are clear:

  • Reduces errors between operators.
  • Enhances clinical efficiency and matches the accuracy of expert ultrasonographers.
  • Serves as a valuable tool for inexperienced sonographers in making accurate clinical decisions.
  • Contributes to precision medicine and helps address the global shortage of prenatal ultrasonographers.

Other AI Applications in Fetal Ultrasonic Diseases Diagnosis

Photo by Raymart Arniño on Unsplash

Neonatal Respiratory Diseases and Fetal Lung Maturity Assessment

Lung hypoplasia is one of the main causes of premature mortality and neonatal respiratory morbidity (NRM).

Traditionally, clinicians assess fetal lung maturity (FLM) through biochemical analysis of amniotic fluid via amniocentesis. However, this invasive procedure has limitations, including potential complications and compromised results due to fluid contamination.

As we mentioned a couple of times in this article, recent decades have seen significant advancements in ultrasound technology as a noninvasive and reproducible method for assessing pregnancies and now, FLM too.

  • Conventional Ultrasound: correlates well with FLM by comparing echogenic differences between the fetal lung and adjacent structures like the placenta, intestine, or liver. Despite its potential, limitations such as instrumentation variability, subjective examiner interpretation, and maternal-fetal factors restrict its clinical utility.
  • Texture Feature Analysis for FLM Quantification: this emerges as a promising approach to extract key features directly from ultrasound images and quantify FLM objectively. An Automatic Quantitative Ultrasound Analysis (AQUA) texture extractor can achieve high sensitivity, specificity, and accuracy in FLM prediction.
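To give a flavour of texture-based quantification (AQUA itself uses a much richer, learned feature set), here are a few first-order texture descriptors computed from a grey-level patch; the 8-bin histogram is an arbitrary choice for the sketch:

```python
import math

def texture_features(pixels, bins=8):
    """First-order texture descriptors of a grey-level patch (0-255
    intensities): mean, variance, and Shannon entropy of a coarse
    intensity histogram. Tools like AQUA build on far richer texture
    sets; this only conveys the flavour of such features."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    entropy = -sum((c / n) * math.log2(c / n) for c in hist if c)
    return mean, var, entropy

mean, var, ent = texture_features([10, 10, 200, 200, 90, 160, 30, 240])
print(round(ent, 3))  # higher entropy = more heterogeneous texture
```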

Palacio’s team conducted a multicenter prospective study using “quantusFLM”, predicting neonatal respiratory distress syndrome with high accuracy and specificity. Further applications include:

  • Twin pregnancies
  • Assessment of fetal lung development in maternal gestational diseases.

Role of AI in Fetal Lung Maturity Detection

AI-based technologies offer innovative approaches to detect FLM in fetal ultrasound images, providing new avenues for noninvasive assessment and prediction of neonatal respiratory distress syndrome.

Intracranial malformations: Central Nervous System (CNS) Malformations in Fetal Ultrasound Diagnosis

CNS malformations are prevalent congenital anomalies, with brain abnormalities affecting approximately 1% of fetuses.

Current clinical diagnosis of suspected brain abnormalities relies on invasive procedures like amniocentesis or MRI, each with its limitations, including post-puncture complications and susceptibility to fetal movement.

  • Advantages of Fetal Neurosonography (NSG): offers a noninvasive, radiation-free, real-time, and dynamic imaging alternative for diagnosing CNS disorders. However, manual identification of fetal brain planes by sonographers in clinical practice poses challenges, leading to high false-positive and false-negative rates.
  • AI-assisted ultrasound diagnosis: could be a promising solution to overcome traditional ultrasound examination limitations. Pioneering algorithms for prenatal ultrasound diagnosis of fetal brain abnormalities, utilizing U-Net for cranial region segmentation and VGG-Net for distinguishing normal and abnormal images, reduce false-negative rates. Despite low lesion region localization accuracy, object detection techniques or back-propagated approaches can compensate for this limitation.

Another study by Xie et al. utilized a CNN-based DL model to distinguish normal and abnormal fetal brains with high accuracy. This model visualized lesion sites through heat maps and overlapping images, enhancing clinical examination sensitivity. Additionally, they developed an AI-assisted image recognition system, PAICS, based on the YOLO algorithm, capable of real-time detection and classification of nine fetal brain malformations. PAICS demonstrated comparable performance to experts and reduced diagnosis time significantly.

Gestational Age (GA) estimation

Ultrasound measurements of fetal anatomical landmarks are well established for estimating gestational age (GA), particularly in early gestation. However, because these methods neglect variability in fetal growth and development, errors increase in late pregnancy, sometimes exceeding two weeks.

Namburete et al. employed the regression forest method to analyze the spatial and temporal association between brain maturation and GA in fetal cranial ultrasound images. Their model achieved close estimation to clinical measurements, with a root mean square error (RMSE) of ±6.10 days in the second and third trimesters. Feature selection highlighted key brain anatomical regions associated with GA changes.
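The RMSE quoted above is straightforward to compute; a minimal sketch with made-up GA values in days:

```python
import math

def rmse_days(predicted, actual):
    """Root mean square error between predicted and clinically
    estimated gestational ages (both in days)."""
    n = len(predicted)
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n)

pred = [182, 190, 205, 231]  # model GA estimates (days)
true = [180, 196, 203, 228]  # clinical reference GA (days)
print(round(rmse_days(pred, true), 2))  # ≈3.64
```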

DL (Deep Learning) Models for GA Estimation: Burgos-Artizzu et al. introduced quantusGA, a DL model analyzing changes in brain morphology in fetal ultrasound images. Their method demonstrated lower errors in late pregnancy compared to traditional biometric parameter measurements. On the other hand, Lee et al. utilized CNN to analyze images from multiple standard ultrasound views for GA estimation without biometric information. Their model achieved low mean absolute errors (MAE) in both mid and late-pregnancy stages, applicable to diverse pregnancy types and geographical regions.

Congenital Heart Disease (CHD)

CHD is the most common and severe congenital disease among newborns, with prevalence rates ranging from 6 to 13 per 1000 births.

Surgical treatment for CHD patients, both neonatal and adult, is costly, lengthy, and carries risks of secondary surgeries and high mortality rates, posing significant burdens on patients and their families.

  • Role of Prenatal Ultrasound in CHD Diagnosis: prenatal ultrasound diagnosis plays a crucial role in assisting clinical decisions and improving neonatal outcomes by detecting fetal CHD. Precise identification of complex cardiac abnormalities remains challenging due to fetal activity, rapid heartbeats, small heart size, and the high level of expertise required. In regions lacking robust healthcare systems and experienced personnel, missed diagnoses are common, leading to delayed treatment and poorer prognoses.
  • Integration of AI with Traditional Ultrasound: holds promise in addressing the challenges mentioned earlier since recent advancements in AI techniques have shown significant progress in assessing cardiac structure and function, particularly in distinguishing normal hearts from complex CHDs.

In fact, Arnaout et al. trained a neural network model on 2D ultrasound images to differentiate normal hearts from complex CHDs with high sensitivity and specificity.

Additionally, Yeo et al. developed the Fetal Intelligent Navigation Echocardiogram (FINE) with Virtual Intelligent Sonographer Assistance, enabling clinicians to locate anatomical landmarks and automatically generate standard fetal echocardiographic views, simplifying examinations and reducing operator dependence. Furthermore, the FINE model has been integrated into commercial ultrasound equipment, enhancing accessibility.

Further studies, such as the 5D Heart Color model proposed by Yeo et al., have demonstrated the visualization of vascular anatomy and flow direction, improving diagnostic accuracy and sensitivity in CHD cases. Anda et al. introduced Learning Deep Architectures for the Interpretation of First-Trimester Fetal Echocardiography (LIFE), the first AI-standardized approach to assist sonographers in diagnosing fetal CHD in the first trimester.

AI offers significant clinical potential in diagnosing congenital diseases, shortening training periods, and reducing subjective variability among clinicians. These advancements hold promise for improving prenatal CHD detection rates and ultimately enhancing patient outcomes.

Ultralytics and YOLOv8 — Accessible Object Detection

Photo by Jack Millard on Unsplash

YOLOv8 is the latest model from Ultralytics, able to automate fetal movement assessment, expediting decision-making processes and offering valuable insights into fetal health and development.

Its key benefits are clear:

  • Automation expedites decision-making processes.
  • Enables early detection of fetal health concerns, optimizing maternal healthcare interventions.

Object detection involves identifying the location and class of objects in an image or video stream. YOLOv8 provides pre-trained models for efficient object detection, with varying sizes and capabilities.

The pre-trained models come in several sizes, from YOLOv8n (nano) to YOLOv8x (extra-large), trading inference speed for accuracy. A typical workflow involves:

  • Training: the dataset format and training arguments can be customized for your own data.
  • Validation and Prediction: validate the trained model, then run prediction on new images to identify objects of interest.
  • Model Export: YOLOv8 models can be exported to formats like ONNX, CoreML, and TensorRT for deployment across various environments.
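The train → validate → predict → export workflow above can be sketched with the `ultralytics` Python API; the calls are shown as comments because they require the package, pre-trained weights, and a dataset (the file names are hypothetical placeholders). The small helper below, which could post-process raw detections into per-class counts, is plain Python:

```python
from collections import Counter

# Typical YOLOv8 calls (require the `ultralytics` package; the dataset
# and image paths are hypothetical placeholders):
#   from ultralytics import YOLO
#   model = YOLO("yolov8n.pt")                     # pre-trained weights
#   model.train(data="fetal.yaml", epochs=50)      # fine-tune on custom data
#   results = model.predict("scan.png", conf=0.5)  # run detection
#   model.export(format="onnx")                    # export for deployment

def summarize_detections(detections, min_conf=0.5):
    """Count detected structures per class, keeping only detections above
    a confidence threshold. `detections` is a list of (class_name,
    confidence) pairs, as could be read from a YOLOv8 Results object."""
    return Counter(name for name, conf in detections if conf >= min_conf)

dets = [("head", 0.93), ("abdomen", 0.88), ("femur", 0.42), ("head", 0.61)]
print(summarize_detections(dets))  # femur dropped (below threshold)
```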

In conclusion, Ultralytics YOLOv8 offers accessible and efficient object detection capabilities, empowering healthcare professionals with advanced tools for fetal anatomy detection in ultrasound images.

Further Recommended Information: Automatic Fetal Motion Detection from Trajectory of US Videos Based on YOLOv5 and LSTM

Other AI Applications in Healthcare

Photo by Hush Naidoo Jade Photography on Unsplash
  • Pathological Image Analysis with AI: aids pathologists in analyzing tissue samples at a microscopic level. This could automate the detection of cancer cells, classify tumors, or even enhance workflow efficiency.
  • Retinal Scans for Disease Detection: Machine learning is able to analyze retinal images to detect early signs of eye diseases such as diabetic retinopathy and age-related macular degeneration.
  • Dermatological Image Analysis: dermatologists could receive assistance in diagnosing skin diseases as AI analyzes images of lesions, moles, or rashes.
  • Assistance in Endoscopies and Colonoscopies: helping doctors identify anomalies, polyps, or lesions during procedures.
  • Surgical Assistance and Navigation: vision AI is able to guide surgeons during procedures by providing them with real-time information about anatomy, highlighting critical structures, and improving accuracy in minimally invasive surgeries.
  • Fall Detection and Elderly Care Cameras: cameras equipped with vision AI in healthcare facilities or residences can detect falls or unusual movements.
  • Gesture Recognition for Rehabilitation: can monitor and analyze patients’ movements during rehabilitation exercises, aiding in providing real-time feedback to both patients and therapists and optimizing rehabilitation programs.
  • Visualization of Blood Flow Patterns: computer vision powered by AI can process image data to visualize blood flow patterns. This helps cardiologists assess cardiovascular health and identify abnormalities in blood vessels.

Would you like to stay updated with the latest insights, articles, and analyses on data science, artificial intelligence, and technology trends?

Make sure to connect with me on LinkedIn for professional networking and engaging discussions. You can also find more in-depth articles and thought pieces on my Medium profile, where I delve into various topics related to data science and beyond.

Thanks for reading!
