The AutoScope: Automated Microscopic Urinalysis

How machine learning enables a $100 microscope to perform almost as well as a $100k system

Urinalysis tests help doctors diagnose and rule out thousands of different diseases. They are a powerful way to non-invasively glimpse into what is happening inside the body. More than 200 million urinalysis tests are ordered each year in the USA, and 46% of them require microscopic analysis, which involves counting the types of particles in urine.

Today, doctors need to send microscopic urinalysis tests out to a medical lab. It typically takes a few days for the lab to receive the sample, run the test, and report the results, after which the doctor interprets them and communicates the diagnosis to the patient.

We set out to shorten the time from doctor’s visit to diagnosis. We wanted to make microscopic urinalysis available to anyone, anywhere, at any time. For my research at MIT, I developed a $100 microscope that performs automated microscopic urinalysis to a similar level of accuracy as a $100,000 gold-standard system*. We were able to do this by using machine learning to analyze information that humans cannot easily understand.

This system can be utilized directly at the doctor’s office so that physicians can conduct microscopic urinalysis (and not just dipstick urinalysis) on the spot. Doing so (1) reduces the time to a diagnosis, and (2) allows physicians to bill insurance for the urinalysis test, thereby increasing their revenue. A similar system could be developed to do automated blood counts.


Here are the 3 most interesting parts of my work:

1. We use suboptimal, non-uniform lighting yet can still detect where particles are located

Traditional microscope manufacturers spend a lot of time and money making a flat, uniform lighting field †. This is because uneven illumination can obscure critical information and makes it challenging to correctly interpret an image. Our microscope has incredibly uneven lighting, but we make up for this by using machine learning to correctly identify particle locations.

Our lighting system. Our microscope uses four small LEDs, one at each corner of the imaging field. These LEDs produce an uneven illumination field.
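To get an intuition for how uneven four corner LEDs make the field, here is a toy model. The field size, LED height, and inverse-square falloff below are illustrative assumptions, not measurements of our hardware:

```python
import numpy as np

# Toy model: four point-source LEDs, one at each corner of a
# (hypothetical) 5 mm x 5 mm imaging field, mounted 1 mm above the
# sample plane. All dimensions are illustrative assumptions.
FIELD_MM = 5.0
LED_HEIGHT_MM = 1.0
corners = [(0.0, 0.0), (0.0, FIELD_MM), (FIELD_MM, 0.0), (FIELD_MM, FIELD_MM)]

y, x = np.mgrid[0:FIELD_MM:200j, 0:FIELD_MM:200j]
intensity = np.zeros_like(x)
for cx, cy in corners:
    # Inverse-square falloff from each LED (angular emission profile ignored).
    r_sq = (x - cx) ** 2 + (y - cy) ** 2 + LED_HEIGHT_MM ** 2
    intensity += 1.0 / r_sq

# The corners come out several times brighter than the center.
print(f"max/min intensity ratio: {intensity.max() / intensity.min():.1f}")
```

Even in this idealized model, the brightest corner ends up several times brighter than the center of the field, which is the kind of variation our CNN has to compensate for.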

Below is an image of hundreds of red blood cells (i.e. the tiny specks) taken with our microscope. We built a convolutional neural network (CNN) that can compensate for the illumination variation and illumination artifacts across the image to correctly find where the particles are located.

An image of red blood cells (RBCs) taken with our microscope (1x magnification). RBCs look like tiny specks dotting the image. Note the uneven illumination field, multiple artifacts, and glares. Also, notice how different RBCs look under the various lighting conditions (compare the circled ones, for example). Our CNN is able to learn what the lighting conditions are at different parts of the image.

Watch the animation below to see the CNN in action. We fed the CNN lots of training images so it could learn the variation in illumination across the image. At the end of each training cycle, we gave the CNN the same held-out image (one it had never been trained on) and checked how well it segmented the particles. By the end, it correctly classified 99.6% of the pixels.

Particle segmentation in action. We developed a CNN that correctly identifies where particles are located despite the different lighting glares and shadows across the image. For each pixel, the CNN classifies whether the pixel is part of a particle or not part of a particle. We trained the CNN with hundreds of images. By the end, it could correctly identify 99.6% of pixels.
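If you're curious what such a network can look like in code, here is a minimal sketch in Keras. This is not the exact architecture from the thesis; the layer counts, filter sizes, and input shape are assumptions for illustration. The essential idea is a fully convolutional encoder-decoder whose receptive field is wide enough to account for the local illumination, ending in a per-pixel particle/background probability:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_segmentation_cnn(input_shape=(256, 256, 3)):
    """Minimal encoder-decoder that labels each pixel particle/background.

    An illustrative sketch, not the thesis architecture; layer sizes
    and input shape are assumptions.
    """
    inputs = tf.keras.Input(shape=input_shape)
    # Encoder: learn local appearance *and* the slowly varying illumination.
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    # Decoder: upsample back to full resolution.
    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(x)
    # Per-pixel probability that the pixel belongs to a particle.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return tf.keras.Model(inputs, outputs)

model = build_segmentation_cnn()
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])  # pixel accuracy, as reported above
# model.fit(train_images, train_masks, validation_data=(val_image, val_mask))
```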

2. We do not use any magnification yet can still accurately classify particles

Traditional microscopes utilize both high magnification and high resolution lenses to generate beautiful, detailed images of small particles. Generally, the higher the magnification and resolution, the more expensive the system.

The resolution of our microscope is ~6–8.5um, significantly worse than the resolution of traditional microscopes used for urine/blood analysis. To put our system’s resolution in perspective, a red blood cell (RBC) is 6um in diameter, and the features that distinguish an RBC are even smaller, clearly below the resolution of our system. At this resolution, we should not be able to distinguish between an RBC and a dirt speck of the same size. Yet, once again, we use machine learning to get around this limitation and accurately classify particles.
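A quick back-of-the-envelope shows where a number like this comes from. At 1x magnification the sensor samples the object plane directly, so the Nyquist criterion caps the resolvable feature size at roughly two pixel pitches (the pixel pitch below is an assumed value, not a spec of our sensor):

```latex
% Nyquist-limited resolution at unit magnification (pixel pitch p assumed)
r \approx 2p, \qquad p \approx 3\text{--}4.25\,\mu\mathrm{m}
  \;\Longrightarrow\; r \approx 6\text{--}8.5\,\mu\mathrm{m}
```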

Medical Lab Images vs. Our Images. (Left) Medical laboratories have microscopes with high resolution and magnification that enable beautiful, detailed pictures of particles to be taken. These images are easily interpreted by humans. (Right) Our microscope has 1x magnification and much lower resolution. Cells are nothing more than tiny specks on the image. Even the most well-trained expert would have difficulty telling apart the various specks.

To classify each particle in the image, we built another neural network. Watch the animation below to see the neural network identify each particle as either a red blood cell (RBC), white blood cell (WBC), nanoparticle (same size as WBC), or other (other particles, contamination, light glares, etc.).

Particle classification in action. We developed a CNN that can accurately classify each particle in the image. Since we illuminated particles from multiple oblique angles, we obtained multiple projections of each particle with unique diffraction patterns. The CNN is able to make sense of these patterns that we, as humans, cannot easily interpret.

Even the most well-trained expert would have trouble classifying the tiny particle specks in our microscope’s image. Computers, on the other hand, are able to take advantage of information that we, as humans, cannot. For example, the reason our system is able to correctly identify red blood cells even though they are smaller than the microscope’s resolution limit has to do with our lighting system. Our system’s 4 LED lights illuminate the particles from multiple oblique angles. Each light source generates a specific projection of the particle and thus produces a different diffraction pattern. These patterns depend on the particle’s diameter, thickness, and optical properties. As humans, we cannot easily make sense of these different projections, but computers can. Even though our microscope has lower resolution, we make up for this by obtaining additional information about each particle through the multiple lighting sources.
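As a companion to the segmentation sketch above, here is what a particle classifier along these lines might look like. Again a hedged sketch: only the four class labels come from the text, while the patch size, layer widths, and training setup are illustrative assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers

# The four classes described above; everything else about this model
# (patch size, layer widths) is an illustrative assumption.
CLASSES = ["rbc", "wbc", "nanoparticle", "other"]

def build_particle_classifier(patch_shape=(64, 64, 3)):
    """Classify a small crop centered on one detected particle."""
    inputs = tf.keras.Input(shape=patch_shape)
    x = inputs
    # Stacked conv blocks pick up the diffraction patterns produced by
    # the four oblique LEDs -- structure humans can't easily interpret.
    for filters in (16, 32, 64):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D(2)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(64, activation="relu")(x)
    outputs = layers.Dense(len(CLASSES), activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

model = build_particle_classifier()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(particle_patches, integer_labels, epochs=...)
```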

3. We built a system that is 3 orders of magnitude cheaper but has the same performance as a traditional urinalysis system.

Commercial systems that perform automated urinalysis cost approximately $100,000. Our system is 3 orders of magnitude cheaper and achieves the same performance as the commercial system.

Bill of materials for our microscope. The cost of parts for the microscope comes out to less than $100.

We performed a head-to-head comparison of our microscope against a commercial system found in a medical lab, using 8 different synthetic urine samples. Our results correlated well (r² = 0.98) with those of the medical laboratory.

Correlation between our particle counts and those of a medical laboratory. We performed a head-to-head comparison of our microscope against a medical laboratory’s system. We tested 8 different synthetic urine samples with different particle concentrations. Our results correlated very well (r² = 0.98) with those of the laboratory.
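For anyone reproducing this kind of comparison, the metric is the squared Pearson correlation, i.e. the r² of a least-squares fit between the two instruments’ counts. A minimal sketch, using made-up placeholder counts rather than our data:

```python
import numpy as np
from scipy.stats import linregress

# Placeholder particle counts -- illustrative numbers only,
# NOT the measurements from this study.
autoscope_counts = np.array([12, 45, 88, 150, 210, 320, 410, 500])
lab_counts       = np.array([10, 48, 92, 140, 205, 330, 400, 510])

fit = linregress(autoscope_counts, lab_counts)
print(f"r^2 = {fit.rvalue ** 2:.3f}")  # the study reports r^2 = 0.98
```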

Conclusions

We developed an automated, low-cost urinalysis system that can be used at the point-of-care.

Our $100 system performs as well as a commercial $100,000 piece of equipment. This huge reduction is made possible by relaxing the design requirements of a typical microscope, including (1) using non-uniform illumination and (2) using low-resolution, low-magnification optics. Instead, our microscope collects additional information on the particles by illuminating them from 4 different angles, providing different perspectives on each particle. We then use machine learning to parse through this information and identify the unique features that classify each particle.

Machine learning is enabling a shift in how we think about imaging/diagnostics. This work is just one example. Up until now, the goal for equipment manufacturers has been to build top-end instruments that generate high-resolution images that can be interpreted by humans. Machines, however, can make sense of information that humans cannot. When machines do the interpretation, we can relax the design specifications of the equipment. When machines do the interpretation, we can make diagnostic systems smaller, faster, and cheaper.


Thanks for reading and let me know what you think! What other sensing systems can we rethink by optimizing for machine analysis instead of human analysis?


Footnotes

* Some medical labs do microscopic urinalysis manually using a regular microscope and a well-trained technician. Other labs utilize a semi-automated machine. This machine is called the iQ-200 and costs around $100k-$150k.

† A Quantitative Measure of Field Illumination. J Biomol Tech. 2015 Jul; 26(2): 37–44. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4365985/

A full technical description of this project can be found at: https://github.com/SidneyPrimas/AutoScope/blob/master/MIT_Master_Thesis.pdf