Practice makes Perfect… for AI

Mrudula Arali
9 min read · Oct 27, 2022


It’s pin-drop silent. Concentration and anxiety flood the room as the surgeon moves to the final step of the surgery. Just a little to the right and-

The monitor flatlines.

The surgeon stands there staring at their own two hands, wondering where they went wrong.

With over 250,000 deaths per year in the US alone (Johns Hopkins), medical errors are more common than we think. Some blame inadequately skilled staff. Others say it might be an error in judgment. The truth is that it’s because we’re human. We make mistakes, and we use them to learn and grow.

Humans. Make. Mistakes.

With Artificial Intelligence (AI), we’ve reached a point where we don’t have to worry about making mistakes. AI is skilled to a level of precision that humans will never be able to achieve.

It uses algorithms that are programmed to learn from their environment and execute tasks as instructed with little to no error. Put simply, Artificial Intelligence is “smarter” and “better” than us.

Ever notice how the ads and suggested products you see change to things you actually want to click on? That’s because companies are using AI to learn your shopping habits, drawing on data from products you’ve previously purchased or are currently viewing.


Even though this is a really small example of AI, using it in healthcare would maximize its potential. AI has the capability of learning and using the information it gets to improve itself. If we were to use this technology in ORs, surgeons would never have to doubt their skills again. AI even has the ability to calculate the risk level associated with each surgery, so you’d know exactly what to expect, no surprises.

We classify something as Artificial Intelligence when it can do the following:

  1. INTERPRET data
  2. LEARN from the data
  3. APPLY the knowledge learned

That’s it. Once it can interpret, learn, and apply, it has AI. Sounds similar to a human brain, doesn’t it?

Machine Learning at its Best

Machine learning is a subset of AI in which the machine teaches itself from its past experiences. Kind of like us: we make mistakes and learn from them… except AI doesn’t have to make mistakes to learn.

Even though Artificial Intelligence makes mistakes during the trial stages, it uses ALL the data it can get its hands on to make sure those mistakes don’t happen in the OR. The idea is that it makes its mistakes in training, not when the stakes are high or when it matters most.

A good example of Machine Learning (ML) in AI is the online shopping example from earlier.

We can use ML to our advantage in healthcare to predict patient lifespans, organize patient data, and develop more accurate diagnoses. AI’s use in healthcare can bring medical errors down to near ZERO.

There are 3 types of Machine Learning:

  1. Reinforcement Learning: The machine learns through trial and error. It tries all the possibilities until it figures out which one has the best reward (in this case, the best solution to the problem).
  2. Unsupervised Learning: Takes input data and tries to find patterns through CLUSTERING (grouping like-data together; see the sketch after this list).
  3. Supervised Learning: Takes both input AND output data and makes PREDICTIONS.
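
To make clustering concrete, here’s a minimal sketch in Python (assuming the scikit-learn library is installed; the patient numbers are invented for illustration):

```python
# A tiny clustering example: group "like" patients together with k-means.
from sklearn.cluster import KMeans

# Each row is an invented patient: [age, resting heart rate]
patients = [[25, 62], [31, 65], [68, 88], [72, 91], [29, 60], [70, 85]]

# Ask k-means to find 2 clusters in the data (no labels given!)
model = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = model.fit_predict(patients)

print(labels)  # e.g. [0 0 1 1 0 1]: younger patients in one group, older in the other
```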

Supervised Learning makes predictions using two main techniques: Classification and Regression.

Classification: This one is easy. You give the machine an input, for example a marble. You tell it that the output (in this case, the marble’s colour) is red, and it uses that to decide whether the next marble is red or not red. On a higher level, we can use models like this for medical imaging like MRIs.
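
Here’s what that marble example might look like in code: a minimal sketch assuming scikit-learn, with made-up RGB values standing in for marble colours:

```python
# Classification: learn to label marbles as "red" or "not red".
from sklearn.tree import DecisionTreeClassifier

# Inputs: marble colours as [red, green, blue] values from 0-255
marbles = [[250, 10, 10], [240, 30, 20], [10, 240, 30], [20, 20, 250]]
# Outputs we give the machine for each marble
labels = ["red", "red", "not red", "not red"]

model = DecisionTreeClassifier()
model.fit(marbles, labels)

# Now the machine labels a marble it has never seen before
print(model.predict([[235, 15, 25]]))  # -> ['red']
```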

Regression: This is the slightly harder one. Regression makes predictions about things that are harder to quantify exactly. For example, figuring out how much your SAT score actually impacts your post-secondary admissions. You would give it past SAT scores and the admission outcomes that went with them, and it would use those to help predict yours. Remember, it’s just a PREDICTION, not a fortune teller.
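
And a minimal sketch of regression on the SAT example, again assuming scikit-learn (the scores and admission chances are invented):

```python
# Regression: predict a number (admission chance) from an input (SAT score).
from sklearn.linear_model import LinearRegression

scores = [[1100], [1250], [1350], [1450], [1550]]   # past SAT scores
admit_chance = [0.20, 0.35, 0.50, 0.70, 0.85]       # how those applicants fared

model = LinearRegression()
model.fit(scores, admit_chance)

# A PREDICTION for a new score, not a guarantee
print(model.predict([[1400]]))  # roughly 0.6
```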

Deep Learning >>> Machine Learning

Let me explain why.

Machine learning is amazing and very valuable, but it doesn’t work that well in healthcare, where we deal with many different types of data: images, patient records, family history, and more.

Machine learning works better with simple tasks that have structured data.

In reality, that’s not reasonable at all. You’re not always handed simple problems to solve, and if you were, you wouldn’t always need AI for them.

That’s where deep learning (DL) comes in.

Deep Learning is really good at finding patterns in large amounts of data. It works better with more complex problems… which is usually what we’re dealing with in the real world. Deep learning is perfect for things like speech and image recognition.

This has SO many uses, especially in healthcare. For any type of diagnostic imaging, like MRIs, deep learning would be the best form of AI to use. DL gets its power from the increased number of hidden layers it uses in its neural networks.
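
As a rough illustration, here’s what “more hidden layers” looks like in code: a minimal sketch assuming the PyTorch library, with layer sizes and the disease/no-disease framing chosen arbitrarily:

```python
# A small "deep" network: the stacked middle layers are the hidden layers.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 32),   # input layer: 64 numbers in (e.g. pixel values)
    nn.ReLU(),
    nn.Linear(32, 16),   # hidden layer 1
    nn.ReLU(),
    nn.Linear(16, 8),    # hidden layer 2
    nn.ReLU(),
    nn.Linear(8, 2),     # output layer: e.g. "disease" vs. "no disease"
)
print(model)             # more hidden layers = "deeper" = better at complex patterns
```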

A Breakdown of Neural Networks (NNs)

The base of Artificial Intelligence is math. That’s how neural networks are developed. Artificial Neural Networks (ANNs) basically try to mimic the brain and its neural pathways.

We can think of neural networks like water filtration systems. Each layer filters more finely, making it harder and harder to pass through. By the time the water gets to the end, it’s been completely filtered and treated, even on a molecular level.

Comparing a Deep Neural Network with a Water Filtration System

There are a total of 3 types of layers in a neural network:

  1. The Input Layer
  2. The “Hidden” Layers (everything in between)
  3. The Output Layer

The hidden layers are where all the math is.

We have neurons, which each hold a number between 0 and 1. Those are the circles in the diagram above. Think of it as a greyscale, where 0 is black and 1 is white.

Activation is the number inside the neuron, for example 0.34. Whatever gets activated in the first layer is what moves on to the second layer.

We can see what gets activated by its rating on the greyscale: the brightest neurons (i.e. whichever numbers are closest to 1) are what get passed on to the second layer.

Eventually, by the time we reach the output layer, data becomes so specific that we’re shocked at how “accurate” AI can be. In reality, it’s the math that makes this possible.

Diving into the Math

Every connection in a neural network has its own weight. This is a number that tells us how strongly one neuron’s output influences the next. We change the weights using something called back-propagation.

Back-propagation helps us increase the accuracy of our outputs. We compare the network’s outputs to the answers we wanted, then push that error backwards through the network. This trains the neural network to adjust its weights and produce more accurate outputs.

For example, if you input an image and want it identified as a bike, the weights on the connections that pick up on bike-like pixels increase until the network can label the image accurately.
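
Here’s the core idea as a minimal sketch: one input, one weight, and a loop that nudges the weight in whatever direction shrinks the error. Real back-propagation does this across every weight in every layer, and the numbers here are made up:

```python
# The heart of back-propagation: adjust a weight to reduce the error.
w = 0.0                          # a weight that starts out knowing nothing
x, target = 1.0, 0.8             # one input "pixel" and the output we want
learning_rate = 0.1              # how big each nudge is

for step in range(50):
    prediction = w * x           # forward pass: what the network says now
    error = prediction - target  # how far off we are
    gradient = error * x         # how the error changes as w changes
    w -= learning_rate * gradient  # nudge w in the direction that helps

print(w)  # ends up close to 0.8: the weight has "learned" the target
```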

A bias can be used to activate only certain weighted sums, for example only activating a neuron when its weighted sum is greater than 10. This helps add an additional layer of “screening” in the neural network, and it’s also what helps us determine which neurons get activated and passed on to the next layer (a bias is sometimes referred to as a “threshold”; the bias is just the negative of the threshold).


A neuron’s activation is calculated using something called an ACTIVATION FUNCTION. This function is what helps us determine which neurons get activated into the next layer.

activation = σ(w₀a₀ + w₁a₁ + w₂a₂ + … + wₙaₙ + bias)

To find the weighted sum inside the brackets, the machine takes the activation of each neuron in the previous layer, multiplies it by the weight of its connection, and adds them all up along with the bias. The activation function σ (the sigmoid is a classic choice) then squashes that sum into the 0–1 range, giving the neuron’s new activation.
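
Here’s that exact calculation as a minimal Python sketch, using the sigmoid as the activation function (the weights, activations, and bias are made-up numbers):

```python
# One neuron's new activation: sigmoid(weighted sum + bias).
import math

def sigmoid(x):
    # Squashes any number into the 0-1 range, like the greyscale neurons above
    return 1 / (1 + math.exp(-x))

a = [0.34, 0.90, 0.12]    # activations coming in from the previous layer
w = [0.50, -1.20, 0.75]   # one weight per incoming connection
bias = 0.10

# w0*a0 + w1*a1 + w2*a2 + bias
weighted_sum = sum(wi * ai for wi, ai in zip(w, a)) + bias

print(sigmoid(weighted_sum))  # the neuron's activation, between 0 and 1
```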

Types of Neural Networks

The same neural network isn’t always going to be the most efficient for every scenario. That’s why there are so many more out there. Here’s a small breakdown of the most commonly used ones:

  1. Recurrent Neural Networks (RNNs)

The output from a layer is fed back into the hidden layers before it, giving the network a kind of memory. This is usually helpful for sequential data, or for translating languages.
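
A minimal sketch of that feedback loop (the weights are arbitrary numbers, not a trained network):

```python
# The recurrent idea: each step's output feeds back into the next step.
import math

def rnn_step(x, h, w_x=0.5, w_h=0.8):
    # The new hidden state mixes the current input with the previous state
    return math.tanh(w_x * x + w_h * h)

h = 0.0                       # the "memory" starts out empty
for x in [0.2, 0.7, 0.1]:     # a short sequence, e.g. words in a sentence
    h = rnn_step(x, h)
    print(h)                  # each state carries information from the past
```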

  2. Convolutional Neural Networks (CNNs)

CNNs work with the numbers inside the neurons (the activations), sliding small filters across an image to pick out patterns. They’re used for image and facial recognition, for example Face ID on your iPhone.


A Convolutional Neural Network is especially helpful in medical imaging. When radiologists analyze a scan, they’re only looking for what you came into the scan for. Using CNNs, AI can help detect other underlying diagnoses that would go unnoticed, through classification, detection, or segmentation. This can include finding cancers early on to maximize treatment success.
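
To show what detection looks like at the lowest level, here’s a minimal sketch of the convolution step a CNN is built on. The tiny “image” and filter are invented; a real CNN learns its filters during training:

```python
# Slide a small filter over an image; big outputs mark where a pattern is.
image = [                 # a 4x4 "scan": dark (0) on the left, bright (1) on the right
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [[-1, 1],        # a 2x2 filter that lights up on dark-to-bright edges
          [-1, 1]]

# Slide the filter across every 2x2 patch of the image
for row in range(3):
    out = []
    for col in range(3):
        total = sum(image[row + i][col + j] * kernel[i][j]
                    for i in range(2) for j in range(2))
        out.append(total)
    print(out)  # [0, 2, 0]: the 2s sit exactly where the edge is
```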

A Piece of Cake… or not…

Despite making various advancements in AI, we still struggle with implementing it on a large scale in the real world. This is because a lot of the time we lack the infrastructure to make it happen.

Healthcare is an industry that doesn’t change very often. Once you find a system that works, it’s difficult to switch to another one, and the new system often needs to be standardized across a nation, state/province, or city. Just think about how long it took the healthcare industry to switch to digitized patient records; that alone is proof of how lengthy the process can be.

Another huge concern is security. Nobody wants their personal information out in the open, much less their medical records. It can take governments months, if not years, to test, verify, and deem technologies such as AI safe for the healthcare industry.

AI is also expensive. Creating just ONE prototype of a surgical robot takes a lot of time, effort, and money. In an industry like healthcare, where money is already stretched thin, it can be hard to justify spending on prototypes for potential projects over products that are known to work and be efficient.

The Future of AI x Healthcare

Surgical Robots (STAR)

These are robots that assist with surgeries and can sometimes perform them on their own. They use deep learning, gathering data by watching live surgeons operate.

For the first time EVER, a surgical robot performed laparoscopic surgery with no human assistance, using machine learning. The Smart Tissue Autonomous Robot (STAR) was designed by researchers at Johns Hopkins University. STAR was able to perform a very difficult surgery and produce outcomes significantly better than a human’s, including minimizing errors down to the tiniest tremors of the hands: proof of AI working to limit medical errors.


Faster Diagnosis (Selena+)

With the help of Deep Learning, AI can help diagnose patients faster than ever. Selena+ works on detecting eye diseases in patients with diabetes. It’s able to detect diabetic retinopathy, glaucoma, and age-related macular degeneration. It analyzes a patient’s retinal images and can accurately identify these diseases because it was trained on many images containing them. During that training, back-propagation changes the weight of each connection in the neural network, increasing its accuracy.

The Bottom Line

The scope for AI in healthcare is INSANE, but none of it will actually become accessible until we develop the infrastructure to support these advancements, whether that’s streamlining the implementation of AI in hospitals or increasing funding for current and future endeavours. Either way, AI is a relatively new concept, and we have so much more to explore in the coming years.


Thanks for reading! Stay tuned for future articles and be sure to follow me on Medium to get notified of future posts.
