Custom ML model metrics visualization with the LifeOmic Platform

Sam Aran
Life and Tech @ LifeOmic
5 min read · Sep 15, 2023

The LifeOmic Platform’s Machine Learning (ML) feature allows our customers to run custom ML models without having to worry about the overhead of creating and managing a training pipeline.

All ML models need some metrics visualization to monitor how they’re improving over time. Perhaps you need to show your stakeholders that the p99 latency has decreased by 50% over the past few model runs, or you want to make sure your newest champion (i.e. the currently deployed model run that’s the “best” by the metrics you care about) isn’t getting overconfident. Custom models create a natural need for custom metrics visualization.

Our ML team wanted to balance the level of customization with the ease of implementation, so we’re leveraging the LifeOmic Platform’s JupyterLab integration to put the power in the hands of our customers. All you need is someone who knows a little about charting in Python.

1. You’ll need an Enterprise account with the Machine Learning, Files, and Jupyter Notebooks products enabled. If you’re not already set up, your customer success manager can help you get started.
2. Next, you’ll need your custom ML model. You can define your own metrics in your model’s training phase configuration, and the platform defines standard metrics for the evaluation phase out of the box. If you want to expose custom evaluation metrics, you can create a custom evaluation image to use with our pipeline.
3. Using our Python SDK, you can import phc and access your model data through a session. For example, you can grab a single run as follows:
import phc
from phc import Session
from phc.services import PatientML

session = Session(account="account-id")
response = PatientML(session).get_run(
    model_id="12345-abcd-678-efg", run_id="98765-xyzq-432-rst"
)
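
Before you start charting, it can help to peek at what the run payload contains. Here’s a minimal sketch, assuming the response exposes the same hyperparameters and challenger/champion metric lists (each metric a dict with name and value fields) that the examples later in this article rely on:

# Quick look at the run payload before charting.
# Assumes the response has the shape used in the examples below:
# "hyperparameters" plus challenger/champion metric lists of {"name": ..., "value": ...} dicts.
for parameter in response["hyperparameters"]:
    print(f'{parameter["name"]} = {parameter["value"]}')

for side in ("challenger", "champion"):
    print(f"--- {side} metrics ---")
    for metric in response["metrics"][side]:
        print(f'{metric["name"]}: {metric["value"]}')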

4. You can then use your model run data to visualize to your heart’s content. The charting libraries matplotlib and plotly come preinstalled in our Jupyter notebook runtime, and you can install your favorite charting library via either a terminal or a code cell, as shown below.
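
For instance, if you’d rather use a different library (seaborn is just a stand-in here), a quick install from a code cell or a JupyterLab terminal makes it available to your notebook:

# Install an extra charting library from a notebook code cell:
%pip install seaborn

# ...or run the equivalent command from a JupyterLab terminal:
# pip install seaborn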

Here are some example graphs highlighting a few common ML metrics. In the first example, we do some reorganizing of our Intersection over Union (IoU) metrics from all runs, then display each metric as its own trace in a scatter plot. We can compare each IoU to the learning rate hyperparameter to see if there is any benefit to adjusting it.

Note: These metrics were produced by a testing model that’s only trained for a couple of iterations on a tiny dataset; real models will have much richer and more accurate IoU data.

# ~~~~~~~Build chart data from all model runs~~~~~~~

import phc
from phc import Session
from phc.services import PatientML

session = Session(account="account-id")
all_runs_res = PatientML(session).get_runs(model_id="12345-abcd-678-efg")
all_runs = all_runs_res["runs"]

chart_data = {}
current_run = {}

for run in all_runs:
    hyperparameters = run["hyperparameters"]
    metrics_chall = run["metrics"]["challenger"]
    metrics_champ = run["metrics"]["champion"]
    slug = run["slug"]
    for parameter in hyperparameters:
        if parameter["name"] == "lr":
            current_run = {
                "learning_rate": float(parameter["value"]),
                "iou_macro_challenger": None,
                "iou_micro_challenger": None,
                "iou_macro_champion": None,
                "iou_micro_champion": None,
            }
    for metric in metrics_chall:
        if metric["name"] == "IoU (Macro)":
            current_run["iou_macro_challenger"] = metric["value"]
        if metric["name"] == "IoU (Micro)":
            current_run["iou_micro_challenger"] = metric["value"]
    for metric in metrics_champ:
        if metric["name"] == "IoU (Macro)":
            current_run["iou_macro_champion"] = metric["value"]
        if metric["name"] == "IoU (Micro)":
            current_run["iou_micro_champion"] = metric["value"]
    chart_data[slug] = current_run

# ~~~~~~~Set up a data frame to use to make multiple traces~~~~~~~

import plotly.graph_objects as go
import pandas as pd

learning_rate = []
iou_macro_challenger = []
iou_micro_challenger = []
iou_macro_champion = []
iou_micro_champion = []

for item in chart_data:
    learning_rate.append(chart_data[item]["learning_rate"])
    iou_macro_challenger.append(chart_data[item]["iou_macro_challenger"])
    iou_micro_challenger.append(chart_data[item]["iou_micro_challenger"])
    iou_macro_champion.append(chart_data[item]["iou_macro_champion"])
    iou_micro_champion.append(chart_data[item]["iou_micro_champion"])

df = pd.DataFrame(
    dict(
        learning_rate=learning_rate,
        iou_micro_challenger=iou_micro_challenger,
        iou_macro_challenger=iou_macro_challenger,
        iou_micro_champion=iou_micro_champion,
        iou_macro_champion=iou_macro_champion,
    )
)
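
# ~~~~~~~Optional: sanity-check the data frame~~~~~~~

# A quick peek at the first few rows confirms the metrics landed in the
# columns you expect before any charting happens.
print(df.head())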

# ~~~~~~~Display scatter plot from chart data~~~~~~~

fig = go.Figure()

plots = {}

for i in df.columns:
    if i != "learning_rate":
        plots[i] = go.Scatter(
            name=i,
            x=df.learning_rate,
            y=df[i],
            mode="markers"
        )

for plot in plots:
    fig.add_trace(plots[plot])

fig.update_xaxes(type="log")
fig.update_layout(height=500)
fig.show()
Example scatter plot showing 4 IoU metrics vs learning rate for all runs

In the second example, we compare our three latency metrics between our deployed champion run and a new challenger run. The challenger has higher latency than our champion at all three levels, so it looks like we won’t be deploying it as the new champion today.

# ~~~~~~~Populate a list with our run's three latency metrics~~~~~~~

import plotly.graph_objects as go

latencies = []
names = ["p50", "p90", "p99"]

# my_run holds a single run payload (for example, the response returned by
# PatientML(session).get_run above); this loop assumes the last three metrics
# in each list are the p50/p90/p99 latencies.
for i in range(-1, -4, -1):
    latencies.append(
        {
            "name": names[i],
            "challenger": my_run["metrics"]["challenger"][i]["value"],
            "champion": my_run["metrics"]["champion"][i]["value"],
        }
    )

# ~~~~~~~Render three charts from the latency data~~~~~~~

for latency in latencies:
    fig = go.Figure(
        go.Indicator(
            mode="number+gauge+delta",
            value=latency["challenger"],
            domain={"x": [0, 1], "y": [0, 1]},
            delta={
                "reference": latency["champion"],
                "position": "top",
                "decreasing": {"color": "green", "symbol": "▼"},
                "increasing": {"color": "red", "symbol": "▲"},
            },
            title={"text": latency["name"], "font": {"size": 15}},
            gauge={
                "shape": "bullet",
                "axis": {"range": [None, 10]},
                "threshold": {
                    "line": {"color": "black", "width": 2},
                    "thickness": 0.75,
                    "value": latency["champion"],
                },
                "bgcolor": "white",
                "steps": [
                    {"range": [0, 2], "color": "#82d96f"},
                    {"range": [2, 4], "color": "#cbe373"},
                    {"range": [4, 6], "color": "#f7fc7c"},
                    {"range": [6, 8], "color": "#f4dc75"},
                    {"range": [8, 10], "color": "#ee945f"},
                ],
                "bar": {"color": "black"},
            },
        )
    )
    fig.update_layout(height=250)
    fig.show()
Three example latency bullet charts

5. Finally, you can leverage our file service to export your work to your account in the LifeOmic Platform and share it with your larger community.

import phc
from phc import Session
from phc.services import Files

session = Session(account="account-id")
response = Files(session).upload(
    project_id="12345-abcde-67890-lmnop",
    source="./path/to/file/FileName.ipynb",
    overwrite=True,
)
You can view your exported file inside the LifeOmic Platform using the Files feature

Our ML Team hopes this solution offers you the personalization that you need to manage your custom ML model metrics. The solution is simple enough that a small team, or someone just learning about charting in Python, can accomplish what they need to do. If you have any questions or want to see the examples in this article inside a Jupyter notebook, please reach out to our team.
