Part 4: Semantic Segmentation

Sean Law
8 min read · Jun 16, 2020


(Image by Ronan Furuta)

The Whole is Greater than the Sum of Its Parts

(Image by Author)

STUMPY is a powerful and scalable Python library for modern time series analysis and, at its core, efficiently computes something called a matrix profile. The goal of this multi-part series is to explain what the matrix profile is and how you can start leveraging STUMPY for all of your modern time series data mining tasks!

Note: These tutorials were originally featured in the STUMPY documentation.

Part 1: The Matrix Profile
Part 2: STUMPY Basics
Part 3: Time Series Chains
Part 4: Semantic Segmentation
Part 5: Fast Approximate Matrix Profiles with STUMPY
Part 6: Matrix Profiles for Streaming Time Series Data
Part 7: Fast Pattern Searching with STUMPY
Part 8: AB-Joins with STUMPY
Part 9: Time Series Consensus Motifs
Part 10: Discovering Multidimensional Time Series Motifs
Part 11: User-Guided Motif Search
Part 12: Matrix Profiles for Machine Learning

Identifying Change Points in Time Series Data with FLUSS and FLOSS

This example utilizes the main takeaways from the Matrix Profile VIII research paper. For proper context, we highly recommend that you read the paper first, but know that our implementation follows it closely.

According to the aforementioned publication, “one of the most basic analyses one can perform on [increasing amounts of time series data being captured] is to segment it into homogeneous regions.” In other words, wouldn’t it be nice if you could take your long time series and chop it up into k regions (where k is small), with the ultimate goal of presenting only k short representative patterns to a human (or machine) annotator in order to produce labels for the entire dataset? These segmented regions are also known as “regimes”. Additionally, as an exploratory tool, segmentation might uncover new, actionable insights in the data that were previously undiscovered. Fast low-cost unipotent semantic segmentation (FLUSS) is an algorithm that produces something called an “arc curve”, which annotates the raw time series with information about the likelihood of a regime change. Fast low-cost online semantic segmentation (FLOSS) is a variation of FLUSS that, according to the original paper, is domain agnostic, offers streaming capabilities with the potential for actionable real-time intervention, and is suitable for real-world data (i.e., it does not assume that every region of the data belongs to a well-defined semantic segment).

To demonstrate the API and underlying principles, we will be looking at arterial blood pressure (ABP) data from a healthy volunteer resting on a medical tilt table, and we will see if we can detect when the table is tilted from a horizontal position to a vertical position. This is the same data that is presented throughout the original paper (above).

Getting Started

Let’s import the packages that we’ll need to load, analyze, and plot the data.

%matplotlib inline

import os  # needed later to clean up a temporary file created by the animation
import pandas as pd
import numpy as np
import stumpy
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle, FancyArrowPatch
from matplotlib import animation
from IPython.display import HTML

plt.rcParams["figure.figsize"] = [20, 6] # width, height
plt.rcParams['xtick.direction'] = 'out'

Retrieving the Data

df = pd.read_csv("https://zenodo.org/record/4276400/files/Semantic_Segmentation_TiltABP.csv?download=1")
df.head()
   time     abp
0     0  6832.0
1     1  6928.0
2     2  6968.0
3     3  6992.0
4     4  6980.0

Visualizing the Raw Data

plt.plot(df['time'], df['abp'])
rect = Rectangle((24000,2400),2000,6000,facecolor='lightgrey')
plt.gca().add_patch(rect)
(Image by Author)

We can clearly see that there is a change around time=25000 that corresponds to when the table was tilted upright.

FLUSS

Instead of using the full dataset, let’s zoom in and analyze the 2,500 data points directly before and after x=25000 (see Figure 5 in the paper).

start = 25000 - 2500
stop = 25000 + 2500
abp = df.iloc[start:stop, 1]
plt.plot(range(abp.shape[0]), abp)
plt.ylim(2800, 8500)
plt.axvline(x=2373, linestyle="dashed")
style="Simple, tail_width=0.5, head_width=6, head_length=8"
kw = dict(arrowstyle=style, color="k")
# regime 1
rect = Rectangle((55,2500), 225, 6000, facecolor='lightgrey')
plt.gca().add_patch(rect)
rect = Rectangle((470,2500), 225, 6000, facecolor='lightgrey')
plt.gca().add_patch(rect)
rect = Rectangle((880,2500), 225, 6000, facecolor='lightgrey')
plt.gca().add_patch(rect)
rect = Rectangle((1700,2500), 225, 6000, facecolor='lightgrey')
plt.gca().add_patch(rect)
arrow = FancyArrowPatch((75, 7000), (490, 7000), connectionstyle="arc3, rad=-.5", **kw)
plt.gca().add_patch(arrow)
arrow = FancyArrowPatch((495, 7000), (905, 7000), connectionstyle="arc3, rad=-.5", **kw)
plt.gca().add_patch(arrow)
arrow = FancyArrowPatch((905, 7000), (495, 7000), connectionstyle="arc3, rad=.5", **kw)
plt.gca().add_patch(arrow)
arrow = FancyArrowPatch((1735, 7100), (490, 7100), connectionstyle="arc3, rad=.5", **kw)
plt.gca().add_patch(arrow)
# regime 2
rect = Rectangle((2510,2500), 225, 6000, facecolor='moccasin')
plt.gca().add_patch(rect)
rect = Rectangle((2910,2500), 225, 6000, facecolor='moccasin')
plt.gca().add_patch(rect)
rect = Rectangle((3310,2500), 225, 6000, facecolor='moccasin')
plt.gca().add_patch(rect)
arrow = FancyArrowPatch((2540, 7000), (3340, 7000), connectionstyle="arc3, rad=-.5", **kw)
plt.gca().add_patch(arrow)
arrow = FancyArrowPatch((2960, 7000), (2540, 7000), connectionstyle="arc3, rad=.5", **kw)
plt.gca().add_patch(arrow)
arrow = FancyArrowPatch((3340, 7100), (3540, 7100), connectionstyle="arc3, rad=-.5", **kw)
plt.gca().add_patch(arrow)
(Image by Author)

Roughly, in the truncated plot above, we see that the segmentation between the two regimes occurs around time=2373 (vertical dotted line), where the patterns from the first regime (grey) don’t cross over into the second regime (orange) (see Figure 2 in the original paper). And so the “arc curve” is calculated by sliding along the time series and simply counting the number of times other patterns have “crossed over” that specific time point (i.e., “arcs”). Essentially, this information can be extracted by looking at the matrix profile indices (which tell you where along the time series your nearest neighbor is). We’d therefore expect the arc counts to be high where repeated patterns are near each other and low where there are no crossing arcs.
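To make the arc-counting idea concrete, here is a minimal sketch (for illustration only, not STUMPY’s actual implementation) that turns nearest-neighbor indices, such as the matrix profile indices, into raw arc counts. The function and variable names are made up for this example:

import numpy as np

def naive_arc_counts(nn_indices):
    # Illustrative only: count how many nearest-neighbor "arcs" span each position
    n = len(nn_indices)
    counts = np.zeros(n)
    for i, j in enumerate(nn_indices):
        lo, hi = sorted((i, int(j)))  # an arc connects subsequence i and its nearest neighbor
        counts[lo:hi] += 1            # the arc contributes to every position it spans
    return counts

Since nearest neighbors tend to stay within their own regime, very few arcs span the boundary between two regimes and the counts dip sharply there.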

Before we compute the “arc curve”, we’ll first need to compute the standard matrix profile. Here, an appropriate window size is about 210 data points (thanks to the knowledge of a subject matter/domain expert):

m = 210
mp = stumpy.stump(abp, m=m)

Now, to compute the “arc curve” and determine the location of the regime change, we can directly call the stumpy.fluss() function. However, note that stumpy.fluss() requires the following inputs:

  1. the matrix profile indices mp[:, 1] (not the matrix profile distances)
  2. an appropriate subsequence length, L (for convenience, we’ll just choose it to be equal to the window size, m=210)
  3. the number of regimes, n_regimes, to search for (2 regions in this case)
  4. an exclusion factor, excl_factor, to nullify the beginning and end of the arc curve (anywhere between 1–5 is reasonable according to the paper)

L = 210
cac, regime_locations = stumpy.fluss(mp[:, 1], L=L, n_regimes=2, excl_factor=1)

Notice that stumpy.fluss() actually returns something called the “corrected arc curve” (CAC), which accounts for the fact that there are typically fewer arcs crossing over a time point near the beginning and end of the time series and more potential for crossovers near the middle. Additionally, stumpy.fluss() returns the regimes, or location(s) of the dotted line(s). Let’s plot our original time series (top) along with the corrected arc curve (orange) and the single regime location (vertical dotted line).

fig, axs = plt.subplots(2, sharex=True, gridspec_kw={'hspace': 0})
axs[0].plot(range(abp.shape[0]), abp)
axs[0].axvline(x=regime_locations[0], linestyle="dashed")
axs[1].plot(range(cac.shape[0]), cac, color='C1')
axs[1].axvline(x=regime_locations[0], linestyle="dashed")
(Image by Author)

Here, we see that stumpy.fluss() has not only successfully identified that a regime change exists but was also able to clearly and cleanly separate the two regimes.
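As an aside, the “correction” in the corrected arc curve can be sketched roughly as follows: the raw arc counts are divided by an idealized arc curve (the counts you would expect if nearest neighbors were scattered with no particular structure, which forms an inverted parabola) and then capped at 1. This is only an illustration of the idea described in the paper, not STUMPY’s exact code:

import numpy as np

def corrected_arc_curve_sketch(arc_counts):
    # Idealized arc curve (IAC): the expected arc counts under a "no structure"
    # assumption, i.e., an inverted parabola peaking at roughly n/2
    n = arc_counts.shape[0]
    i = np.arange(n)
    iac = 2.0 * i * (n - i) / n
    # Normalize by the IAC and cap at 1 so that values near 0 flag likely regime changes
    return np.minimum(arc_counts / np.maximum(iac, 1e-12), 1.0)

The excl_factor argument mentioned earlier then nullifies the beginning and end of this curve, where the counts are least reliable.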

FLOSS

Unlike FLUSS, FLOSS is concerned with streaming data, and so it calculates a modified version of the corrected arc curve (CAC) that is strictly one-directional (CAC_1D) rather than bidirectional. That is, instead of expecting crossovers to be equally likely from both directions, we expect more crossovers to point toward the future (and fewer to point toward the past). So, we can manually compute the CAC_1D

# This is for demo purposes only. Use stumpy.floss() below!
cac_1d = stumpy._cac(mp[:, 3], L, bidirectional=False, excl_factor=1)

and compare the CAC_1D (blue) with the bidirectional CAC (orange) and we see that the global minimum is approximately in the same place (see Figure 10 in the original paper).

fig, axs = plt.subplots(2, sharex=True, gridspec_kw={'hspace': 0})
axs[0].plot(np.arange(abp.shape[0]), abp)
axs[0].axvline(x=regime_locations[0], linestyle="dashed")
axs[1].plot(range(cac.shape[0]), cac, color='C1')
axs[1].axvline(x=regime_locations[0], linestyle="dashed")
axs[1].plot(range(cac_1d.shape[0]), cac_1d)
(Image by Author)

Streaming Data with FLOSS

However, instead of manually computing CAC_1D like we did above on streaming data, we can actually call the stumpy.floss() function directly which instantiates a streaming object. To demonstrate the use of stumpy.floss(), let’s take some old_data and compute the matrix profile indices for it like we did above:

old_data = df.iloc[20000:20000+5000, 1].values  # This is well before the regime change has occurred
mp = stumpy.stump(old_data, m=m)

Now, we could do what we did earlier and compute the bidirectional corrected arc curve, but we’d like to see how the arc curve changes as a result of adding new data points. So, let’s define some new data that is to be streamed in:

new_data = df.iloc[25000:25000+5000, 1].values

Finally, we call the stumpy.floss() function to initialize a streaming object and pass in:

  1. the matrix profile generated from the old_data (only the matrix profile indices are used)
  2. the old_data used to generate the matrix profile in step 1
  3. the matrix profile window size, m=210
  4. the subsequence length, L=210
  5. the exclusion factor

stream = stumpy.floss(mp, old_data, m=m, L=L, excl_factor=1)

You can now update the stream with a new data point, t, via the stream.update(t) function. This will slide your window over by one data point and automatically update the following (a short example follows this list):

  1. the CAC_1D (accessed via the .cac_1d_ attribute)
  2. the matrix profile (accessed via the .P_ attribute)
  3. the matrix profile indices (accessed via the .I_ attribute)
  4. the sliding window of data used to produce the CAC_1D (accessed via the .T_ attribute; this should be the same length as old_data)
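As a quick illustration (the names and slices here are only for demonstration, and you should re-initialize the stream afterward so that the loop below starts from a clean state), a single update and a peek at the refreshed attributes might look like this:

t = new_data[0]            # the next incoming data point
stream.update(t)           # slide the window forward by one data point

print(stream.cac_1d_[:5])  # one-directional corrected arc curve
print(stream.P_[:5])       # updated matrix profile values
print(stream.I_[:5])       # updated matrix profile indices
print(stream.T_.shape)     # current sliding window (same length as old_data)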

Let’s continuously update our stream with the new_data one value at a time and store them in a list (you’ll see why in a second):

windows = []
for i, t in enumerate(new_data):
    stream.update(t)
    if i % 100 == 0:
        windows.append((stream.T_, stream.cac_1d_))

Below, you can see an animation that changes as a result of updating the stream with new data. For reference, we’ve also plotted the CAC_1D (orange) that we manually generated from above for the stationary data. You’ll see that halfway through the animation, the regime change occurs and the updated CAC_1D (blue) will be perfectly aligned with the orange curve.

fig, axs = plt.subplots(2, sharex=True, gridspec_kw={'hspace': 0})
axs[0].set_xlim((0, mp.shape[0]))
axs[0].set_ylim((-0.1, max(np.max(old_data), np.max(new_data))))
axs[1].set_xlim((0, mp.shape[0]))
axs[1].set_ylim((-0.1, 1.1))

lines = []
for ax in axs:
    line, = ax.plot([], [], lw=2)
    lines.append(line)
line, = axs[1].plot([], [], lw=2)
lines.append(line)

def init():
    for line in lines:
        line.set_data([], [])
    return lines

def animate(window):
    data_out, cac_out = window
    for line, data in zip(lines, [data_out, cac_out, cac_1d]):
        line.set_data(np.arange(data.shape[0]), data)
    return lines

anim = animation.FuncAnimation(fig, animate, init_func=init,
                               frames=windows, interval=100,
                               blit=True)

anim_out = anim.to_jshtml()
plt.close()  # Prevents a duplicate static image from displaying
if os.path.exists("None0000000.png"):
    os.remove("None0000000.png")  # Delete rogue temp file
HTML(anim_out)
(Image by Author)

Summary

And that’s it! You’ve just learned the basics of how to programmatically identify changing segments/regimes within your time series data using the matrix profile indices and leveraging stumpy.fluss() and stumpy.floss().
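For quick reference, here is the batch (FLUSS) workflow from this tutorial condensed into one place, using the same abp series and parameter values shown above:

import stumpy

m = 210                          # window size (chosen with domain knowledge)
L = 210                          # subsequence length for arc counting
mp = stumpy.stump(abp, m=m)      # matrix profile (distances and indices)
cac, regime_locations = stumpy.fluss(mp[:, 1], L=L, n_regimes=2, excl_factor=1)
print(regime_locations)          # location(s) of the detected regime change(s)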

Resources

Matrix Profile VIII
STUMPY Matrix Profile Documentation
STUMPY Matrix Profile Github Code Repository

Part 3: Time Series Chains | Part 5: Fast Approximate Matrix Profiles with STUMPY
