
Distance based time series classification in aeon

Distance based classifiers use a time series specific distance function to measure the similarity between time series. Time series distance functions are often called elastic distances, since they compensate for possible misalignment between series by shifting or editing the series.

Dynamic time warping is the best known elastic distance measure. The figure below visualises how a warping path is found between two series.

We have a range of elastic distance functions in the distances module. Please see the distances notebook for more information. Distance functions are most often used with a nearest neighbour (NN) classifier, but you can also use aeon distances directly with sklearn estimators, as sketched after the figure below.

[Figure: Example of warping two series to the best alignment.]
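
As a minimal sketch of this (not a cell from the original notebook), the snippet below computes a single DTW distance with aeon.distances.dtw_distance and then passes the same function as a callable metric to a plain sklearn KNeighborsClassifier. The data here is synthetic: X_demo is a 2D array with one univariate series per row, and the variable names are illustrative only.

[ ]:
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

from aeon.distances import dtw_distance

rng = np.random.default_rng(0)
X_demo = rng.standard_normal((20, 50))  # 20 univariate series of length 50, one per row
y_demo = np.array([0] * 10 + [1] * 10)

# A single elastic distance between two series
print(dtw_distance(X_demo[0], X_demo[1]))

# sklearn k-NN using an aeon elastic distance as the metric (simple, but slow)
knn_demo = KNeighborsClassifier(n_neighbors=1, metric=dtw_distance)
knn_demo.fit(X_demo, y_demo)
print(knn_demo.predict(X_demo[:2]))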

Load data and list distance based classifiers

[2]:
import warnings

from sklearn import metrics

from aeon.datasets import load_italy_power_demand
from aeon.registry import all_estimators

warnings.filterwarnings("ignore")
all_estimators("classifier", filter_tags={"algorithm_type": "distance"})

Distance based classifiers

The data was derived from twelve monthly electrical power demand time series from Italy and first used in the paper “Intelligent Icons: Integrating Lite-Weight Data Mining and Visualization into GUI Operating Systems”. The classification task is to distinguish days from October to March (inclusive) from days in April to September.

The dataset consists of 1096 rows in total. Each row represents one day of Italy's electric power demand. Each day is labelled either 1 or 2. 67 rows are used for training and the rest are for testing.

[ ]:
X_train, y_train = load_italy_power_demand(split="train", return_X_y=True)
X_test, y_test = load_italy_power_demand(split="test", return_X_y=True)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
[ ]:
import matplotlib.pyplot as plt
import numpy as np

# Top row: plot each class separately; bottom row: both classes overlaid on one axis
fig, axs = plt.subplots(2, 2, figsize=(8, 6))
axs[1, 1].axis("off")
axs[1, 0].axis("off")
# Add a single axis spanning the whole bottom row for the combined plot
ax_combined = fig.add_subplot(2, 1, (2, 3))
axs[0, 0].set_title("All days class 1")
axs[0, 1].set_title("All days class 2")
ax_combined.set_title("Both classes on top of each other")
for i in np.where(y_test == "1")[0]:
    axs[0, 0].plot(X_test[i][0], alpha=0.1, color="cornflowerblue", linestyle="solid")
    ax_combined.plot(X_test[i][0], alpha=0.1, color="cornflowerblue", linestyle="--")
for i in np.where(y_test == "2")[0]:
    axs[0, 1].plot(X_test[i][0], alpha=0.1, color="orange", linestyle="solid")
    ax_combined.plot(X_test[i][0], alpha=0.1, color="orange", linestyle=":")
[ ]:
from aeon.classification.distance_based import (
    ElasticEnsemble,
    KNeighborsTimeSeriesClassifier,
)

K-NN: KNeighborsTimeSeriesClassifier in aeon

k-NN is often called a lazy classifier, because little work is done in the fit operation: fit simply stores the training data. To make a prediction for a new time series, k-NN measures the distance between the new series and every series in the training data and records the classes of the k closest training series. The class labels of these nearest neighbours are used to make a prediction: if they all share the same label, then that is the prediction. If they differ, some form of voting mechanism is required; for example, we may predict the most common class label amongst the nearest neighbours of the test instance.

KNeighborsTimeSeriesClassifier in aeon can be configured to use any of the distance functions in the distances module, or it can be passed a bespoke callable. You can set the number of neighbours and the weights. Weights are used in the prediction process when the neighbours differ in class values. By default all neighbours have an equal vote. There is an option to weight by distance, meaning closer neighbours carry more weight in the vote, as illustrated in the sketch below.
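
To make the two weighting schemes concrete, here is a small illustrative helper (hypothetical, not aeon code) that mimics the voting step described above for a single test case: "uniform" takes the modal class of the k neighbours, while "distance" weights each neighbour's vote by the inverse of its distance.

[ ]:
import numpy as np

def vote(neighbour_classes, neighbour_distances, weights="uniform"):
    # neighbour_classes: class label of each of the k nearest training series
    # neighbour_distances: elastic distance from the test series to each neighbour
    classes = np.unique(neighbour_classes)
    if weights == "uniform":
        scores = [np.sum(neighbour_classes == c) for c in classes]
    else:  # "distance": closer neighbours get a larger say
        scores = [
            np.sum(1.0 / neighbour_distances[neighbour_classes == c]) for c in classes
        ]
    return classes[int(np.argmax(scores))]

# Two of three neighbours are class "2", but the single class "1" neighbour is
# much closer, so distance weighting flips the prediction.
print(vote(np.array(["1", "2", "2"]), np.array([0.1, 0.5, 0.6])))              # "2"
print(vote(np.array(["1", "2", "2"]), np.array([0.1, 0.5, 0.6]), "distance"))  # "1"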

[ ]:
knn = KNeighborsTimeSeriesClassifier(distance="msm", n_neighbors=3, weights="distance")
knn.fit(X_train, y_train)
knn_preds = knn.predict(X_test)
metrics.accuracy_score(y_test, knn_preds)

Elastic Ensemble: ElasticEnsemble in aeon

The first algorithm to significantly outperform 1-NN with DTW on the UCR data was the Elastic Ensemble (EE) [1]. EE is a weighted ensemble of eleven 1-NN classifiers using a range of elastic distance measures. It was the best performing distance based classifier in the bake off. Elastic distances can be slow, and EE requires cross validation to find the weight of each classifier in the ensemble. You can configure EE to use a subset of the distance functions and to restrict how much of the parameter space and training data it uses when finding the weights, which makes fitting faster.
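
As a rough sketch of the combination rule described in [1] (a hypothetical helper, not the aeon implementation): each 1-NN member predicts a class for a test series, and its vote is weighted by its cross-validation accuracy on the training data.

[ ]:
def ee_combine(member_predictions, member_cv_accuracies):
    # member_predictions: predicted class from each 1-NN member for one test series
    # member_cv_accuracies: each member's cross-validation accuracy on the train data
    scores = {}
    for pred, acc in zip(member_predictions, member_cv_accuracies):
        scores[pred] = scores.get(pred, 0.0) + acc
    return max(scores, key=scores.get)

# The single "1" vote wins because that member is far more accurate in training.
print(ee_combine(["1", "2", "2"], [0.95, 0.45, 0.40]))  # "1"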

[ ]:
ee = ElasticEnsemble(
    distance_measures=["dtw", "msm"],
    proportion_of_param_options=0.1,
    proportion_train_in_param_finding=0.3,
    proportion_train_for_test=0.5,
)
ee.fit(X_train, y_train)
ee_preds = ee.predict(X_test)
metrics.accuracy_score(y_test, ee_preds)

Proximity Forest

Proximity Forest [2] is a distance based ensemble of decision trees. It is the most accurate purely distance based technique for TSC that we know of. We do not currently have a working version of PF in aeon, but would very much like to have one; please see this issue: https://github.com/aeon-toolkit/aeon/issues/159

Performance on the UCR univariate datasets

You can find the distance based classifiers as follows. Note we do not have a Proximity Forest implementation in aeon yet, but we do have its results.

[ ]:
from aeon.registry import all_estimators

est = all_estimators("classifier", filter_tags={"algorithm_type": "distance"})
for c in est:
    print(c)
[ ]:
from aeon.benchmarking import get_estimator_results_as_array
from aeon.datasets.tsc_datasets import univariate

names = [t[0].replace("Classifier", "") for t in est]
names.append(
    "PF"
)  # Results from Java implementation, as are the ElasticEnsemble results

results, present_names = get_estimator_results_as_array(
    names, univariate, include_missing=False
)
results.shape
[ ]:
from aeon.visualisation import plot_boxplot_median, plot_critical_difference

plot_critical_difference(results, names)
[ ]:
plot_boxplot_median(results, names)

References

[1] Lines J, Bagnall A (2015) Time series classification with ensembles of elastic distance measures. Data Mining and Knowledge Discovery 29:565–592

[2] Lucas et al. (2019) Proximity Forest: an effective and scalable distance-based classifier. Data Mining and Knowledge Discovery 33: 607–635 https://arxiv.org/abs/1808.10594

