WEASEL_V2

class WEASEL_V2(min_window=4, norm_options=(False,), word_lengths=(7, 8), use_first_differences=(True, False), feature_selection='chi2_top_k', max_feature_count=30000, class_weight=None, n_jobs=1, random_state=None)[source]

Word Extraction for Time Series Classification (WEASEL) v2.0.

Overview: Given n series of length m, WEASEL is a dictionary classifier that builds a bag-of-patterns using SFA for different window lengths and learns a logistic regression classifier on this bag.

WEASEL 2.0 has three key parameters that are automatically set based on the length of the time series:

  1. Minimal window length: typically defaults to 4.
  2. Maximal window length: typically chosen from 24, 44 or 84, depending on the time series length.
  3. Ensemble size: typically chosen from 50, 100 or 150, to derive a feature vector of roughly 20k up to 70k features (distinct words).

From the other parameters passed, WEASEL chooses random values for each configuration. E.g. for each of the 150 configurations, a random value is chosen from the options below.
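For illustration only, a minimal sketch (not the library's internal code) of how one such per-configuration choice could be drawn from the option arrays with a seeded NumPy generator:

>>> import numpy as np
>>> rng = np.random.default_rng(0)
>>> norm_options, word_lengths = (False,), (7, 8)
>>> use_first_differences = (True, False)
>>> # one random value per option array for a single configuration (illustrative only)
>>> config = {
...     "norm": rng.choice(norm_options),
...     "word_length": rng.choice(word_lengths),
...     "first_diff": rng.choice(use_first_differences),
... }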

Parameters:
min_window : int, default=4

Minimal length of the subsequences to compute words from.

norm_options : array of bool, default=[False]

If the array contains True, words are computed over mean-normed TS. If the array contains False, words are computed over raw TS. If both are set, words are computed for both. A value will be randomly chosen for each parameter-configuration.

word_lengths : array of int, default=[7, 8]

Length of the words to compute. A value will be randomly chosen for each parameter-configuration.

use_first_differences : array of bool, default=[True, False]

If the array contains True, words are computed over first order differences. If the array contains False, words are computed over the raw time series. If both are set, words are computed for both.

feature_selection : str, default="chi2_top_k"

Sets the feature selection strategy to be used. Options from {"chi2_top_k", "none", "random"}. Large amounts of memory may be needed depending on the setting of bigrams (True uses more) or alpha (larger uses more). "chi2_top_k" reduces the number of words to at most max_feature_count, dropping values based on p-value. "random" reduces the number to at most max_feature_count by randomly selecting features. "none" does not apply any feature selection and yields a large bag of words.

max_feature_count : int, default=30_000

Size of the dictionary (number of words to use) if feature_selection is set to "chi2_top_k" or "random". Otherwise ignored.

class_weight : {"balanced", "balanced_subsample"}, dict or list of dicts, default=None

From the sklearn documentation: if not given, all classes are supposed to have weight one. The "balanced" mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)). The "balanced_subsample" mode is the same as "balanced" except that weights are computed based on the bootstrap sample for every tree grown. For multi-output problems, the weights of each column of y will be multiplied. Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified. See the short sketch after this parameter list.

random_state : int or None, default=None

If int, random_state is the seed used by the random number generator; If None, the random number generator is the RandomState instance used by np.random.
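For illustration, a short sketch using the parameter names documented above: capping the vocabulary via random feature selection and computing the "balanced" weight formula by hand (values are illustrative only):

>>> import numpy as np
>>> from aeon.classification.dictionary_based import WEASEL_V2
>>> clf = WEASEL_V2(feature_selection="random", max_feature_count=10000,
...                 class_weight="balanced")
>>> # the "balanced" weights follow n_samples / (n_classes * np.bincount(y))
>>> y = np.array([0, 0, 0, 1])
>>> weights = len(y) / (len(np.unique(y)) * np.bincount(y))  # approx [0.667, 2.0]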

Attributes:
n_classes_ : int

The number of classes.

classes_ : list

The class labels.

See also

MUSE

References

[1] Patrick Schäfer and Ulf Leser, “WEASEL 2.0 – A Random Dilated Dictionary Transform for Fast, Accurate and Memory Constrained Time Series Classification”, Preprint, https://arxiv.org/abs/2301.10194

Examples

>>> from aeon.classification.dictionary_based import WEASEL_V2
>>> from aeon.datasets import load_unit_test
>>> X_train, y_train = load_unit_test(split="train")
>>> X_test, y_test = load_unit_test(split="test")
>>> clf = WEASEL_V2()
>>> clf.fit(X_train, y_train)
WEASEL_V2(...)
>>> y_pred = clf.predict(X_test)
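Class probabilities can also be obtained from the fitted classifier, one column per label in classes_ (continuing the example above):

>>> y_proba = clf.predict_proba(X_test)
>>> y_proba.shape == (len(X_test), clf.n_classes_)
True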

Methods

clone([random_state])

Obtain a clone of the object with the same hyperparameters.

fit(X, y)

Fit time series classifier to training data.

fit_predict(X, y, **kwargs)

Fits the classifier and predicts class labels for X.

fit_predict_proba(X, y, **kwargs)

Fits the classifier and predicts class label probabilities for X.

get_class_tag(tag_name[, raise_error, ...])

Get tag value from estimator class (only class tags).

get_class_tags()

Get class tags from estimator class and all its parent classes.

get_fitted_params([deep])

Get fitted parameters.

get_metadata_routing()

Sklearn metadata routing.

get_params([deep])

Get parameters for this estimator.

get_tag(tag_name[, raise_error, ...])

Get tag value from estimator class.

get_tags()

Get tags from estimator.

predict(X)

Predicts class labels for time series in X.

predict_proba(X)

Predicts class label probabilities for time series in X.

reset([keep])

Reset the object to a clean post-init state.

score(X, y[, metric, use_proba, metric_params])

Scores predicted labels against ground truth labels on X.

set_params(**params)

Set the parameters of this estimator.

set_tags(**tag_dict)

Set dynamic tags to given values.

clone(random_state=None)[source]

Obtain a clone of the object with the same hyperparameters.

A clone is a different object without shared references, in post-init state. This function is equivalent to returning sklearn.clone of self. Equal in value to type(self)(**self.get_params(deep=False)).

Parameters:
random_state : int, RandomState instance, or None, default=None

Sets the random state of the clone. If None, the random state is not set. If int, random_state is the seed used by the random number generator. If RandomState instance, random_state is the random number generator.

Returns:
estimator : object

Instance of type(self), clone of self (see above)
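Examples

A brief usage sketch; the clone keeps the hyperparameters but is unfitted:

>>> from aeon.classification.dictionary_based import WEASEL_V2
>>> clf = WEASEL_V2(max_feature_count=10000)
>>> clf2 = clf.clone(random_state=42)
>>> clf2.get_params()["max_feature_count"]
10000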

fit(X, y) → BaseCollectionEstimator[source]

Fit time series classifier to training data.

Parameters:
X : np.ndarray or list

Input data, any number of channels, equal length series of shape (n_cases, n_channels, n_timepoints), or 2D np.array (univariate, equal length series) of shape (n_cases, n_timepoints), or list of numpy arrays (any number of channels, unequal length series) of shape [n_cases], each a 2D np.array of shape (n_channels, n_timepoints_i), where n_timepoints_i is the length of series i. Other types are allowed and converted into one of the above.

Different estimators have different capabilities to handle different types of input. If self.get_tag("capability:multivariate") is False, they cannot handle multivariate series, so either n_channels == 1 is true or X is 2D of shape (n_cases, n_timepoints). If self.get_tag("capability:unequal_length") is False, they cannot handle unequal length input. In both situations, a ValueError is raised if X has a characteristic that the estimator does not have the capability to handle.

y : np.ndarray

1D np.array of float or str, of shape (n_cases) - class labels (ground truth) for fitting; indices correspond to instance indices in X.

Returns:
self : BaseClassifier

Reference to self.

Notes

Changes state by creating a fitted model that updates attributes ending in “_” and sets is_fitted flag to True.

fit_predict(X, y, **kwargs) → ndarray[source]

Fits the classifier and predicts class labels for X.

fit_predict produces prediction estimates using just the train data. By default, this is through 10-fold cross-validation, although some estimators may utilise specialist techniques such as out-of-bag estimates or leave-one-out cross-validation.

Classifiers which override _fit_predict will have the capability:train_estimate tag set to True.

Generally, this will not be the same as fitting on the whole train data then making train predictions. To do this, you should call fit(X,y).predict(X)

Parameters:
X : np.ndarray or list

Input data, any number of channels, equal length series of shape (n_cases, n_channels, n_timepoints), or 2D np.array (univariate, equal length series) of shape (n_cases, n_timepoints), or list of numpy arrays (any number of channels, unequal length series) of shape [n_cases], each a 2D np.array of shape (n_channels, n_timepoints_i), where n_timepoints_i is the length of series i. Other types are allowed and converted into one of the above.

Different estimators have different capabilities to handle different types of input. If self.get_tag("capability:multivariate") is False, they cannot handle multivariate series, so either n_channels == 1 is true or X is 2D of shape (n_cases, n_timepoints). If self.get_tag("capability:unequal_length") is False, they cannot handle unequal length input. In both situations, a ValueError is raised if X has a characteristic that the estimator does not have the capability to handle.

y : np.ndarray

1D np.array of float or str, of shape (n_cases) - class labels (ground truth) for fitting; indices correspond to instance indices in X.

kwargs : dict

Keyword arguments to configure the default cross-validation if the base class default fit_predict is used (i.e. if _fit_predict is not overridden). If _fit_predict is overridden, kwargs may not function as expected. If _fit_predict is not overridden, valid input is an integer cv_size, the number of cross-validation folds used to estimate the train data. If cv_size is not passed, the default is 10. If cv_size is greater than the minimum number of samples in any class, it is set to this minimum.

Returns:
predictions : np.ndarray

1D np.array of shape (n_cases) - predicted class labels; indices correspond to instance indices in X.
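Examples

A minimal sketch, assuming the base class default _fit_predict is used so that the documented cv_size keyword applies:

>>> from aeon.classification.dictionary_based import WEASEL_V2
>>> from aeon.datasets import load_unit_test
>>> X_train, y_train = load_unit_test(split="train")
>>> y_train_pred = WEASEL_V2().fit_predict(X_train, y_train, cv_size=5)  # shape (n_cases,)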

fit_predict_proba(X, y, **kwargs) → ndarray[source]

Fits the classifier and predicts class label probabilities for X.

fit_predict_proba produces probability estimates using just the train data. By default, this is through 10-fold cross-validation, although some estimators may utilise specialist techniques such as out-of-bag estimates or leave-one-out cross-validation.

Classifiers which override _fit_predict_proba will have the capability:train_estimate tag set to True.

Generally, this will not be the same as fitting on the whole train data then making train predictions. To do this, you should call fit(X,y).predict_proba(X)

Parameters:
X : np.ndarray or list

Input data, any number of channels, equal length series of shape (n_cases, n_channels, n_timepoints), or 2D np.array (univariate, equal length series) of shape (n_cases, n_timepoints), or list of numpy arrays (any number of channels, unequal length series) of shape [n_cases], each a 2D np.array of shape (n_channels, n_timepoints_i), where n_timepoints_i is the length of series i. Other types are allowed and converted into one of the above.

Different estimators have different capabilities to handle different types of input. If self.get_tag("capability:multivariate") is False, they cannot handle multivariate series, so either n_channels == 1 is true or X is 2D of shape (n_cases, n_timepoints). If self.get_tag("capability:unequal_length") is False, they cannot handle unequal length input. In both situations, a ValueError is raised if X has a characteristic that the estimator does not have the capability to handle.

y : np.ndarray

1D np.array of float or str, of shape (n_cases) - class labels (ground truth) for fitting; indices correspond to instance indices in X.

kwargs : dict

Keyword arguments to configure the default cross-validation if the base class default fit_predict is used (i.e. if _fit_predict is not overridden). If _fit_predict is overridden, kwargs may not function as expected. If _fit_predict is not overridden, valid input is an integer cv_size, the number of cross-validation folds used to estimate the train data. If cv_size is not passed, the default is 10. If cv_size is greater than the minimum number of samples in any class, it is set to this minimum.

Returns:
probabilities : np.ndarray

2D array of shape (n_cases, n_classes) - predicted class probabilities. First dimension indices correspond to instance indices in X, second dimension indices correspond to class labels; the (i, j)-th entry is the estimated probability that the i-th instance is of class j.

classmethod get_class_tag(tag_name, raise_error=True, tag_value_default=None)[source]

Get tag value from estimator class (only class tags).

Parameters:
tag_name : str

Name of tag value.

raise_error : bool, default=True

Whether a ValueError is raised when the tag is not found.

tag_value_default : any type, default=None

Default/fallback value if tag is not found and error is not raised.

Returns:
tag_value

Value of the tag_name tag in cls. If not found, an error is raised if raise_error is True, otherwise tag_value_default is returned.

Raises:
ValueError

if raise_error is True and tag_name is not in self.get_tags().keys()

Examples

>>> from aeon.classification import DummyClassifier
>>> DummyClassifier.get_class_tag("capability:multivariate")
True
classmethod get_class_tags()[source]

Get class tags from estimator class and all its parent classes.

Returns:
collected_tags : dict

Dictionary of tag name and tag value pairs. Collected from _tags class attribute via nested inheritance. These are not overridden by dynamic tags set by set_tags or class __init__ calls.

get_fitted_params(deep=True)[source]

Get fitted parameters.

State required:

Requires state to be “fitted”.

Parameters:
deep : bool, default=True

If True, will return the fitted parameters for this estimator and contained subobjects that are estimators.

Returns:
fitted_params : dict

Fitted parameter names mapped to their values.

get_metadata_routing()[source]

Sklearn metadata routing.

Not supported by aeon estimators.

get_params(deep=True)[source]

Get parameters for this estimator.

Parameters:
deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : dict

Parameter names mapped to their values.

get_tag(tag_name, raise_error=True, tag_value_default=None)[source]

Get tag value from estimator class.

Includes dynamic and overridden tags.

Parameters:
tag_name : str

Name of tag to be retrieved.

raise_error : bool, default=True

Whether a ValueError is raised when the tag is not found.

tag_value_default : any type, default=None

Default/fallback value if tag is not found and error is not raised.

Returns:
tag_value

Value of the tag_name tag in self. If not found, an error is raised if raise_error is True, otherwise tag_value_default is returned.

Raises:
ValueError

if raise_error is True and tag_name is not in self.get_tags().keys()

Examples

>>> from aeon.classification import DummyClassifier
>>> d = DummyClassifier()
>>> d.get_tag("capability:multivariate")
True
get_tags()[source]

Get tags from estimator.

Includes dynamic and overridden tags.

Returns:
collected_tags : dict

Dictionary of tag name and tag value pairs. Collected from _tags class attribute via nested inheritance and then any overridden and new tags from __init__ or set_tags.

predict(X) → ndarray[source]

Predicts class labels for time series in X.

Parameters:
X : np.ndarray or list

Input data, any number of channels, equal length series of shape (n_cases, n_channels, n_timepoints), or 2D np.array (univariate, equal length series) of shape (n_cases, n_timepoints), or list of numpy arrays (any number of channels, unequal length series) of shape [n_cases], each a 2D np.array of shape (n_channels, n_timepoints_i), where n_timepoints_i is the length of series i. Other types are allowed and converted into one of the above.

Different estimators have different capabilities to handle different types of input. If self.get_tag("capability:multivariate") is False, they cannot handle multivariate series, so either n_channels == 1 is true or X is 2D of shape (n_cases, n_timepoints). If self.get_tag("capability:unequal_length") is False, they cannot handle unequal length input. In both situations, a ValueError is raised if X has a characteristic that the estimator does not have the capability to handle.

Returns:
predictions : np.ndarray

1D np.array of float, of shape (n_cases) - predicted class labels; indices correspond to instance indices in X.

predict_proba(X) → ndarray[source]

Predicts class label probabilities for time series in X.

Parameters:
X : np.ndarray or list

Input data, any number of channels, equal length series of shape (n_cases, n_channels, n_timepoints), or 2D np.array (univariate, equal length series) of shape (n_cases, n_timepoints), or list of numpy arrays (any number of channels, unequal length series) of shape [n_cases], each a 2D np.array of shape (n_channels, n_timepoints_i), where n_timepoints_i is the length of series i. Other types are allowed and converted into one of the above.

Different estimators have different capabilities to handle different types of input. If self.get_tag("capability:multivariate") is False, they cannot handle multivariate series, so either n_channels == 1 is true or X is 2D of shape (n_cases, n_timepoints). If self.get_tag("capability:unequal_length") is False, they cannot handle unequal length input. In both situations, a ValueError is raised if X has a characteristic that the estimator does not have the capability to handle.

Returns:
probabilities : np.ndarray

2D array of shape (n_cases, n_classes) - predicted class probabilities. First dimension indices correspond to instance indices in X, second dimension indices correspond to class labels; the (i, j)-th entry is the estimated probability that the i-th instance is of class j.

reset(keep=None)[source]

Reset the object to a clean post-init state.

After a self.reset() call, self is equal or similar in value to type(self)(**self.get_params(deep=False)), assuming no other attributes were kept using keep.

Detailed behaviour:

Removes any object attributes, except:

  - hyper-parameters (arguments of __init__)
  - object attributes containing double-underscores, i.e., the string “__”

Runs __init__ with current values of hyperparameters (result of get_params).

Not affected by the reset are:

  - object attributes containing double-underscores
  - class and object methods, class attributes
  - any attributes specified in the keep argument

Parameters:
keep : None, str, or list of str, default=None

If None, all attributes are removed except hyperparameters. If str, only the attribute with this name is kept. If list of str, only the attributes with these names are kept.

Returns:
self : object

Reference to self.
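Examples

A short sketch of keeping one attribute across a reset; note_ is an illustrative attribute, not part of the estimator:

>>> from aeon.classification.dictionary_based import WEASEL_V2
>>> clf = WEASEL_V2()
>>> clf.note_ = "kept"
>>> clf = clf.reset(keep="note_")
>>> clf.note_
'kept'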

score(X, y, metric='accuracy', use_proba=False, metric_params=None) → float[source]

Scores predicted labels against ground truth labels on X.

Parameters:
X : np.ndarray or list

Input data, any number of channels, equal length series of shape (n_cases, n_channels, n_timepoints), or 2D np.array (univariate, equal length series) of shape (n_cases, n_timepoints), or list of numpy arrays (any number of channels, unequal length series) of shape [n_cases], each a 2D np.array of shape (n_channels, n_timepoints_i), where n_timepoints_i is the length of series i. Other types are allowed and converted into one of the above.

Different estimators have different capabilities to handle different types of input. If self.get_tag("capability:multivariate") is False, they cannot handle multivariate series, so either n_channels == 1 is true or X is 2D of shape (n_cases, n_timepoints). If self.get_tag("capability:unequal_length") is False, they cannot handle unequal length input. In both situations, a ValueError is raised if X has a characteristic that the estimator does not have the capability to handle.

y : np.ndarray

1D np.array of float or str, of shape (n_cases) - class labels (ground truth); indices correspond to instance indices in X.

metric : Union[str, callable], default="accuracy"

Defines the scoring metric to test the fit of the model. For supported string arguments, check sklearn.metrics.get_scorer_names.

use_proba : bool, default=False

Whether the scorer is applied to probability estimates rather than predicted labels.

metric_params : dict, default=None

Contains parameters to be passed to the scoring function. If None, no parameters are passed.

Returns:
score : float

Score of predict(X) against y under the specified metric.
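Examples

A brief sketch, assuming the metric string is one supported via sklearn.metrics.get_scorer_names as described above:

>>> from aeon.classification.dictionary_based import WEASEL_V2
>>> from aeon.datasets import load_unit_test
>>> X_train, y_train = load_unit_test(split="train")
>>> X_test, y_test = load_unit_test(split="test")
>>> clf = WEASEL_V2().fit(X_train, y_train)
>>> acc = clf.score(X_test, y_test, metric="balanced_accuracy")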

set_params(**params)[source]

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:
**params : dict

Estimator parameters.

Returns:
self : estimator instance

Estimator instance.
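Examples

A simple, non-nested usage sketch:

>>> from aeon.classification.dictionary_based import WEASEL_V2
>>> clf = WEASEL_V2().set_params(min_window=8, max_feature_count=10000)
>>> clf.get_params()["min_window"]
8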

set_tags(**tag_dict)[source]

Set dynamic tags to given values.

Parameters:
**tag_dict : dict

Dictionary of tag name and tag value pairs.

Returns:
self : object

Reference to self.