IndividualInceptionClassifier

class IndividualInceptionClassifier(n_filters=32, n_conv_per_layer=3, kernel_size=40, use_max_pooling=True, max_pool_size=3, strides=1, dilation_rate=1, padding='same', activation='relu', use_bias=False, use_residual=True, use_bottleneck=True, bottleneck_size=32, depth=6, use_custom_filters=False, file_path='./', save_best_model=False, save_last_model=False, save_init_model=False, best_file_name='best_model', last_file_name='last_model', init_file_name='init_model', batch_size=64, use_mini_batch_size=False, n_epochs=1500, callbacks=None, random_state=None, verbose=False, loss='categorical_crossentropy', metrics='accuracy', optimizer=None)[source]

Single InceptionTime classifier.

Parameters:
depth : int, default = 6

The number of inception modules used.

n_filters : int or list of int, default = 32

The number of filters used in one inception module. If not a list, the same number of filters is used in all inception modules.

n_conv_per_layer : int or list of int, default = 3

The number of convolution layers in each inception module. If not a list, the same number of convolution layers is used in all inception modules.

kernel_size : int or list of int, default = 40

The head kernel size used for each inception module. If not a list, the same is used in all inception modules.

use_max_pooling : bool or list of bool, default = True

Whether to use a max pooling layer in each inception module. If not a list, the same is used in all inception modules.

max_pool_size : int or list of int, default = 3

The size of the max pooling layer. If not a list, the same is used in all inception modules.

strides : int or list of int, default = 1

The strides of kernels in convolution layers for each inception module. If not a list, the same is used in all inception modules.

dilation_rate : int or list of int, default = 1

The dilation rate of convolutions in each inception module. If not a list, the same is used in all inception modules.

padding : str or list of str, default = "same"

The type of padding used for convolution in each inception module. If not a list, the same is used in all inception modules.

activation : str or list of str, default = "relu"

The activation function used in each inception module. If not a list, the same is used in all inception modules.

use_bias : bool or list of bool, default = False

Whether convolutions should use bias values in each inception module. If not a list, the same is used in all inception modules.

use_residual : bool, default = True

Whether to use residual connections throughout the network.

use_bottleneck : bool, default = True

Whether to use bottleneck layers throughout the network.

bottleneck_size : int, default = 32

The bottleneck size, used when use_bottleneck is True.

use_custom_filters : bool, default = False

Whether to use custom (hand-crafted) filters in the first inception module.

batch_size : int, default = 64

The number of samples per gradient update.

use_mini_batch_size : bool, default = False

Whether to use the mini batch size formula of Wang et al.

n_epochs : int, default = 1500

The number of epochs to train the model.

callbacks : keras callback or list of callbacks, default = None

The default list of callbacks is set to ModelCheckpoint and ReduceLROnPlateau.

file_path : str, default = "./"

The file path used by the ModelCheckpoint callback when saving the model.

save_best_model : bool, default = False

Whether to save the best model. If the ModelCheckpoint callback is used by default, setting this to True prevents the automatic deletion of the best saved model from file, and the user can choose the file name.

save_last_model : bool, default = False

Whether to save the model from the last epoch trained, using the base class method save_last_model_to_file.

save_init_model : bool, default = False

Whether to save the initialization of the model.

best_file_name : str, default = "best_model"

The file name of the best model. If save_best_model is False, this parameter is ignored.

last_file_name : str, default = "last_model"

The file name of the last model. If save_last_model is False, this parameter is ignored.

init_file_name : str, default = "init_model"

The file name of the init model. If save_init_model is False, this parameter is ignored.

random_state : int, RandomState instance or None, default = None

If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random. Seeded random number generation can only be guaranteed on CPU processing; GPU processing will be non-deterministic.

verbose : bool, default = False

Whether to output extra information.

optimizer : keras.optimizer, default = tf.keras.optimizers.Adam()

The keras optimizer used for training.

loss : str, default = "categorical_crossentropy"

The name of the keras training loss.

metrics : str or list[str], default = "accuracy"

The evaluation metrics to use during training. If a single string metric is provided, it will be used as the only metric. If a list of metrics is provided, all will be used for evaluation.

Notes

Adapted from the implementation of Fawaz et al. https://github.com/hfawaz/InceptionTime/blob/master/classifiers/inception.py

and Ismail-Fawaz et al. https://github.com/MSD-IRIMAS/CF-4-TSC

References

[1] Fawaz et al. InceptionTime: Finding AlexNet for Time Series Classification, Data Mining and Knowledge Discovery, 34, 2020.

[2] Ismail-Fawaz et al. Deep Learning For Time Series Classification Using New Hand-Crafted Convolution Filters, 2022 IEEE International Conference on Big Data.

Examples

>>> from aeon.classification.deep_learning import IndividualInceptionClassifier
>>> from aeon.datasets import load_unit_test
>>> X_train, y_train = load_unit_test(split="train")
>>> X_test, y_test = load_unit_test(split="test")
>>> inc = IndividualInceptionClassifier(n_epochs=20, batch_size=4)
>>> inc.fit(X_train, y_train)  
IndividualInceptionClassifier(...)

Methods

build_model(input_shape, n_classes, **kwargs)

Construct a compiled, un-trained, keras model that is ready for training.

clone([random_state])

Obtain a clone of the object with the same hyperparameters.

convert_y_to_keras(y)

Convert y to required Keras format.

fit(X, y)

Fit time series classifier to training data.

fit_predict(X, y, **kwargs)

Fits the classifier and predicts class labels for X.

fit_predict_proba(X, y, **kwargs)

Fits the classifier and predicts class label probabilities for X.

get_class_tag(tag_name[, raise_error, ...])

Get tag value from estimator class (only class tags).

get_class_tags()

Get class tags from estimator class and all its parent classes.

get_fitted_params([deep])

Get fitted parameters.

get_metadata_routing()

Sklearn metadata routing.

get_params([deep])

Get parameters for this estimator.

get_tag(tag_name[, raise_error, ...])

Get tag value from estimator class.

get_tags()

Get tags from estimator.

load_model(model_path, classes)

Load a pre-trained keras model instead of fitting.

predict(X)

Predicts class labels for time series in X.

predict_proba(X)

Predicts class label probabilities for time series in X.

reset([keep])

Reset the object to a clean post-init state.

save_last_model_to_file([file_path])

Save the last epoch of the trained deep learning model.

score(X, y[, metric, use_proba, metric_params])

Scores predicted labels against ground truth labels on X.

set_params(**params)

Set the parameters of this estimator.

set_tags(**tag_dict)

Set dynamic tags to given values.

summary()

Summary function to return the losses/metrics for model fit.

build_model(input_shape, n_classes, **kwargs)[source]

Construct a compiled, un-trained, keras model that is ready for training.

Parameters:
input_shape : tuple

The shape of the data fed into the input layer.

n_classes : int

The number of classes, which shall become the size of the output layer.

Returns:
output : a compiled Keras Model
clone(random_state=None)[source]

Obtain a clone of the object with the same hyperparameters.

A clone is a different object without shared references, in post-init state. This function is equivalent to returning sklearn.clone of self. Equal in value to type(self)(**self.get_params(deep=False)).

Parameters:
random_state : int, RandomState instance, or None, default=None

Sets the random state of the clone. If None, the random state is not set. If int, random_state is the seed used by the random number generator. If RandomState instance, random_state is the random number generator.

Returns:
estimator : object

Instance of type(self), clone of self (see above).

convert_y_to_keras(y)[source]

Convert y to required Keras format.

fit(X, y) → BaseCollectionEstimator[source]

Fit time series classifier to training data.

Parameters:
X : np.ndarray or list

Input data, any number of channels, equal length series of shape (n_cases, n_channels, n_timepoints), or 2D np.array (univariate, equal length series) of shape (n_cases, n_timepoints), or list of numpy arrays (any number of channels, unequal length series) of shape [n_cases], each a 2D np.array (n_channels, n_timepoints_i), where n_timepoints_i is the length of series i. Other types are allowed and converted into one of the above.

Different estimators have different capabilities to handle different types of input. If self.get_tag("capability:multivariate") is False, they cannot handle multivariate series, so either n_channels == 1 is true or X is 2D of shape (n_cases, n_timepoints). If self.get_tag("capability:unequal_length") is False, they cannot handle unequal length input. In both situations, a ValueError is raised if X has a characteristic that the estimator does not have the capability to handle.

y : np.ndarray

1D np.array of float or str, of shape (n_cases) - class labels (ground truth) for fitting, indices corresponding to instance indices in X.

Returns:
self : BaseClassifier

Reference to self.

Notes

Changes state by creating a fitted model that updates attributes ending in “_” and sets is_fitted flag to True.

fit_predict(X, y, **kwargs) → ndarray[source]

Fits the classifier and predicts class labels for X.

fit_predict produces prediction estimates using just the train data. By default, this is through 10x cross validation, although some estimators may utilise specialist techniques such as out-of-bag estimates or leave-one-out cross-validation.

Classifiers which override _fit_predict will have the capability:train_estimate tag set to True.

Generally, this will not be the same as fitting on the whole train data then making train predictions. To do this, you should call fit(X,y).predict(X)

Parameters:
X : np.ndarray or list

Input data, any number of channels, equal length series of shape (n_cases, n_channels, n_timepoints), or 2D np.array (univariate, equal length series) of shape (n_cases, n_timepoints), or list of numpy arrays (any number of channels, unequal length series) of shape [n_cases], each a 2D np.array (n_channels, n_timepoints_i), where n_timepoints_i is the length of series i. Other types are allowed and converted into one of the above.

Different estimators have different capabilities to handle different types of input. If self.get_tag("capability:multivariate") is False, they cannot handle multivariate series, so either n_channels == 1 is true or X is 2D of shape (n_cases, n_timepoints). If self.get_tag("capability:unequal_length") is False, they cannot handle unequal length input. In both situations, a ValueError is raised if X has a characteristic that the estimator does not have the capability to handle.

y : np.ndarray

1D np.array of float or str, of shape (n_cases) - class labels (ground truth) for fitting, indices corresponding to instance indices in X.

kwargs : dict

Keyword arguments to configure the default cross validation if the base class default fit_predict is used (i.e. if _fit_predict is not overridden). If _fit_predict is overridden, kwargs may not function as expected. If _fit_predict is not overridden, valid input is cv_size, an integer giving the number of cross validation folds used to estimate the train data. If cv_size is not passed, the default is 10. If cv_size is greater than the minimum number of samples in any class, it is set to this minimum.

Returns:
predictions : np.ndarray

shape [n_cases] - predicted class labels, indices correspond to instance indices in X.

fit_predict_proba(X, y, **kwargs) → ndarray[source]

Fits the classifier and predicts class label probabilities for X.

fit_predict_proba produces probability estimates using just the train data. By default, this is through 10x cross validation, although some estimators may utilise specialist techniques such as out-of-bag estimates or leave-one-out cross-validation.

Classifiers which override _fit_predict_proba will have the capability:train_estimate tag set to True.

Generally, this will not be the same as fitting on the whole train data then making train predictions. To do this, you should call fit(X,y).predict_proba(X)

Parameters:
X : np.ndarray or list

Input data, any number of channels, equal length series of shape (n_cases, n_channels, n_timepoints), or 2D np.array (univariate, equal length series) of shape (n_cases, n_timepoints), or list of numpy arrays (any number of channels, unequal length series) of shape [n_cases], each a 2D np.array (n_channels, n_timepoints_i), where n_timepoints_i is the length of series i. Other types are allowed and converted into one of the above.

Different estimators have different capabilities to handle different types of input. If self.get_tag("capability:multivariate") is False, they cannot handle multivariate series, so either n_channels == 1 is true or X is 2D of shape (n_cases, n_timepoints). If self.get_tag("capability:unequal_length") is False, they cannot handle unequal length input. In both situations, a ValueError is raised if X has a characteristic that the estimator does not have the capability to handle.

y : np.ndarray

1D np.array of float or str, of shape (n_cases) - class labels (ground truth) for fitting, indices corresponding to instance indices in X.

kwargs : dict

Keyword arguments to configure the default cross validation if the base class default fit_predict is used (i.e. if _fit_predict is not overridden). If _fit_predict is overridden, kwargs may not function as expected. If _fit_predict is not overridden, valid input is cv_size, an integer giving the number of cross validation folds used to estimate the train data. If cv_size is not passed, the default is 10. If cv_size is greater than the minimum number of samples in any class, it is set to this minimum.

Returns:
probabilities : np.ndarray

2D array of shape (n_cases, n_classes) - predicted class probabilities. First dimension indices correspond to instance indices in X, second dimension indices correspond to class labels; the (i, j)-th entry is the estimated probability that the i-th instance is of class j.

classmethod get_class_tag(tag_name, raise_error=True, tag_value_default=None)[source]

Get tag value from estimator class (only class tags).

Parameters:
tag_name : str

Name of tag value.

raise_error : bool, default=True

Whether a ValueError is raised when the tag is not found.

tag_value_default : any type, default=None

Default/fallback value if tag is not found and error is not raised.

Returns:
tag_value

Value of the tag_name tag in cls. If not found, an error is raised if raise_error is True, otherwise tag_value_default is returned.

Raises:
ValueError

if raise_error is True and tag_name is not in self.get_tags().keys()

Examples

>>> from aeon.classification import DummyClassifier
>>> DummyClassifier.get_class_tag("capability:multivariate")
True
classmethod get_class_tags()[source]

Get class tags from estimator class and all its parent classes.

Returns:
collected_tags : dict

Dictionary of tag name and tag value pairs. Collected from _tags class attribute via nested inheritance. These are not overridden by dynamic tags set by set_tags or class __init__ calls.

get_fitted_params(deep=True)[source]

Get fitted parameters.

State required:

Requires state to be “fitted”.

Parameters:
deep : bool, default=True

If True, will return the fitted parameters for this estimator and contained subobjects that are estimators.

Returns:
fitted_params : dict

Fitted parameter names mapped to their values.

get_metadata_routing()[source]

Sklearn metadata routing.

Not supported by aeon estimators.

get_params(deep=True)[source]

Get parameters for this estimator.

Parameters:
deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : dict

Parameter names mapped to their values.

get_tag(tag_name, raise_error=True, tag_value_default=None)[source]

Get tag value from estimator class.

Includes dynamic and overridden tags.

Parameters:
tag_name : str

Name of tag to be retrieved.

raise_error : bool, default=True

Whether a ValueError is raised when the tag is not found.

tag_value_default : any type, default=None

Default/fallback value if tag is not found and error is not raised.

Returns:
tag_value

Value of the tag_name tag in self. If not found, an error is raised if raise_error is True, otherwise tag_value_default is returned.

Raises:
ValueError

if raise_error is True and tag_name is not in self.get_tags().keys()

Examples

>>> from aeon.classification import DummyClassifier
>>> d = DummyClassifier()
>>> d.get_tag("capability:multivariate")
True
get_tags()[source]

Get tags from estimator.

Includes dynamic and overridden tags.

Returns:
collected_tags : dict

Dictionary of tag name and tag value pairs. Collected from _tags class attribute via nested inheritance and then any overridden and new tags from __init__ or set_tags.

load_model(model_path, classes)[source]

Load a pre-trained keras model instead of fitting.

When calling this function, all functionalities can be used such as predict, predict_proba etc. with the loaded model.

Parameters:
model_path : str (path including model name and extension)

The path where the model is saved, including the model name with a ".keras" extension. Example: model_path="path/to/file/best_model.keras"

classes : np.ndarray

The set of unique classes the pre-trained loaded model was trained to predict during the classification task.

Returns:
None
predict(X) → ndarray[source]

Predicts class labels for time series in X.

Parameters:
X : np.ndarray or list

Input data, any number of channels, equal length series of shape (n_cases, n_channels, n_timepoints), or 2D np.array (univariate, equal length series) of shape (n_cases, n_timepoints), or list of numpy arrays (any number of channels, unequal length series) of shape [n_cases], each a 2D np.array (n_channels, n_timepoints_i), where n_timepoints_i is the length of series i. Other types are allowed and converted into one of the above.

Different estimators have different capabilities to handle different types of input. If self.get_tag("capability:multivariate") is False, they cannot handle multivariate series, so either n_channels == 1 is true or X is 2D of shape (n_cases, n_timepoints). If self.get_tag("capability:unequal_length") is False, they cannot handle unequal length input. In both situations, a ValueError is raised if X has a characteristic that the estimator does not have the capability to handle.

Returns:
predictions : np.ndarray

1D np.array of float, of shape (n_cases) - predicted class labels, indices correspond to instance indices in X.

predict_proba(X) → ndarray[source]

Predicts class label probabilities for time series in X.

Parameters:
X : np.ndarray or list

Input data, any number of channels, equal length series of shape (n_cases, n_channels, n_timepoints), or 2D np.array (univariate, equal length series) of shape (n_cases, n_timepoints), or list of numpy arrays (any number of channels, unequal length series) of shape [n_cases], each a 2D np.array (n_channels, n_timepoints_i), where n_timepoints_i is the length of series i. Other types are allowed and converted into one of the above.

Different estimators have different capabilities to handle different types of input. If self.get_tag("capability:multivariate") is False, they cannot handle multivariate series, so either n_channels == 1 is true or X is 2D of shape (n_cases, n_timepoints). If self.get_tag("capability:unequal_length") is False, they cannot handle unequal length input. In both situations, a ValueError is raised if X has a characteristic that the estimator does not have the capability to handle.

Returns:
probabilities : np.ndarray

2D array of shape (n_cases, n_classes) - predicted class probabilities. First dimension indices correspond to instance indices in X, second dimension indices correspond to class labels; the (i, j)-th entry is the estimated probability that the i-th instance is of class j.

reset(keep=None)[source]

Reset the object to a clean post-init state.

After a self.reset() call, self is equal or similar in value to type(self)(**self.get_params(deep=False)), assuming no other attributes were kept using keep.

Detailed behaviour:

removes any object attributes, except:

hyper-parameters (arguments of __init__)

object attributes containing double-underscores, i.e., the string "__"

runs __init__ with current values of hyperparameters (result of get_params)

Not affected by the reset are:

object attributes containing double-underscores

class and object methods, class attributes

any attributes specified in the keep argument

Parameters:
keep : None, str, or list of str, default=None

If None, all attributes are removed except hyperparameters. If str, only the attribute with this name is kept. If list of str, only the attributes with these names are kept.

Returns:
self : object

Reference to self.

save_last_model_to_file(file_path='./')[source]

Save the last epoch of the trained deep learning model.

Parameters:
file_path : str, default = "./"

The directory where the model will be saved.

Returns:
None
score(X, y, metric='accuracy', use_proba=False, metric_params=None) → float[source]

Scores predicted labels against ground truth labels on X.

Parameters:
X : np.ndarray or list

Input data, any number of channels, equal length series of shape (n_cases, n_channels, n_timepoints), or 2D np.array (univariate, equal length series) of shape (n_cases, n_timepoints), or list of numpy arrays (any number of channels, unequal length series) of shape [n_cases], each a 2D np.array (n_channels, n_timepoints_i), where n_timepoints_i is the length of series i. Other types are allowed and converted into one of the above.

Different estimators have different capabilities to handle different types of input. If self.get_tag("capability:multivariate") is False, they cannot handle multivariate series, so either n_channels == 1 is true or X is 2D of shape (n_cases, n_timepoints). If self.get_tag("capability:unequal_length") is False, they cannot handle unequal length input. In both situations, a ValueError is raised if X has a characteristic that the estimator does not have the capability to handle.

y : np.ndarray

1D np.array of float or str, of shape (n_cases) - class labels (ground truth), indices corresponding to instance indices in X.

metric : Union[str, callable], default="accuracy"

Defines the scoring metric to test the fit of the model. For supported string arguments, check sklearn.metrics.get_scorer_names.

use_proba : bool, default=False

Whether the scorer works on probability estimates.

metric_params : dict, default=None

Contains parameters to be passed to the scoring function. If None, no parameters are passed.

Returns:
score : float

Accuracy score of predict(X) vs y.

set_params(**params)[source]

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:
**params : dict

Estimator parameters.

Returns:
self : estimator instance

Estimator instance.

set_tags(**tag_dict)[source]

Set dynamic tags to given values.

Parameters:
**tag_dict : dict

Dictionary of tag name and tag value pairs.

Returns:
self : object

Reference to self.

summary()[source]

Summary function to return the losses/metrics for model fit.

Returns:
history : dict or None

Dictionary containing the model's train/validation losses and metrics.