evaluate(forecaster, cv, y, X=None, strategy: str = 'refit', scoring: Optional[Union[callable, List[callable]]] = None, return_data: bool = False, error_score: Union[str, int, float] = nan, backend: Optional[str] = None, compute: bool = True, **kwargs)

Evaluate forecaster using timeseries cross-validation.

forecaster : aeon BaseForecaster descendant

aeon forecaster (concrete BaseForecaster descendant)

cv : aeon BaseSplitter descendant

Splitter defining how to split the data into training and test folds

y : aeon time series container

Target (endogenous) time series used in the evaluation experiment

X : aeon time series container, of same mtype as y

Exogenous time series used in the evaluation experiment

strategy : {“refit”, “update”, “no-update_params”}, optional, default=”refit”

Defines the ingestion mode when the forecaster sees new data as the window expands:

  • “refit” = forecaster is refitted to each training window

  • “update” = forecaster is updated with training window data, in the sequence provided

  • “no-update_params” = forecaster is fit to the first training window, then re-used without fit or update
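
The three strategies can be illustrated with a minimal sketch. The `ToyMeanForecaster` and `toy_evaluate` below are hypothetical stand-ins (not the aeon API) that mimic how `strategy` controls when fitting happens across expanding training windows:

```python
class ToyMeanForecaster:
    """Toy model: predicts the mean of all data seen so far."""

    def fit(self, y):
        self._seen = list(y)
        return self

    def update(self, y_new):
        self._seen.extend(y_new)
        return self

    def predict(self):
        return sum(self._seen) / len(self._seen)


def toy_evaluate(forecaster, folds, strategy="refit"):
    """Mimic how `strategy` controls refitting across expanding train folds."""
    preds = []
    for i, train in enumerate(folds):
        if strategy == "refit":
            forecaster.fit(train)              # refit on each full training window
        elif strategy == "update":
            if i == 0:
                forecaster.fit(train)
            else:
                new_points = train[len(folds[i - 1]):]
                forecaster.update(new_points)  # ingest only the newly revealed data
        elif strategy == "no-update_params":
            if i == 0:
                forecaster.fit(train)          # fit once, then reuse unchanged
        preds.append(forecaster.predict())
    return preds
```

With a toy mean forecaster, “refit” and “update” give the same predictions (the update path sees the same cumulative data), while “no-update_params” keeps the first window's fit throughout.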

scoring : subclass of aeon.performance_metrics.BaseMetric or list of same, default=None

Used to get a score function that takes y_pred and y_test arguments and accepts y_train as a keyword argument. If None, uses scoring = MeanAbsolutePercentageError().

return_data : bool, default=False

If True, returns three additional columns in the DataFrame; each cell of these columns contains a pd.Series for y_train, y_pred, or y_test respectively.

error_score : “raise” or numeric, default=np.nan

Value to assign to the score if an exception occurs during estimator fitting. If set to “raise”, the exception is raised; if a numeric value is given, a FitFailedWarning is raised instead.
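
A minimal sketch of the error_score contract (an assumption for illustration, not aeon's internal code; `score_fold` is a hypothetical helper):

```python
import math
import warnings


def score_fold(fit_fn, error_score=math.nan):
    """Run one fold's fit/score step, handling failures per error_score."""
    try:
        return fit_fn()
    except Exception:
        if error_score == "raise":
            raise                              # propagate the original exception
        # aeon raises a FitFailedWarning here; a plain warning stands in for it
        warnings.warn("Fit failed; assigning error_score to this fold.")
        return error_score
```

A failing fold then yields NaN (or the chosen numeric value) in the results instead of aborting the whole evaluation, unless error_score=”raise”.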

backend : {“dask”, “loky”, “multiprocessing”, “threading”}, default=None

Runs parallel evaluate if specified and strategy is set to “refit”.

  • “loky”, “multiprocessing” and “threading”: uses joblib Parallel loops

  • “dask”: uses dask, requires the dask package in the environment

Recommendation: use “dask” or “loky” for parallel evaluate. “threading” is unlikely to see speed-ups due to the GIL, and the serialization backend (cloudpickle) for “dask” and “loky” is generally more robust than the standard pickle library used in “multiprocessing”.

compute : bool, default=True

If backend=”dask”, determines whether the returned DataFrame is computed. If True, returns a pd.DataFrame; otherwise, returns a dask.dataframe.DataFrame.

**kwargs : Keyword arguments

Only relevant if backend is specified. Additional kwargs are passed into joblib.Parallel if backend is “loky”, “multiprocessing” or “threading”.

results : pd.DataFrame or dask.dataframe.DataFrame

DataFrame that contains several columns with information regarding each refit/update and prediction of the forecaster. Row index is the splitter index of the train/test fold in cv; entries in the i-th row are for the i-th train/test split in cv. Columns are as follows:

  • test_{scoring.name}: (float) Model performance score. If scoring is a list, there is a column named test_{scoring.name} for each scorer.

  • fit_time: (float) Time in sec for fit or update on train fold.

  • pred_time: (float) Time in sec to predict from fitted estimator.

  • len_train_window: (int) Length of train window.

  • cutoff: (int, pd.Timestamp, pd.Period) cutoff = last time index in train fold.

  • y_train: (pd.Series) Only present if return_data=True. Train fold of the i-th split in cv, used to fit/update the forecaster.

  • y_pred: (pd.Series) Only present if return_data=True. Forecasts from the fitted forecaster for the i-th test fold indices of cv.

  • y_test: (pd.Series) Only present if return_data=True. Test fold of the i-th split in cv, used to compute the metric.
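
To make the row/column layout concrete, here is a pure-pandas sketch (an illustration, not aeon's implementation; `sketch_evaluate` and the naive last-value forecast are assumptions) that builds a results-like frame over expanding windows:

```python
import time

import pandas as pd


def sketch_evaluate(y, initial_window=3, fh=1):
    """One row per expanding-window split, mirroring the results columns."""
    rows = []
    for end in range(initial_window, len(y) - fh + 1):
        train, test = y.iloc[:end], y.iloc[end:end + fh]
        start = time.perf_counter()
        # naive forecast: repeat the last observed training value
        y_pred = pd.Series(train.iloc[-1], index=test.index)
        pred_time = time.perf_counter() - start
        rows.append({
            "test_MeanAbsoluteError": (y_pred - test).abs().mean(),
            "fit_time": 0.0,               # the naive forecast has no fit step
            "pred_time": pred_time,
            "len_train_window": len(train),
            "cutoff": train.index[-1],     # last time index in the train fold
        })
    return pd.DataFrame(rows)


y = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])
results = sketch_evaluate(y)
```

Each row corresponds to one train/test split, with the metric column named after the scorer, as in the real evaluate output.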

>>> from aeon.datasets import load_airline
>>> from aeon.forecasting.model_evaluation import evaluate
>>> from aeon.forecasting.model_selection import ExpandingWindowSplitter
>>> from aeon.forecasting.naive import NaiveForecaster
>>> y = load_airline()
>>> forecaster = NaiveForecaster(strategy="mean", sp=12)
>>> cv = ExpandingWindowSplitter(initial_window=12, step_length=3,
... fh=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])
>>> results = evaluate(forecaster=forecaster, y=y, cv=cv)

Optionally, users may select other metrics via the scoring argument. These can be forecast metrics of any kind, i.e., point forecast metrics, interval metrics, or quantile forecast metrics (see https://www.aeon-toolkit.org/en/stable/api_reference/performance_metrics.html?highlight=metrics). To evaluate estimators using a specific metric, provide it to the scoring argument.

>>> from aeon.performance_metrics.forecasting import MeanAbsoluteError
>>> loss = MeanAbsoluteError()
>>> results = evaluate(forecaster=forecaster, y=y, cv=cv, scoring=loss)

Optionally, users can provide a list of metrics to the scoring argument.

>>> from aeon.performance_metrics.forecasting import MeanSquaredError
>>> results = evaluate(
...     forecaster=forecaster,
...     y=y,
...     cv=cv,
...     scoring=[MeanSquaredError(square_root=True), MeanAbsoluteError()],
... )

An example of an interval metric is the PinballLoss. It can be used with all probabilistic forecasters.

>>> from aeon.forecasting.naive import NaiveVariance
>>> from aeon.performance_metrics.forecasting.probabilistic import PinballLoss
>>> loss = PinballLoss()
>>> forecaster = NaiveForecaster(strategy="drift")
>>> results = evaluate(forecaster=NaiveVariance(forecaster),
... y=y, cv=cv, scoring=loss)