evaluate

evaluate(forecaster, cv, y, X=None, strategy: str = 'refit', scoring: callable | List[callable] | None = None, return_data: bool = False, error_score: str | int | float = nan, backend: str | None = None, compute: bool = True, **kwargs)

Evaluate forecaster using timeseries cross-validation.

Parameters:
forecaster : aeon BaseForecaster descendant

aeon forecaster (concrete BaseForecaster descendant)

cv : aeon BaseSplitter descendant

Splitter defining how to split the data into training and test folds

y : aeon time series container

Target (endogenous) time series used in the evaluation experiment

X : aeon time series container, of same mtype as y

Exogenous time series used in the evaluation experiment

strategy : {“refit”, “update”, “no-update_params”}, optional, default=”refit”

Defines the ingestion mode when the forecaster sees new data as the window expands:
- “refit” = forecaster is refitted to each training window
- “update” = forecaster is updated with the training window data, in the sequence provided
- “no-update_params” = forecaster is fit to the first training window, then re-used without fit or update
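For example, to update the forecaster with each new training window instead of refitting it from scratch (assuming forecaster, y, and cv are constructed as in the examples further below):

>>> results = evaluate(forecaster=forecaster, y=y, cv=cv, strategy="update")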

scoring : Callable, List[Callable], or None, default=None

Function (or list of functions) from aeon.performance_metrics. Each score function takes y_pred and y_test arguments and accepts y_train as a keyword argument. If None, then scoring = mean_absolute_percentage_error is used.

return_data : bool, default=False

If True, returns three additional columns in the DataFrame, whose cells each contain a pd.Series for y_train, y_pred, and y_test respectively.
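For example, with return_data=True the per-fold series can be retrieved directly from the returned DataFrame (assuming forecaster, y, and cv as in the examples further below; column names follow the Returns section):

>>> results = evaluate(forecaster=forecaster, y=y, cv=cv, return_data=True)
>>> y_pred_first_fold = results["y_pred"].iloc[0]  # pd.Series of forecasts for the first test fold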

error_score : “raise” or numeric, default=np.nan

Value to assign to the score if an exception occurs in estimator fitting. If set to “raise”, the exception is raised. If a numeric value is given, FitFailedWarning is raised.

backend : {“dask”, “loky”, “multiprocessing”, “threading”} or None, default=None

Runs parallel evaluate if specified and strategy is set as “refit”.
- “loky”, “multiprocessing” and “threading”: uses joblib Parallel loops
- “dask”: uses dask, requires the dask package in the environment
Recommendation: use “dask” or “loky” for parallel evaluate. “threading” is unlikely to see speed-ups due to the GIL, and the serialization backend (cloudpickle) used by “dask” and “loky” is generally more robust than the standard pickle library used by “multiprocessing”.
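For example, to parallelise evaluation across folds with joblib’s “loky” backend (assuming forecaster, y, and cv as in the examples further below, and strategy=”refit”, the default):

>>> results = evaluate(forecaster=forecaster, y=y, cv=cv, backend="loky")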

compute : bool, default=True

If backend=”dask”, whether the returned DataFrame is computed. If True, returns a pd.DataFrame; otherwise, a dask.dataframe.DataFrame.
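For example, a lazy dask DataFrame can be requested and materialised later (a sketch assuming the dask package is installed, and forecaster, y, and cv as in the examples further below):

>>> results = evaluate(forecaster=forecaster, y=y, cv=cv, backend="dask", compute=False)
>>> results = results.compute()  # materialise the dask.dataframe.DataFrame into a pd.DataFrame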

**kwargs : Keyword arguments

Only relevant if backend is specified. Additional kwargs are passed into joblib.Parallel if backend is “loky”, “multiprocessing” or “threading”.
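For example, joblib’s n_jobs keyword can be forwarded through **kwargs to control the number of worker processes (n_jobs is a joblib.Parallel argument, not an evaluate parameter; forecaster, y, and cv as in the examples further below):

>>> results = evaluate(forecaster=forecaster, y=y, cv=cv, backend="loky", n_jobs=-1)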

Returns:
results : pd.DataFrame or dask.dataframe.DataFrame

DataFrame that contains several columns with information regarding each refit/update and prediction of the forecaster. The row index is the splitter index of the train/test fold in cv; entries in the i-th row are for the i-th train/test split in cv. Columns are as follows:

- test_{scoring.__name__}: (float) Model performance score. If scoring is a list, there is a column named test_{scoring.__name__} for each scorer.
- fit_time: (float) Time in seconds for fit or update on the train fold.
- pred_time: (float) Time in seconds to predict from the fitted estimator.
- len_train_window: (int) Length of the train window.
- cutoff: (int, pd.Timestamp, pd.Period) cutoff = last time index in the train fold.
- y_train: (pd.Series) Only present if return_data=True; train fold of the i-th split in cv, used to fit/update the forecaster.
- y_pred: (pd.Series) Only present if return_data=True; forecasts from the fitted forecaster for the i-th test fold indices of cv.
- y_test: (pd.Series) Only present if return_data=True; test fold of the i-th split in cv, used to compute the metric.

>>> from aeon.datasets import load_airline
>>> from aeon.forecasting.model_evaluation import evaluate
>>> from aeon.forecasting.model_selection import ExpandingWindowSplitter
>>> from aeon.forecasting.naive import NaiveForecaster
>>> y = load_airline()
>>> forecaster = NaiveForecaster(strategy="mean", sp=12)
>>> cv = ExpandingWindowSplitter(initial_window=12, step_length=3,
... fh=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])
>>> results = evaluate(forecaster=forecaster, y=y, cv=cv)
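The returned DataFrame can then be inspected as described in the Returns section; with the default scoring, the score column name follows the test_{scoring.__name__} convention:

>>> scores = results[["test_mean_absolute_percentage_error", "fit_time", "len_train_window"]]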

Optionally, users may select other metrics via the scoring argument. These can be forecast metrics of any kind, e.g., point forecast metrics, interval metrics, or quantile forecast metrics (see https://www.aeon-toolkit.org/en/stable/api_reference/performance_metrics.html?highlight=metrics). To evaluate estimators using a specific metric, provide it to the scoring argument.

>>> from aeon.performance_metrics.forecasting import mean_absolute_error as loss
>>> results = evaluate(forecaster=forecaster, y=y, cv=cv, scoring=loss)

Optionally, users can provide a list of metrics to the scoring argument.

>>> from aeon.performance_metrics.forecasting import mean_absolute_error as loss
>>> from aeon.performance_metrics.forecasting import mean_squared_error as loss2
>>> results = evaluate(
...     forecaster=forecaster,
...     y=y,
...     cv=cv,
...     scoring=[loss, loss2],
... )