mean_absolute_scaled_error

mean_absolute_scaled_error(y_true, y_pred, sp=1, horizon_weight=None, multioutput='uniform_average', **kwargs)

Mean absolute scaled error (MASE).

MASE output is non-negative floating point. The best value is 0.0.

Like other scaled performance metrics, this scale-free error metric can be used to compare forecast methods on a single series and also to compare forecast accuracy between series.

This metric is well suited to intermittent-demand series because it does not give infinite or undefined values unless the training data is a flat time series, in which case the function returns a large value instead of inf.

Works with multioutput (multivariate) time series data with homogeneous seasonal periodicity.
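As a definitional sketch (not the library implementation), MASE divides the forecast MAE by the in-sample MAE of the sp-lag naive forecast on the training data:

```python
import numpy as np

def mase_sketch(y_true, y_pred, y_train, sp=1):
    # MAE of the seasonal naive forecast on the training series:
    # each point is "predicted" by the value sp steps earlier.
    naive_mae = np.mean(np.abs(y_train[sp:] - y_train[:-sp]))
    # Forecast MAE, scaled by the naive in-sample MAE.
    return np.mean(np.abs(y_true - y_pred)) / naive_mae

y_train = np.array([5, 0.5, 4, 6, 3, 5, 2])
y_true = np.array([3, -0.5, 2, 7, 2])
y_pred = np.array([2.5, 0.0, 2, 8, 1.25])
print(mase_sketch(y_true, y_pred, y_train))  # ≈ 0.1833
```

A MASE below 1 means the forecast beats the in-sample one-step (or sp-step) naive forecast on average; above 1, it does worse.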

Parameters:
y_true : pd.Series, pd.DataFrame or np.array of shape (fh,) or (fh, n_outputs), where fh is the forecasting horizon

Ground truth (correct) target values.

y_pred : pd.Series, pd.DataFrame or np.array of shape (fh,) or (fh, n_outputs), where fh is the forecasting horizon

Forecasted values.

y_train : pd.Series, pd.DataFrame or np.array of shape (n_timepoints,) or (n_timepoints, n_outputs), default=None

Observed training values.

sp : int, default=1

Seasonal periodicity of training data.

horizon_weight : array-like of shape (fh,), default=None

Forecast horizon weights.

multioutput : {'raw_values', 'uniform_average'} or array-like of shape (n_outputs,), default='uniform_average'

Defines how to aggregate the metric for multivariate (multioutput) data. If array-like, the values are used as weights for averaging the per-output errors. If 'raw_values', a full set of errors is returned in case of multioutput input. If 'uniform_average', errors of all outputs are averaged with uniform weight.

Returns:
loss : float or ndarray of floats

MASE loss. If multioutput is ‘raw_values’, then MASE is returned for each output separately. If multioutput is ‘uniform_average’ or an ndarray of weights, then the weighted average MASE of all output errors is returned.
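When multioutput is an array of weights, the values in the examples below are consistent with applying the weights to the per-output forecast MAE and the per-output naive MAE separately, then taking the ratio of the two weighted averages, rather than averaging the per-output MASE values. A NumPy sketch of that aggregation (an inference from the documented example values, not the library source):

```python
import numpy as np

y_train = np.array([[0.5, 1], [-1, 1], [7, -6]])
y_true = np.array([[0.5, 1], [-1, 1], [7, -6]])
y_pred = np.array([[0, 2], [-1, 2], [8, -5]])
sp, w = 1, np.array([0.3, 0.7])

# Per-output forecast MAE (numerator) and naive in-sample MAE (denominator).
num = np.mean(np.abs(y_true - y_pred), axis=0)
den = np.mean(np.abs(y_train[sp:] - y_train[:-sp]), axis=0)

print(num / den)  # per-output MASE, as with multioutput='raw_values'
print(np.average(num, weights=w) / np.average(den, weights=w))  # ≈ 0.21935
```

Note that this ratio of weighted averages (≈ 0.21935) differs from a plain weighted average of the per-output MASE values (≈ 0.23158).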

References

Hyndman, R. J. and Koehler, A. B. (2006). “Another look at measures of forecast accuracy”, International Journal of Forecasting, Volume 22, Issue 4.

Hyndman, R. J. (2006). “Another look at forecast accuracy metrics for intermittent demand”, Foresight, Issue 4.

Makridakis, S., Spiliotis, E. and Assimakopoulos, V. (2020). “The M4 Competition: 100,000 time series and 61 forecasting methods”, International Journal of Forecasting, Volume 36, Issue 1.

Examples

>>> import numpy as np
>>> from aeon.performance_metrics.forecasting import mean_absolute_scaled_error
>>> y_train = np.array([5, 0.5, 4, 6, 3, 5, 2])
>>> y_true = np.array([3, -0.5, 2, 7, 2])
>>> y_pred = np.array([2.5, 0.0, 2, 8, 1.25])
>>> mean_absolute_scaled_error(y_true, y_pred, y_train=y_train)
0.18333333333333335
>>> y_train = np.array([[0.5, 1], [-1, 1], [7, -6]])
>>> y_true = np.array([[0.5, 1], [-1, 1], [7, -6]])
>>> y_pred = np.array([[0, 2], [-1, 2], [8, -5]])
>>> mean_absolute_scaled_error(y_true, y_pred, y_train=y_train)
0.18181818181818182
>>> mean_absolute_scaled_error(y_true, y_pred, y_train=y_train,
...     multioutput='raw_values')
array([0.10526316, 0.28571429])
>>> mean_absolute_scaled_error(y_true, y_pred, y_train=y_train,
...     multioutput=[0.3, 0.7])
0.21935483870967742