relative_loss

relative_loss(y_true, y_pred, relative_loss_function=mean_absolute_error, horizon_weight=None, multioutput='uniform_average', **kwargs)

Relative loss of forecast versus benchmark forecast for a given metric.

Applies a forecasting performance metric to a set of forecasts and benchmark forecasts and reports the ratio of the metric from the forecasts to the metric from the benchmark forecasts. Relative loss output is non-negative floating point. The best value is 0.0.

If the score of the benchmark predictions for a given loss function is zero, then a large value is returned.

This function allows the calculation of scale-free relative loss metrics. Unlike mean absolute scaled error (MASE), this function calculates the scale-free metric relative to a defined loss function on a benchmark method, instead of the in-sample training data. Like MASE, metrics created using this function can be used to compare forecast methods on a single series, and also to compare forecast accuracy between series.

This is useful when a scale-free comparison is beneficial but the training data used to generate some (or all) predictions is unknown, such as when comparing the loss of third-party forecasts or surveys of professional forecasters.

Only metrics that do not require y_train are currently supported.
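
Conceptually, the computation is a guarded ratio of two evaluations of the same loss function. A minimal sketch of the idea, not the library's implementation; the function name and the eps guard are assumptions standing in for the "large value" behaviour described above:

    import numpy as np
    from aeon.performance_metrics.forecasting import mean_absolute_error

    def relative_loss_sketch(y_true, y_pred, y_pred_benchmark,
                             loss=mean_absolute_error):
        # Evaluate the same metric on the forecasts and on the benchmark.
        loss_pred = loss(y_true, y_pred)
        loss_benchmark = loss(y_true, y_pred_benchmark)
        # Guard: if the benchmark loss is zero, divide by a tiny eps
        # instead, which yields the large value described above
        # (assumption; the exact guard used by aeon may differ).
        eps = np.finfo(np.float64).eps
        return loss_pred / np.maximum(loss_benchmark, eps)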

Parameters:
y_true : pd.Series, pd.DataFrame or np.array of shape (fh,) or (fh, n_outputs), where fh is the forecasting horizon

Ground truth (correct) target values.

y_pred : pd.Series, pd.DataFrame or np.array of shape (fh,) or (fh, n_outputs), where fh is the forecasting horizon

Forecasted values.

y_pred_benchmark : pd.Series, pd.DataFrame or np.array of shape (fh,) or (fh, n_outputs), where fh is the forecasting horizon, default=None

Forecasted values from benchmark method.

relative_loss_function : function, default=mean_absolute_error

Function to use in calculating relative loss. The function must comply with the API interface of aeon forecasting performance metrics. Metrics requiring y_train or y_pred_benchmark are not supported.

horizon_weight : array-like of shape (fh,), default=None

Forecast horizon weights.

multioutput : {'raw_values', 'uniform_average'} or array-like of shape (n_outputs,), default='uniform_average'

Defines how to aggregate the metric for multivariate (multioutput) data. If array-like, values are used as weights to average the errors. If 'raw_values', returns a full set of errors in case of multioutput input. If 'uniform_average', errors of all outputs are averaged with uniform weight.

Returns:
relative_loss : float

Loss for a method relative to loss for a benchmark method for a given loss metric. If multioutput is 'raw_values', then relative loss is returned for each output separately. If multioutput is 'uniform_average' or an ndarray of weights, the loss of each method is first aggregated across outputs (uniformly or with the given weights) and the ratio of the aggregated losses is returned.
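
In other words, with weights the aggregation happens inside the loss function before the ratio is taken. A short check of this reading, reusing the multivariate values from the Examples below and assuming aeon's mean_absolute_error accepts multioutput weights as its API documents:

    import numpy as np
    from aeon.performance_metrics.forecasting import mean_absolute_error, relative_loss

    y_true = np.array([[0.5, 1], [-1, 1], [7, -6]])
    y_pred = np.array([[0, 2], [-1, 2], [8, -5]])
    y_pred_benchmark = y_pred * 1.1
    w = [0.3, 0.7]

    # Weighted loss of each method is computed first, then the ratio:
    num = mean_absolute_error(y_true, y_pred, multioutput=w)
    den = mean_absolute_error(y_true, y_pred_benchmark, multioutput=w)
    assert np.isclose(
        relative_loss(y_true, y_pred, y_pred_benchmark=y_pred_benchmark,
                      multioutput=w),
        num / den,
    )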

References

Hyndman, R. J. and Koehler, A. B. (2006). "Another look at measures of forecast accuracy", International Journal of Forecasting, Volume 22, Issue 4.

Examples

>>> import numpy as np
>>> from aeon.performance_metrics.forecasting import relative_loss
>>> from aeon.performance_metrics.forecasting import mean_squared_error
>>> y_true = np.array([3, -0.5, 2, 7, 2])
>>> y_pred = np.array([2.5, 0.0, 2, 8, 1.25])
>>> y_pred_benchmark = y_pred*1.1
>>> relative_loss(y_true, y_pred, y_pred_benchmark=y_pred_benchmark)
0.8148148148148147
>>> relative_loss(y_true, y_pred, y_pred_benchmark=y_pred_benchmark,
...               relative_loss_function=mean_squared_error)
0.5178095088655261
>>> y_true = np.array([[0.5, 1], [-1, 1], [7, -6]])
>>> y_pred = np.array([[0, 2], [-1, 2], [8, -5]])
>>> y_pred_benchmark = y_pred*1.1
>>> relative_loss(y_true, y_pred, y_pred_benchmark=y_pred_benchmark)
0.8490566037735847
>>> relative_loss(y_true, y_pred, y_pred_benchmark=y_pred_benchmark,
...               multioutput='raw_values')
array([0.625     , 1.03448276])
>>> relative_loss(y_true, y_pred, y_pred_benchmark=y_pred_benchmark,
...               multioutput=[0.3, 0.7])
0.927272727272727
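
Because the metric is a ratio, the uniform-average result above can be reproduced from two direct calls to the underlying loss function. A consistency check continuing the session above, assuming mean_absolute_error (the documented default relative_loss_function) is importable from the same module as mean_squared_error:

>>> from aeon.performance_metrics.forecasting import mean_absolute_error
>>> ratio = mean_absolute_error(y_true, y_pred) / mean_absolute_error(
...     y_true, y_pred_benchmark)
>>> bool(np.isclose(
...     relative_loss(y_true, y_pred, y_pred_benchmark=y_pred_benchmark),
...     ratio))
True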