median_relative_absolute_error
- median_relative_absolute_error(y_true, y_pred, horizon_weight=None, multioutput='uniform_average', **kwargs)
Median relative absolute error (MdRAE).
In relative error metrics, relative errors are first calculated by scaling (dividing) the individual forecast errors by the error calculated using a benchmark method at the same index position. If the error of the benchmark method is zero then a large value is returned.
MdRAE applies median absolute error (MdAE) to the resulting relative errors.
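For intuition, the following is a minimal NumPy sketch of the computation described above, not the aeon implementation; the helper name and the eps guard applied when the benchmark error is zero are assumptions for illustration.

import numpy as np

def mdrae_sketch(y_true, y_pred, y_pred_benchmark, eps=1e-8):
    # Benchmark errors at the same index positions as the forecast errors.
    benchmark_errors = y_true - y_pred_benchmark
    # Guard against division by zero so a zero benchmark error yields a large value,
    # as described above; the exact fallback used by aeon is an assumption here.
    benchmark_errors = np.where(np.abs(benchmark_errors) < eps, eps, benchmark_errors)
    # Relative errors: forecast errors scaled by the benchmark errors.
    relative_errors = (y_true - y_pred) / benchmark_errors
    # MdRAE = median absolute error (MdAE) applied to the relative errors.
    return np.median(np.abs(relative_errors))

y_true = np.array([3, -0.5, 2, 7, 2])
y_pred = np.array([2.5, 0.0, 2, 8, 1.25])
print(mdrae_sketch(y_true, y_pred, y_pred * 1.1))  # 1.0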
- Parameters:
- y_true : pd.Series, pd.DataFrame or np.array of shape (fh,) or (fh, n_outputs) where fh is the forecasting horizon
Ground truth (correct) target values.
- y_pred : pd.Series, pd.DataFrame or np.array of shape (fh,) or (fh, n_outputs) where fh is the forecasting horizon
Forecasted values.
- y_pred_benchmark : pd.Series, pd.DataFrame or np.array of shape (fh,) or (fh, n_outputs) where fh is the forecasting horizon, default=None
Forecasted values from benchmark method.
- horizon_weight : array-like of shape (fh,), default=None
Forecast horizon weights.
- multioutput : {‘raw_values’, ‘uniform_average’} or array-like of shape (n_outputs,), default=’uniform_average’
Defines how to aggregate metric for multivariate (multioutput) data. If array-like, values used as weights to average the errors. If ‘raw_values’, returns a full set of errors in case of multioutput input. If ‘uniform_average’, errors of all outputs are averaged with uniform weight.
- Returns:
- loss : float
MdRAE loss. If multioutput is ‘raw_values’, then MdRAE is returned for each output separately. If multioutput is ‘uniform_average’ or an ndarray of weights, then the weighted average MdRAE of all output errors is returned.
References
Hyndman, R. J and Koehler, A. B. (2006). “Another look at measures of forecast accuracy”, International Journal of Forecasting, Volume 22, Issue 4.
Examples
>>> import numpy as np
>>> from aeon.performance_metrics.forecasting import median_relative_absolute_error
>>> y_true = np.array([3, -0.5, 2, 7, 2])
>>> y_pred = np.array([2.5, 0.0, 2, 8, 1.25])
>>> y_pred_benchmark = y_pred*1.1
>>> median_relative_absolute_error(y_true, y_pred, y_pred_benchmark=y_pred_benchmark)
1.0
>>> y_true = np.array([[0.5, 1], [-1, 1], [7, -6]])
>>> y_pred = np.array([[0, 2], [-1, 2], [8, -5]])
>>> y_pred_benchmark = y_pred*1.1
>>> median_relative_absolute_error(y_true, y_pred, y_pred_benchmark=y_pred_benchmark)
0.6944444444444443
>>> median_relative_absolute_error(y_true, y_pred, y_pred_benchmark=y_pred_benchmark, multioutput='raw_values')
array([0.55555556, 0.83333333])
>>> median_relative_absolute_error(y_true, y_pred, y_pred_benchmark=y_pred_benchmark, multioutput=[0.3, 0.7])
0.7499999999999999
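The multioutput aggregation in the last two calls can be reproduced from the per-output values; using np.average mirrors the scikit-learn convention and is an assumption about aeon's internals rather than a statement of its implementation.

import numpy as np

# Per-output MdRAE values, as returned with multioutput='raw_values' above.
raw_values = np.array([0.55555556, 0.83333333])

# 'uniform_average': unweighted mean over outputs.
print(raw_values.mean())  # approx. 0.6944

# Array-like multioutput: weighted average over outputs.
print(np.average(raw_values, weights=[0.3, 0.7]))  # approx. 0.75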