mean_absolute_percentage_error

mean_absolute_percentage_error(y_true, y_pred, horizon_weight=None, multioutput='uniform_average', symmetric=False, **kwargs)

Mean absolute percentage error (MAPE) or symmetric version.

If symmetric is False, calculates MAPE; if symmetric is True, calculates the symmetric mean absolute percentage error (sMAPE). Both MAPE and sMAPE output non-negative floating point values. The best value is 0.0.

sMAPE is measured as a percentage error relative to the test data. Because it takes the absolute value of the percentage forecast error rather than squaring it, it penalizes large errors less than MSPE, RMSPE, MdSPE or RMdSPE.

There is no limit on how large the error can be, particularly when y_true values are close to zero. In such cases the function returns a large value instead of inf.
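The behaviour described above can be sketched with NumPy. This is a minimal illustration assuming the conventional MAPE/sMAPE definitions and an epsilon clamp on the denominator to avoid division by zero; `mape_sketch` is a hypothetical helper, not the library function, and the actual implementation may differ in detail:

```python
import numpy as np

def mape_sketch(y_true, y_pred, symmetric=False):
    """Minimal sketch of MAPE/sMAPE under the conventional definitions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    if symmetric:
        # sMAPE: absolute error relative to the mean absolute magnitude
        # of the true and predicted values.
        denom = (np.abs(y_true) + np.abs(y_pred)) / 2
    else:
        # MAPE: absolute error relative to the true values.
        denom = np.abs(y_true)
    # Clamp the denominator to machine epsilon (an assumption here) so that
    # near-zero true values give a large finite result rather than inf.
    return np.mean(np.abs(y_true - y_pred) / np.maximum(denom, np.finfo(float).eps))

y_true = np.array([3, -0.5, 2, 7, 2])
y_pred = np.array([2.5, 0.0, 2, 8, 1.25])
print(mape_sketch(y_true, y_pred))                  # ≈ 0.3369
print(mape_sketch(y_true, y_pred, symmetric=True))  # ≈ 0.5553
```

These values reproduce the doctest results in the Examples section, which suggests the conventional definitions match the metric's behaviour on this data.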

Parameters:
y_true : pd.Series, pd.DataFrame or np.array of shape (fh,) or (fh, n_outputs), where fh is the forecasting horizon

Ground truth (correct) target values.

y_pred : pd.Series, pd.DataFrame or np.array of shape (fh,) or (fh, n_outputs), where fh is the forecasting horizon

Forecasted values.

horizon_weight : array-like of shape (fh,), default=None

Forecast horizon weights.

multioutput : {'raw_values', 'uniform_average'} or array-like of shape (n_outputs,), default='uniform_average'

Defines how to aggregate the metric for multivariate (multioutput) data. If 'raw_values', returns a full set of errors, one per output. If 'uniform_average', errors of all outputs are averaged with uniform weight. If array-like, its values are used as weights to average the errors.

symmetric : bool, default=False

Calculates the symmetric version of the metric (sMAPE) if True.

Returns:
loss : float or ndarray of floats

MAPE or sMAPE loss. If multioutput is ‘raw_values’, then MAPE or sMAPE is returned for each output separately. If multioutput is ‘uniform_average’ or an ndarray of weights, then the weighted average MAPE or sMAPE of all output errors is returned.
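The multioutput aggregation modes can be illustrated directly, assuming the per-output errors have already been computed (the values below are taken from the Examples section):

```python
import numpy as np

# Per-output MAPE values, as returned with multioutput='raw_values'.
raw_errors = np.array([0.38095238, 0.72222222])

# multioutput='uniform_average': plain mean over outputs.
uniform = raw_errors.mean()

# multioutput=[0.3, 0.7]: weighted average with the given output weights.
weighted = np.average(raw_errors, weights=[0.3, 0.7])

print(uniform)   # ≈ 0.5516
print(weighted)  # ≈ 0.6198
```

Both results agree with the corresponding doctest outputs in the Examples section.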

References

Hyndman, R. J and Koehler, A. B. (2006). “Another look at measures of forecast accuracy”, International Journal of Forecasting, Volume 22, Issue 4.

Examples

>>> import numpy as np
>>> from aeon.performance_metrics.forecasting import mean_absolute_percentage_error
>>> y_true = np.array([3, -0.5, 2, 7, 2])
>>> y_pred = np.array([2.5, 0.0, 2, 8, 1.25])
>>> mean_absolute_percentage_error(y_true, y_pred, symmetric=False)
0.33690476190476193
>>> mean_absolute_percentage_error(y_true, y_pred, symmetric=True)
0.5553379953379953
>>> y_true = np.array([[0.5, 1], [-1, 1], [7, -6]])
>>> y_pred = np.array([[0, 2], [-1, 2], [8, -5]])
>>> mean_absolute_percentage_error(y_true, y_pred, symmetric=False)
0.5515873015873016
>>> mean_absolute_percentage_error(y_true, y_pred, symmetric=True)
0.6080808080808081
>>> mean_absolute_percentage_error(y_true, y_pred, multioutput='raw_values', symmetric=False)
array([0.38095238, 0.72222222])
>>> mean_absolute_percentage_error(y_true, y_pred, multioutput='raw_values', symmetric=True)
array([0.71111111, 0.50505051])
>>> mean_absolute_percentage_error(y_true, y_pred, multioutput=[0.3, 0.7], symmetric=False)
0.6198412698412699
>>> mean_absolute_percentage_error(y_true, y_pred, multioutput=[0.3, 0.7], symmetric=True)
0.5668686868686869