The regression evaluation statistics calculated by this function belong
to two different groups of measures: absolute and relative. The former
include "mae", "mse", and "rmse", which are calculated as follows:

"mae": mean absolute error, calculated as sum(|t_i - p_i|)/N, where the
t's are the true values, the p's are the predictions, and N is the size
of both vectors.

"mse": mean squared error, calculated as sum( (t_i - p_i)^2 )/N.

"rmse": root mean squared error, calculated as sqrt(mse).
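As a minimal illustration of these three formulas (a Python sketch, not the function's actual implementation; the helper name absolute_errors is hypothetical):

```python
import math

def absolute_errors(trues, preds):
    # Hypothetical helper: computes the three absolute error statistics
    # described above. trues and preds must have the same length N.
    n = len(trues)
    mae = sum(abs(t - p) for t, p in zip(trues, preds)) / n
    mse = sum((t - p) ** 2 for t, p in zip(trues, preds)) / n
    rmse = math.sqrt(mse)
    return {"mae": mae, "mse": mse, "rmse": rmse}
```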
The remaining measures ("mape", "nmse", and "nmae") are relative
measures, the latter two comparing the performance of the model against
a baseline. They are unit-less measures with non-negative values. For
"nmse" and "nmae" the scores are expected to fall in the interval
[0,1], though they can occasionally exceed 1, which means that your
model is performing worse than the baseline model. The baseline used in
our implementation is a constant model that always predicts the average
value of the target variable, estimated from the values of this
variable on the training data (the data used to obtain the model that
generated the predictions), which should be given in the parameter
train.y. The relative error measure "mape" does not require a baseline.
It simply calculates the average absolute percentage difference between
the true values and the predictions.
These measures are calculated as follows:

"mape": sum(|(t_i - p_i) / t_i|)/N

"nmse": sum( (t_i - p_i)^2 ) / sum( (t_i - AVG(Y))^2 ), where AVG(Y)
is the average of the values provided in the vector train.y

"nmae": sum(|t_i - p_i|) / sum(|t_i - AVG(Y)|)
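The relative measures can be sketched in the same style (again an illustrative Python sketch with a hypothetical helper name, not the function's actual implementation; train_y plays the role of the train.y parameter, from which the baseline average is estimated):

```python
def relative_errors(trues, preds, train_y):
    # Hypothetical helper: computes the three relative error statistics
    # described above. The baseline is a constant model predicting the
    # average of the training-data target values (train_y).
    avg = sum(train_y) / len(train_y)
    n = len(trues)
    mape = sum(abs((t - p) / t) for t, p in zip(trues, preds)) / n
    nmse = (sum((t - p) ** 2 for t, p in zip(trues, preds))
            / sum((t - avg) ** 2 for t in trues))
    nmae = (sum(abs(t - p) for t, p in zip(trues, preds))
            / sum(abs(t - avg) for t in trues))
    return {"mape": mape, "nmse": nmse, "nmae": nmae}
```

Note that "mape" is undefined when some true value t_i is zero, and "nmse"/"nmae" are undefined when all true values equal the baseline average; a real implementation would need to guard against these cases.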