
greybox (version 0.3.0)

rmc: RMC test

Description

RMC stands for "Regression for Methods Comparison". This is a parametric test for the comparison of means of several distributions. It is a parametric counterpart of the nemenyi / MCB test (Demsar, 2006) and uses asymptotic properties of regression models. It relies on distributional assumptions about the provided data. For instance, if the mean forecast errors are used, then it is safe to assume that the regression model constructed on them will have normally distributed residuals.

Usage

rmc(data, distribution = c("norm", "fnorm", "chisq"), level = 0.95,
  sort = TRUE, style = c("mcb", "lines"), select = NULL, plot = TRUE,
  ...)

Arguments

data

Matrix or data frame with observations in rows and variables in columns.

distribution

Type of the distribution to use. If the data consists of plain forecast errors, then "norm" is appropriate, leading to a simple Gaussian linear regression. "fnorm" leads to an alm model with the folded normal distribution. Finally, "chisq" leads to an alm model with the Chi-squared distribution.

level

The width of the confidence interval. Default is 0.95.

sort

If TRUE, the function sorts the final mean values. If plots are requested (plot=TRUE), then this is forced to TRUE.

style

What style of plot to use after the calculations. This can be either the "MCB" style or the "Vertical lines" one.

select

What column of data to highlight on the plot. If NULL, then the method with the lowest value is selected.

plot

If TRUE, then the graph is produced after the calculations. You can also call the plot method on the produced object in order to get the same effect.

...

Other parameters passed to the plot function.

Value

If plot=TRUE, then the function plots the results after all the calculations. In case of distribution="norm", the closer to zero the intervals are, the better the model performs. When distribution="fnorm" or distribution="chisq", the smaller the values, the better.

Function returns a list of a class "rmc", which contains the following variables:

  • mean: Mean values for each method.

  • interval: Confidence intervals for each method.

  • p.value: p-value for the test of the significance of the model. In case of distribution="norm" an F-test is used; otherwise a Chi-squared test is used.

  • importance: The weight of the estimated model in comparison with the model with the constant only. 0 means that the constant is better, 1 means that the estimated model is the best.

  • level: Significance level.

  • model: The lm model produced for the calculation of the intervals.

  • style: Style of the plot to produce.

  • select: The selected variable to highlight.

Details

The test constructs the regression model of the kind:

y = b' X + e,

where y is the vector of the provided data (as.vector(data)), X is the matrix of dummy variables for each column of the data (forecasting method), b is the vector of coefficients for the dummies and e is the error term of the model.
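Since the regressors are just dummy variables for the methods, the model with distribution="norm" can be reproduced directly with base R's lm(). A minimal sketch on simulated errors (the data and method names here are illustrative, not part of the package):

```r
# Simulated forecast errors for three hypothetical methods
set.seed(1)
errors <- matrix(rnorm(100*3), 100, 3,
                 dimnames=list(NULL, c("A","B","C")))

y <- as.vector(errors)                              # vector of the provided data
method <- factor(rep(colnames(errors), each=nrow(errors)))
fit <- lm(y ~ method - 1)                           # one dummy per method, no intercept
coef(fit)                                           # estimated mean error of each method
```

With the intercept removed, the estimated coefficients are simply the column means of the data, which is what the confidence intervals are then built around.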

Depending on the provided data, it might make sense to use different types of regressions. The function supports Gaussian linear regression (distribution="norm", when the data is normal), advanced linear regression with the folded normal distribution (distribution="fnorm", for example, for absolute errors, assuming that the original errors are normally distributed) and advanced linear regression with the Chi-squared distribution (distribution="chisq", when the data is distributed as Chi^2, for example, squared normal errors).

The advisable error measures to use in the test are RelMAE and RelMSE, which are unbiased and whose logarithms are symmetrically distributed (Davydenko & Fildes, 2013). In fact, RelMSE should follow the F-distribution with h and h degrees of freedom, and its logarithm follows a log-F distribution, because each MSE * h has a chi-square(h) distribution (assuming that the forecast error is normal).

As for RelMAE, its distribution is trickier, because each MAE has folded normal distribution (assuming that the original error is normal) and their ratio is something complicated, but tractable (Kim, 2006).

Still, given large samples, the parameters of the regression on the logarithms of both RelMAE and RelMSE should have a normal distribution. Thus distribution="norm" can be used in this case (see examples).
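As a quick illustration of the measure discussed above, log RelMSE can be computed from two series of forecast errors as follows (a sketch on simulated data; the variable names are illustrative, not part of the package):

```r
# Hypothetical forecast errors over h origins for a benchmark and a competitor
set.seed(42)
h <- 20
errorsBench <- rnorm(h)            # benchmark method's forecast errors
errorsComp  <- rnorm(h, sd=1.2)    # competing method's forecast errors

relMSE <- mean(errorsComp^2) / mean(errorsBench^2)   # ratio of MSEs, ~ F(h, h)
logRelMSE <- log(relMSE)           # roughly symmetric, suitable for distribution="norm"
```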

If you use distribution="fnorm" or distribution="chisq", then the inverse link is used in the Gamma distribution, so the parameters have an inverse meaning as well; i.e. the method with the lower MSE-based measure will have a higher parameter.

The test is equivalent to the nemenyi test when applied to the ranks of the error measures on large samples.

References

See Also

alm

Examples

N <- 50
M <- 4
ourData <- matrix(rnorm(N*M,mean=0,sd=1), N, M)
ourData[,2] <- ourData[,2]+1
ourData[,3] <- ourData[,3]+0.7
ourData[,4] <- ourData[,4]+0.5
colnames(ourData) <- c("Method A","Method B","Method C - long name","Method D")
rmc(ourData, distribution="norm", level=0.95)
# In case of AE-based measures, distribution="fnorm" should be selected
rmc(abs(ourData), distribution="fnorm", level=0.95)

# In case of SE-based measures, distribution="chisq" should be selected
rmc(ourData^2, distribution="chisq", level=0.95)

# APE-based measures should not be used in general...

# If RelMAE or RelMSE is used for measuring data, then it makes sense to use
# distribution="norm" and provide logarithms of the RelMAE, which can be approximated by
# normal distribution
ourData <- abs(ourData)
rmc(log(ourData / ourData[,1]), distribution="norm", level=0.95)

# The following example should give similar results to nemenyi test on
# large samples, which compares medians of the distributions:
rmc(t(apply(ourData,1,rank)), distribution="norm", level=0.95)

