gamlss.add (version 5.0-1)

penLags: Penalised Lag Regression Function

Description

The function penLags() fits a regression model to lags of an explanatory variable x or to lags of y itself. The estimated coefficients of the lags are penalised using a quadratic penalty similar to P-splines.
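
Conceptually (an illustrative sketch only, not the package's internal code), the model regresses y on a matrix of lagged copies of x, and the lag coefficients beta are shrunk towards each other by a quadratic difference penalty lambda * ||D beta||^2 of the kind used for P-splines:

# illustrative only: a matrix whose columns are x lagged by from.lag, ..., lags
lagMatrix <- function(x, lags = 10, from.lag = 0) {
  n <- length(x)
  sapply(from.lag:lags, function(l) c(rep(NA, l), x[seq_len(n - l)]))
}
# a first-order difference matrix D for 11 lag coefficients (see the order argument)
D <- diff(diag(11), differences = 1)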

Usage

penLags(y, x, lags = 10, from.lag=0, weights = NULL, data = NULL, df = NULL, 
        lambda = NULL, start.lambda = 10, order = 1, 
        plot = FALSE, method = c("ML", "GAIC"), k = 2, ...)

Arguments

y

The response variable

x

The explanatory variable, which can be the response variable itself if an autoregressive model is required

lags

The number of lags required

from.lag

From which lag to start; the default is zero, which means that the original x is included in the basis

weights

The prior weights

data

The data frame if needed

df

If not NULL, this argument sets the required effective degrees of freedom for the penalised fit

lambda

If not NULL, this argument sets the required smoothing parameter of the penalty

start.lambda

Starting value for lambda for the local ML estimation

order

The order of the difference penalty applied to the beta coefficients of the lags

plot

Whether to plot the data and the fitted values

method

The method for estimating the smoothing parameter, with two alternatives: i) ML, the local maximum likelihood estimation method (also known as the PQL method), and ii) GAIC, the generalised Akaike information criterion method (see the sketch at the end of this argument list)

k

The penalty to be used if the GAIC method is chosen, i.e. k=2 for AIC or k=log(n) for BIC (or SBC).

...

For further arguments
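
A brief sketch of how the smoothing-related arguments interact (the data-generating call is illustrative only; argument names are as in the Usage section above):

library(gamlss.add)
set.seed(123)
y <- arima.sim(n = 500, model = list(ar = c(.9, -.8)))
mdf  <- penLags(y, y, lags = 20, df = 5)          # fix the effective degrees of freedom
mlam <- penLags(y, y, lags = 20, lambda = 100)    # fix the smoothing parameter directly
mml  <- penLags(y, y, lags = 20, method = "ML")   # estimate lambda by local ML (the default)
mbic <- penLags(y, y, lags = 20, method = "GAIC",
                k = log(length(y)))               # estimate lambda by minimising BIC/SBC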

Value

Returns a penLags object, which has several methods.
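
For example, the following methods are applied to a fitted object m1 in the Examples section below (m1 <- penLags(y, y, lags = 20, order = 1)):

print(m1)            # print the fitted model
coef(m1)             # all fitted coefficients
coef(m1, "AR")       # the coefficients of the lags
resid(m1)            # residuals
AIC(m1)              # generalised Akaike information criterion
predict(m1, 100)     # prediction, as used in the Examples below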

Details

This function is designed for fitting a simple penalised lag regression model to a response variable. Simple here means that only one explanatory variable can be used (whether it is a true explanatory variable or the response variable itself) and that only a normal assumption is made for the response. For multiple explanatory variables and for different distributions within gamlss, use the additive function la() (a sketch follows below).
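
As a hedged sketch of that alternative (the exact interface of la() is not documented here; see ?la in gamlss.add for its arguments):

library(gamlss)
library(gamlss.add)
y <- as.numeric(arima.sim(n = 500, model = list(ar = c(.9, -.8))))
# la() used as an additive term inside gamlss(); lag-related settings (number of
# lags, penalty order, etc.) are passed to la() as documented in ?la
m <- gamlss(y ~ la(y), family = TF)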

References

Benjamin, M. A., Rigby, R. A. and Stasinopoulos, D. M. (2003) Generalised Autoregressive Moving Average Models. J. Am. Statist. Ass., 98, 214-223.

Rigby, R. A. and Stasinopoulos, D. M. (2005). Generalized additive models for location, scale and shape (with discussion). Appl. Statist., 54, part 3, pp 507-554.

Stasinopoulos, D. M., Rigby, R. A. and Akantziliotou, C. (2006) Instructions on how to use the GAMLSS package in R. Accompanying documentation in the current GAMLSS help files (see also http://www.gamlss.org/).

Stasinopoulos, D. M. and Rigby, R. A. (2007) Generalized additive models for location scale and shape (GAMLSS) in R. Journal of Statistical Software, Vol. 23, Issue 7, Dec 2007, http://www.jstatsoft.org/v23/i07.

Examples

# generating data
y <- arima.sim(n = 500, model = list(ar = c(.9, -.8)))
#----------------------------------
# fitting models with different penalty orders
m0 <- penLags(y, y, lags = 20, order = 0)
m1 <- penLags(y, y, lags = 20, order = 1)
m2 <- penLags(y, y, lags = 20, order = 2)
m3 <- penLags(y, y, lags = 20, order = 3)
# choosing the order
AIC(m0, m1, m2, m3)
#---------------------------------
# look at the AR coefficients of the models
op <- par(mfrow = c(2, 2))
plot(coef(m0, "AR"), type = "h")
plot(coef(m1, "AR"), type = "h")
plot(coef(m2, "AR"), type = "h")
plot(coef(m3, "AR"), type = "h")
par(op)
#-------------------------------
# refit and plot the model
m1 <- penLags(y, y, lags = 20, order = 1, plot = TRUE)

# looking at the residuals
plot(resid(m1))
acf(resid(m1))
pacf(resid(m1))
# or better use plot, wp or dtop
plot(m1, ts=TRUE)
wp(m1)
dtop(m1)
# the coefficients
coef(m1)
coef(m1, "AR")
coef(m1, "varComp")
#
print(m1)
#summary(m1)
# use prediction
plot(ts(c(y, predict(m1, 100))))
