awsh(y, x = NULL, p = 0, sigma2 = NULL, qlambda = NULL, eta = 0.5, tau = NULL,
lkern = "Triangle", hinit = NULL, hincr = NULL, hmax = 100, hmaxs = 2*hmax, u = NULL,
graph = FALSE, demo = FALSE, symmetric = NULL, conf = FALSE, qconf = 0.95, alpha = 2)
y
contains the observed values (regression function plus errors).
If x=NULL (the second parameter), y is assumed to be
observed on a one-dimensional grid.
x
is either NULL, in which case y is assumed
to be observed on a grid, or a vector determining the design.
p
is the degree of the polynomial model to use. For univariate
regression p can be a nonnegative integer less than or equal to 5.
sigma2
can be used to provide an estimate for the error
variance. If is.null(sigma2) a homoskedastic model is assumed and
a variance estimate is generated from the data. If length(sigma2)==length(y)
the values of sigma2 are used as pointwise (heteroskedastic) variance
estimates; a sketch illustrating this follows the argument descriptions.
qlambda
determines the scale parameter for the stochastic
penalty. The scaling parameter lambda in the stochastic
penalty is chosen as the qlambda-quantile
of the distribution of the stochastic penalty under the hypothesis of homogeneity.
eta
is a memory parameter used to stabilize the procedure.
eta has to be between 0 and 1, with
eta=.5 being the default.
tau
is used in case of a polynomial degree p!=0
only. It is the scale parameter in the extension
penalty used to prevent leverage problems. An appropriate
default value is chosen if tau=NULL.
lkern
determines the location kernel to be used. Options
are "Uniform", "Triangle", "Quadratic",
"Cubic" and "Exponential". Default is "Triangle".
hinit
Initial bandwidth for the location penalty.
An appropriate value is chosen in case of hinit=NULL.
hincr
hincr
is used as a factor to increase the
bandwidth between iterations. Defaults to hincr=1.25.
hmax
Maximal bandwidth to be used. Determines the
number of iterations and is used as the stopping rule.
hmaxs
Maximal bandwidth to be used when estimating the
heterogeneous variance from consecutive differences of y
by the function laws.
Determines the number of iterations of laws.
u
used to supply values of the true regression function
for test purposes to calculate Mean Squared Error (MSE) and
Mean Absolute Error (MAE).
graph
if TRUE
results are displayed after each
iteration step.
demo
if TRUE
after each iteration step results
are displayed and the process waits for user interaction.
symmetric
if TRUE
the stochastic penalty is
symmetrized, i.e. (sij + sji)/lambda
is used instead of
sij/lambda
. See references for details.
symmetric==FALSE
is forced if p!=0.
conf
if TRUE
conditional (on weights) confidence intervals are provided
at each design point.
qconf
determines the level of the conditional (on weights) confidence intervals.
alpha
Parameter used for a penalized MSE estimate for p=0.
This is experimental and intended to help select hmax.
The returned object contains components y and x.
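The following minimal sketch illustrates the heteroskedastic use of sigma2 mentioned above. It is only an illustration under assumptions not taken from this page: the package providing awsh is attached, and the simulated signal, noise pattern and bandwidth are arbitrary choices.

## sketch: pointwise variances supplied via sigma2 (length(sigma2)==length(y))
set.seed(2)
n    <- 400
x    <- seq(0, 1, length = n)
f    <- ifelse(x < 0.5, 0, 1)        # piecewise constant regression function
sd.x <- 0.1 + 0.4 * x                # illustrative heteroskedastic noise level
y    <- f + rnorm(n, sd = sd.x)
fit  <- awsh(y, p = 0, sigma2 = sd.x^2, hmax = 50)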
Adaptive weights smoothing is an iterative data adaptive smoothing technique that
is designed for smoothing in regression problems with a discontinuous regression
function. The basic assumption is that the regression function can be approximated
by a simple local model, e.g. a local constant or local polynomial model.
The estimate of the regression function, i.e. the conditional expectation of y
given x, is computed as a weighted maximum likelihood estimate, with weights chosen
in a completely data adaptive way. The procedure is edge preserving. If the assumed local
model is globally valid, almost all weights used will be 1, i.e. the resulting estimate
is almost the global estimate.
Which of the currently implemented models is used is specified by the parameter p
and the attributes of x and y.
The essential parameter in the procedure is qlambda
. This parameter has an
interpretation as a significance level of a test for equivalence of two local
parameter estimates. Optimal values mainly depend on the chosen p
and the value of symmetric
(which determines the use of an asymmetric or a symmetrized
test). The optimal values only slightly depend on the model parameters, i.e. the
default parameters should work in most situations. Larger values of qlambda
may lead to oversmoothing, while small values of qlambda
lead to a random segmentation
of homogeneous regions. A good value of qlambda
can be obtained by the propagation
condition, requiring that in case of global validity of the local model the
estimate for large hmax
should be equal to the global estimate.
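The propagation condition can be checked empirically along the lines sketched below. This is only a sketch under assumptions not stated on this page: the package providing awsh is attached, and the component of the returned object holding the estimates is assumed to be named theta.

## sketch: propagation condition for a globally constant model
set.seed(1)
y0  <- rnorm(1000)                    # globally valid local constant model
fit <- awsh(y0, p = 0, hmax = 250)    # default qlambda, large hmax
## under the propagation condition the AWS estimate should be close to
## the global estimate mean(y0); 'theta' is an assumed component name
range(fit$theta - mean(y0))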
The numerical complexity of the procedure is mainly determined by hmax
. The number
of iterations is d*log(hmax/hinit)/log(hincr)
with d
being the dimension
of y
. Complexity in each iteration step is Const*hakt*n
with hakt
being the actual bandwidth in the iteration step and n
the number of design points.
hmax
determines the maximal possible variance reduction.
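For orientation, the iteration count given by this formula can be evaluated directly; the values of d and hinit below are illustrative assumptions (hinit=NULL normally lets the procedure choose its own initial bandwidth).

## number of iterations implied by the stopping rule
d     <- 1        # dimension of y
hinit <- 1        # assumed initial bandwidth
hincr <- 1.25     # default bandwidth increase factor
hmax  <- 100      # default maximal bandwidth
ceiling(d * log(hmax / hinit) / log(hincr))   # about 21 steps for these values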
See also: aws
##---- Should be DIRECTLY executable !! ----
##-- ==> Define data, use random,
##-- or do help(data=index) for the standard data sets.
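A minimal, hedged example sketch (the simulation settings are arbitrary; the package providing awsh must be attached):

## local constant AWS on a step function; the true function is passed
## via u so that MSE and MAE are reported
set.seed(3)
u <- rep(c(0, 1, 0), c(200, 150, 150))   # piecewise constant truth, n = 500
y <- u + rnorm(500, sd = 0.3)
fit <- awsh(y, p = 0, hmax = 100, u = u, graph = FALSE)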