emulator (version 1.2-20)

optimal.scales: Use optimization techniques to find the optimal scales

Description

Uses optimization techniques (either Nelder-Mead or simulated annealing) to find the optimal scales, or roughness lengths. Function optimal.scale() (i.e. singular) finds the single optimal scale under the assumption that the roughness is isotropic, so that all scales are identical.

Usage

optimal.scales(val, scales.start, d, use.like = TRUE, give.answers = FALSE,
               func = regressor.basis, ...)
optimal.scale(val, d, use.like = TRUE, give.answers = FALSE,
              func = regressor.basis, ...)

Arguments

val

Matrix with rows corresponding to points at which the function is known

scales.start

Initial guess for the scales (plural); see the Details section for an explanation

d

Vector of observations, one for each row of val

use.like

Boolean, with default TRUE meaning to use the likelihood as the objective function, and FALSE meaning to use a leave-one-out bootstrap estimator

give.answers

Boolean, with default FALSE meaning to return just the roughness lengths and TRUE meaning to return extra information as returned by optim()

func

Function used to determine basis vectors, defaulting to regressor.basis if not given

...

Extra parameters passed to optim() or optimize(). See examples for usage of this argument

Value

If give.answers takes the default value of FALSE, a vector of roughness lengths is returned. If TRUE, the output from optim() is returned directly; note that element par is then the logarithm of the desired roughness lengths, because the optimization routine operates with the logs of the lengths, as detailed in the Details section below.
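For example, a minimal sketch (assuming val and d.noisy as constructed in the Examples section below) of recovering the lengths themselves from the full output:

ans <- optimal.scales(val=val, scales.start=rep(1,6), d=d.noisy, give.answers=TRUE)
exp(ans$par)   ## exponentiate element "par" to recover the roughness lengths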

Details

Internally, these functions work with the logarithms of the roughness lengths, because the lengths are inherently positive and their logarithms can be optimized without constraint. However, note that the lengths themselves, not their logarithms, must be supplied to argument scales.start.

The reason that there are two separate functions is that optim() and optimize() are very different: optim() carries out multidimensional optimization, as needed by optimal.scales(), while optimize() carries out one-dimensional optimization over an interval, as needed by optimal.scale().
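To illustrate the difference with toy objective functions (a sketch only, nothing to do with the package internals):

f2 <- function(p) sum((p - c(1,2))^2)    ## toy multidimensional objective
optim(par=c(0,0), fn=f2)$par             ## optim() minimizes over a vector; approx c(1,2)

f1 <- function(x) (x - 3)^2              ## toy one-dimensional objective
optimize(f1, interval=c(0,10))$minimum   ## optimize() minimizes over an interval; approx 3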

References

  • J. Oakley 2004. Estimating percentiles of uncertain computer code outputs. Applied Statistics, 53(1), pp. 89-93.

  • J. Oakley 1999. Bayesian uncertainty analysis for complex computer codes. PhD thesis, University of Sheffield.

See Also

interpolant, corr

Examples

## First, load the package (this also loads mvtnorm, needed for rmvnorm()):
library("emulator")

## Define some scales:
fish <- c(1,1,1,1,1,4)

## and a sigmasquared value:
REAL.SIGMASQ <- 0.3

## and a real relation:
real.relation <- function(x){sum( (1:6)*x )}

## Now a design matrix:
val  <- latin.hypercube(100,6)

## apply the real relation:
d <- apply(val,1,real.relation)

## and add some suitably correlated Gaussian noise:
A <- corr.matrix(val,scales=fish)
d.noisy <- as.vector(rmvnorm(n=1, mean=d, sigma=REAL.SIGMASQ*A))

##  Now see if we can estimate the roughness lengths well.  Remember that
##  the true values are those held in vector "fish":
## (SANN with maxit=3 keeps the example fast; increase maxit for a real fit)
optimal.scales(val=val, scales.start=rep(1,6), d=d.noisy,
               method="SANN", control=list(trace=1000,maxit=3),
               give.answers=FALSE)
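## Illustrative extra step (not part of the original example): capture the
## estimate and set it against the true values held in "fish".  With only
## maxit=3 the agreement will be rough at best:
scales.est <- optimal.scales(val=val, scales.start=rep(1,6), d=d.noisy,
                             method="SANN", control=list(trace=1000,maxit=3),
                             give.answers=FALSE)
rbind(true=fish, estimated=scales.est)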



## Now a test of optimal.scale(), where there is only a single roughness
## scale to estimate.  This should be more straightforward:


## A fresh design matrix and isotropic scales:
df <- latin.hypercube(40,6)
fish2 <- rep(2,6)

## Correlated Gaussian noise, as before:
A2 <- corr.matrix(df,scales=fish2)
d.noisy <- as.vector(rmvnorm(n=1, mean=apply(df,1,real.relation), sigma=A2))

## Estimate the single scale using each objective function in turn:
jj.T <- optimal.scale(val=df, d=d.noisy, use.like=TRUE)
jj.F <- optimal.scale(val=df, d=d.noisy, use.like=FALSE)
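## Illustrative comparison (not part of the original example): both
## estimates should be broadly comparable with the true isotropic scale of 2:
c(likelihood=jj.T, bootstrap=jj.F)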

