MetaIS(dimension,
limit_state_function,
N = 500000,
N_alpha = 100,
N_DOE = 2*dimension,
N1 = N_DOE*30,
Ru = 8,
Nmin = 30,
Nmax = 200,
Ncall_max = 1000,
precision = 0.05,
N_seeds = 1,
Niter_seed = 10000,
N_alphaLOO = 5000,
K_alphaLOO = 2*dimension,
alpha_int = c(0.1,10),
k_margin = 1.96,
learn_db = NULL,
lsf_value = NULL,
failure = 0,
meta_model = NULL,
kernel = "matern5_2",
learn_each_train = FALSE,
limit_fun_MH = NULL,
sampling_strategy = "MH",
seeds = NULL,
seeds_eval = NULL,
burnin = 20,
thinning = 4,
plot = FALSE,
limited_plot = FALSE,
add = FALSE,
output_dir = NULL,
z_MH = NULL,
z_lsf = NULL,
verbose = 0)

Arguments

N_DOE: size of the initial DOE, obtained by clustering of the N1 samples.
Nmin: minimum number of calls to limit_state_function for the construction step.
Nmax: maximum number of calls to limit_state_function for the construction step.
Ncall_max: maximum number of calls to limit_state_function for the whole algorithm.
alpha_int: range for the alphaLOO criterion: alpha_int[1] < alphaLOO < alpha_int[2].
learn_db: coordinates of already evaluated points, given as a matrix of size dimension x number_of_vector.
lsf_value: values of limit_state_function on learn_db.
failure: value defining the failure domain F = { x | limit_state_function(x) < failure }.
limit_fun_MH: optional function restricting the sampling domain; as with limit_state_function, the failure domain is defined by the points whose values of limit_fun_MH are negative.
seeds: if sampling_strategy=="MH", seeds from which the MH algorithm starts. This should be a matrix with dimension rows.
seeds_eval: values of limit_fun_MH on the seeds.
thinning: thinning parameter of the MH algorithm; thinning = 0 means no thinning. To be used only for Monte-Carlo population sampling and set to 0 elsewhere.
limited_plot: if TRUE, draw only a final plot with limit_state_function, final DOE and metamodel. Should be used with plot==FALSE.
z_MH: z_MH (from an outer function) can be provided to avoid extra computational time.
z_lsf: z_lsf (from an outer function) can be provided to avoid extra computational time.

Value

A list containing the failure probability and some more outputs as described below:

Ncall: total number of calls to the limit_state_function.
learn_db: the points where the limit_state_function has been calculated.
lsf_value: the values of the limit_state_function on the learning database.
meta_fun: the metamodel approximation of the limit_state_function. A call output is a list containing the value and the standard deviation.
meta_eval: if plot==TRUE, the evaluation of the metamodel on the plot grid.

Details

The N_seeds are obtained from an accept-reject strategy on a standard population.
Once the criterion is reached or the maximum number of calls has been made, the augmented failure probability is estimated with a crude Monte-Carlo. Then, a new population is generated according to the quasi-optimal instrumental PDF; burnin and thinning are used here and alpha is evaluated. While the coefficient of variation of the alpha estimate is greater than a given threshold and some evaluation budget remains (defined by Ncall_max), the estimate is refined with extra calls.
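As a sketch of how the two estimates combine into the final result, in base R (the helper name and arguments here are illustrative, not the package's internals):

```r
# Hypothetical helper combining the two-stage estimate:
# p_epsilon: augmented failure probability, alpha: correction factor,
# cov_p and cov_alpha: their coefficients of variation.
combine_estimates <- function(p_epsilon, cov_p, alpha, cov_alpha) {
  list(p   = p_epsilon * alpha,            # final failure probability
       cov = sqrt(cov_p^2 + cov_alpha^2))  # squared cov's add up
}
combine_estimates(2e-3, 0.05, 0.8, 0.03)
```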
The final probability is the product of p_epsilon and alpha, and the final squared coefficient of variation is the sum of the squared coefficients of variation of the p_epsilon and alpha estimates.

See Also

SubsetSimulation
MonteCarlo
km (in package DiceKriging)

Examples

# Limit state function defined by Kiureghian & Dakessian:
kiureghian = function(x, b=5, kappa=0.5, e=0.1) {
x = as.matrix(x)
b - x[2,] - kappa*(x[1,]-e)^2
}
res = MetaIS(dimension=2,limit_state_function=kiureghian,plot=TRUE)
#Compare with crude Monte-Carlo reference value
N = 500000
dimension = 2
U = matrix(rnorm(dimension*N),dimension,N)
G = apply(U,2,kiureghian)
P = mean(G<0)
cov = sqrt((1-P)/(N*P))
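The crude Monte-Carlo comparison above can be wrapped into a small reusable helper (a base-R sketch; `crude_mc` is not part of the package):

```r
# Hypothetical helper wrapping the crude Monte-Carlo estimate used above:
crude_mc <- function(lsf, dimension, N = 1e5, failure = 0) {
  U <- matrix(rnorm(dimension * N), dimension, N)  # standard-normal sample
  G <- apply(U, 2, lsf)                            # limit-state values
  P <- mean(G < failure)                           # failure probability
  list(p = P, cov = sqrt((1 - P) / (N * P)))       # cov of the MC estimate
}
# Sanity check on a linear limit state g(u) = 3 - u[1],
# whose exact failure probability is pnorm(-3):
set.seed(1)
crude_mc(function(u) 3 - u[1], dimension = 2)
```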
#See impact of kernel choice with Waarts function :
waarts = function(x) {
x = as.matrix(x)
apply(x, 2, function(u) {min(
(3+(u[1]-u[2])^2/10 - (u[1]+u[2])/sqrt(2)),
(3+(u[1]-u[2])^2/10 + (u[1]+u[2])/sqrt(2)),
u[1]-u[2]+7/sqrt(2),
u[2]-u[1]+7/sqrt(2))})
}
res = list()
res$matern5_2 = MetaIS(2,waarts,plot=TRUE)
res$matern3_2 = MetaIS(2,waarts,kernel="matern3_2",plot=TRUE)
res$gaussian = MetaIS(2,waarts,kernel="gauss",plot=TRUE)
res$exp = MetaIS(2,waarts,kernel="exp",plot=TRUE)
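To see why the kernel matters, the 1-D correlation functions behind the choices above can be compared directly in base R (standard textbook forms; the exact parameterisation inside km may differ, and theta is an assumed length-scale):

```r
theta <- 1
h <- seq(0, 3, by = 0.5)                  # distances between two points
corr <- list(                             # standard 1-D correlation kernels
  gauss     = exp(-0.5 * (h/theta)^2),
  matern5_2 = (1 + sqrt(5)*h/theta + 5*h^2/(3*theta^2)) * exp(-sqrt(5)*h/theta),
  matern3_2 = (1 + sqrt(3)*h/theta) * exp(-sqrt(3)*h/theta),
  exp       = exp(-h/theta)
)
# All kernels equal 1 at h = 0 and decay monotonically with distance;
# they differ in the smoothness they impose on the metamodel.
sapply(corr, function(r) round(r, 3))
```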
#Compare with crude Monte-Carlo reference value
N = 500000
dimension = 2
U = matrix(rnorm(dimension*N),dimension,N)
G = apply(U,2,waarts)
P = mean(G<0)
cov = sqrt((1-P)/(N*P))