SMART(dimension,
limit_state_function,
N1 = 10000,
N2 = 50000,
N3 = 200000,
Nu = 50,
lambda1 = 7,
lambda2 = 3.5,
lambda3 = 1,
tune_cost = c(1,10,100,1000),
tune_gamma = c(0.5,0.2,0.1,0.05,0.02,0.01),
clusterInMargin = TRUE,
alpha_margin = 1,
k1 = round(6*(dimension/2)^(0.2)),
k2 = round(12*(dimension/2)^(0.2)),
k3 = k2 + 16,
learn_db = NULL,
lsf_value = NULL,
failure = 0,
limit_fun_MH = NULL,
sampling_strategy = "MH",
seeds = NULL,
seeds_eval = NULL,
burnin = 30,
thinning = 4,
plot = FALSE,
limited_plot = FALSE,
add = FALSE,
output_dir = NULL,
z_MH = NULL,
z_lsf = NULL,
verbose = 0)

Arguments

dimension: the dimension of the input space.
limit_state_function: the limit state function.
N1: number of samples for the Localisation stage.
N2: number of samples for the Stabilisation stage.
N3: number of samples for the Convergence stage; the final estimate is a crude Monte-Carlo over N3 standard samples.
Nu: size of the initial design of experiments (DOE).
lambda1: relaxing parameter of the Metropolis-Hastings algorithm for the Localisation stage.
lambda2: relaxing parameter of the Metropolis-Hastings algorithm for the Stabilisation stage.
lambda3: relaxing parameter of the Metropolis-Hastings algorithm for the Convergence stage.
tune_cost: candidate values for tuning the cost parameter of the SVM.
tune_gamma: candidate values for tuning the gamma parameter of the SVM.
clusterInMargin: new design points are selected as centres of clusters of the N1, N2 or N3 points lying in the margin. Thus, they are not necessarily located in the margin themselves. This boolean, if TRUE, enforces the selected points to lie in the margin.
alpha_margin: a real value defining the margin; 1 corresponds to the standard SVM margin.
k1: rank of the first iteration of the Stabilisation stage (Localisation iterations run from 1 to k1-1).
k2: rank of the first iteration of the Convergence stage (Stabilisation iterations run from k1 to k2-1).
k3: rank of the last iteration (Convergence iterations run from k2 to k3).
learn_db: coordinates of already evaluated points, as a matrix of size dimension x number_of_vector.
lsf_value: values of the limit_state_function on learn_db.
failure: failure threshold; the failure domain is defined as { x | limit_state_function(x) < failure }.
limit_fun_MH: optional limit function restricting the sampling domain; as for limit_state_function, its failure domain is defined by the points whose values of limit_fun_MH are lower than the threshold.
sampling_strategy: strategy used to generate the working populations, "MH" for Metropolis-Hastings or "AR" for accept-reject.
seeds: if sampling_strategy=="MH", seeds from which to start the Metropolis-Hastings algorithm. This should be a matrix with dimension rows, one seed per column.
seeds_eval: values of limit_fun_MH on the seeds.
burnin: burn-in parameter of the Metropolis-Hastings algorithm.
thinning: thinning parameter of the Metropolis-Hastings algorithm; thinning = 0 means no thinning.
plot: if TRUE, the progress of the algorithm is plotted; this requires the evaluation of the limit_state_function on a grid of size 161x161.
limited_plot: draw only the final plot with the limit_state_function, the final DOE and the metamodel. Should be used with plot==FALSE. As for plot, it requires the evaluation of the limit_state_function on a grid of size 161x161.
add: if TRUE, add the plots to an existing graphical device.
output_dir: optional directory where the plots are saved.
z_MH: values of limit_fun_MH on the plot grid; z_MH (from an outer function) can be provided to avoid extra computational time.
z_lsf: values of the limit_state_function on the plot grid; z_lsf (from an outer function) can be provided to avoid extra computational time.
verbose: verbosity level of the algorithm; 0 produces almost no output.

Value

A list containing the failure probability and some more outputs as described below:
- the total number of calls to the limit_state_function;
- the final learning database, i.e. all the points where the limit_state_function has been calculated;
- the values of the limit_state_function on the learning database;
- the metamodel approximation of the limit_state_function, whose call output is a list containing the value and the standard deviation;
- if plot==TRUE, the evaluation of the metamodel on the plot grid.

Details

SMART is a reliability method proposed by J.-M. Bourinet et al. It makes use of an SVM-based metamodel to approximate the limit state function and calculates the failure probability with a crude Monte-Carlo method applied to the metamodel-based limit state function. As SVM is a classification method, it uses the limit state function values to create two classes: greater than and lower than the failure threshold. The border between the two classes is then taken as a surrogate of the limit state function.
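The classification idea can be sketched outside the package. The following minimal example is an illustration only: it assumes the e1071 package and a plain space-filling DOE instead of the adaptive refinement used by SMART, trains an SVM on the sign of a limit state function and runs a crude Monte-Carlo on the resulting border.

# Illustration only: an SVM classifier used as a surrogate of the limit state
# function, followed by a crude Monte-Carlo estimate on that surrogate.
library(e1071)
kiureghian = function(x, b = 5, kappa = 0.5, e = 0.1) {b - x[2] - kappa*(x[1]-e)^2}
set.seed(1)
# Space-filling DOE (SMART instead refines its DOE adaptively around the margin)
X = matrix(runif(2*200, min = -6, max = 6), ncol = 2)
y = factor(apply(X, 1, kiureghian) < 0)          # two classes: failure / safety
svm_meta = svm(X, y, kernel = "radial", cost = 100, gamma = 0.5)
# Crude Monte-Carlo in the standard space using only the surrogate
U = matrix(rnorm(2*1e5), ncol = 2)
p_hat = mean(predict(svm_meta, U) == "TRUE")
p_hat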
Concerning the refinement strategy, it distinguishes three stages, known as the Localisation, Stabilisation and Convergence stages. The first one is intended to reduce the margin as much as possible, the second one focuses on switching points, while the last one works on the final Monte-Carlo population and is designed to ensure a strong margin; see F. Deheeger's PhD thesis for more information.
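With the default values of k1, k2 and k3, the number of iterations spent in each stage depends only on the dimension. The helper below is not part of the package; it is written here only to illustrate the scheduling implied by the defaults.

# Default iteration ranks of the three stages as a function of the input dimension
stages = function(dimension) {
  k1 = round(6*(dimension/2)^(0.2))
  k2 = round(12*(dimension/2)^(0.2))
  k3 = k2 + 16
  c(Localisation = k1 - 1,        # iterations 1 to k1-1
    Stabilisation = k2 - k1,      # iterations k1 to k2-1
    Convergence = k3 - k2 + 1)    # iterations k2 to k3
}
stages(2)     # 5 Localisation, 6 Stabilisation and 17 Convergence iterations
stages(10)    # higher dimension: slightly more Localisation and Stabilisation iterations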
See Also

SubsetSimulation
MonteCarlo
svm (in package e1071)

Examples

# Limit state function defined by Kiureghian & Dakessian:
kiureghian = function(x, b=5, kappa=0.5, e=0.1) {b - x[2] - kappa*(x[1]-e)^2}
SMART_estim = SMART(dimension = 2, limit_state_function = kiureghian, plot = TRUE)
# Reference value with a crude Monte-Carlo estimate
MC_estim = MonteCarlo(2, kiureghian, N_max = 500000)
# Limit state function defined by Waarts:
waarts = function(u) { min(
(3+(u[1]-u[2])^2/10 - (u[1]+u[2])/sqrt(2)),
(3+(u[1]-u[2])^2/10 + (u[1]+u[2])/sqrt(2)),
u[1]-u[2]+7/sqrt(2),
u[2]-u[1]+7/sqrt(2))
}
SMART_estim = SMART(dimension = 2, limit_state_function = waarts, plot = TRUE)
# Reference value with a crude Monte-Carlo estimate
MC_estim = MonteCarlo(2, waarts, N_max = 500000)
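Since learn_db and lsf_value accept previously evaluated points, a run can reuse earlier calls to the limit state function. A sketch, assuming the waarts function defined above:

# Warm start from points whose limit state values are already known;
# learn_db is a dimension x number_of_vector matrix, lsf_value the matching values.
set.seed(2)
X_known = matrix(rnorm(2*20), nrow = 2)      # 20 points in dimension 2
y_known = apply(X_known, 2, waarts)          # limit state values at these points
SMART_warm = SMART(dimension = 2, limit_state_function = waarts,
                   learn_db = X_known, lsf_value = y_known)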