mistral (version 1.1-0)

AKMCS: Active learning reliability method combining Kriging and Monte Carlo Simulation

Description

Estimate a failure probability with the AKMCS method.

Usage

AKMCS(dimension,
      limit_state_function,
      N                 = 500000,
      N1                = 10*dimension,
      Nmax              = 200,
      learn_db          = NULL,
      lsf_value         = NULL,
      failure           = 0.0,
      precision         = 0.05,
      meta_model        = NULL,
      kernel            = "matern5_2",
      learn_each_train  = FALSE,
      crit_min          = 2,
      limit_fun_MH      = NULL,
      sampling_strategy = "MH",
      first_DOE         = "Gaussian",
      seeds             = NULL,
      seeds_eval        = NULL,
      burnin            = 30,
      thinning          = 4,
      plot              = FALSE,
      limited_plot      = FALSE,
      add               = FALSE,
      output_dir        = NULL,
      z_MH              = NULL,
      z_lsf             = NULL,
      verbose           = 0)

Arguments

dimension
an integer giving the dimension of the input space.
limit_state_function
the failure function.
N
an integer defining the Monte-Carlo population size for probability estimation.
N1
an integer defining the size of the first Design Of Experiments, obtained by clustering the N standard Gaussian samples.
Nmax
an integer defining the maximum number of calls to the limit state function during the refinement steps. The total number of calls is thus N1 + Nmax.
learn_db
optional. A matrix of already known points, with dimension rows and one column per vector.
lsf_value
values of the limit_state_function on the vectors given in learn_db.
failure
the value defining the failure domain F = { x | limit_state_function(x) < failure }.
precision
the maximum value of the coefficient of variation for the probability estimate. If the first run with N samples gives a too large cov, the necessary N is derived from that cov and the Monte-Carlo estimation is run again.
meta_model
optional. If a kriging based metamodel has already been fitted to the data (from DiceKriging package) it can be given as an input to keep the same parameters.
kernel
a specified kernel to be used for the metamodel. See DiceKriging for available options.
learn_each_train
specifies whether the hyperparameters of the model should be re-estimated each time points are added to the learning database (TRUE) or only the first time (FALSE).
crit_min
the minimum value of the criterion to be used for refinement step.
limit_fun_MH
optional. A function reducing the working space to some subset, e.g. when used within a Subset Simulation algorithm. As for the limit_state_function, the failure domain is defined by the points whose limit_fun_MH values are negative.
sampling_strategy
either "AR" or "MH", specifying the sampling strategy used when generating the Monte-Carlo population in the case of subset simulation: "AR" stands for accept-reject, "MH" for Metropolis-Hastings.
first_DOE
either "Gaussian" or "Uniform", specifying the population on which clustering is done.
seeds
optional. If sampling_strategy=="MH", the seeds from which the Metropolis-Hastings algorithm is started. This should be a matrix with nrow = dimension and ncol = the number of vectors.
seeds_eval
optional. The value of the limit_fun_MH on the seeds.
burnin
a burn-in parameter for the Metropolis-Hastings algorithm.
thinning
a thinning parameter for the Metropolis-Hastings algorithm; thinning = 0 means no thinning.
plot
a boolean parameter specifying whether the function and the samples should be plotted. The plot is refreshed at each iteration with the new data. Note that this option should only be used with light limit state functions, as it requires evaluating the limit_state_function on a grid of size 161x161.
limited_plot
only a final plot with the limit_state_function, the final DOE and the metamodel. Should be used with plot==FALSE. As with plot, it requires evaluating the limit_state_function on a grid of size 161x161.
add
optional. TRUE if plots are to be added to the current active device.
output_dir
optional. A directory where plots are saved as .jpeg files; "_AKMCS.jpeg" is appended to this path to build the full output file name.
z_MH
optional. For plots, if metamodel has already been evaluated on the grid then z_MH (from outer function) can be provided to avoid extra computational time.
z_lsf
optional. For plots, if LSF has already been evaluated on the grid then z_lsf (from outer function) can be provided to avoid extra computational time.
verbose
either 0 for almost no output, 1 for medium verbosity, or 2 for full output.
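To illustrate how the precision argument interacts with N: the coefficient of variation of a crude Monte-Carlo estimate is cov = sqrt((1-P)/(N*P)) (the formula used in the Examples below), and inverting it gives the population size needed to reach a target cov. required_N below is an illustrative helper, not part of the package.

```r
# Illustrative helper (not part of mistral): invert the crude Monte-Carlo
# coefficient-of-variation formula cov = sqrt((1 - P) / (N * P)) to get
# the population size needed for a target precision.
required_N <- function(P, precision = 0.05) {
  ceiling((1 - P) / (P * precision^2))
}
required_N(1e-3)  # 399600: roughly 4e5 samples for a probability near 1e-3
```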

Value

An object of class list containing the failure probability and some more outputs as described below:
  • proba: the estimated failure probability.
  • cov: the coefficient of variation of the Monte-Carlo probability estimate.
  • Ncall: the total number of calls to the limit_state_function.
  • learn_db: the final learning database, i.e. all points where the limit_state_function has been calculated.
  • lsf_value: the values of the limit_state_function on the learning database.
  • meta_fun: the metamodel approximation of the limit_state_function. A call output is a list containing the value and the standard deviation.
  • meta_model: the final metamodel, an S4 object from DiceKriging.
  • points: the points in the failure domain according to the metamodel.
  • meta_eval: the evaluation of the metamodel on these points.
  • z_meta: if plot==TRUE, the evaluation of the metamodel on the plot grid.

Details

The AKMCS strategy is based on an original Monte-Carlo population which is classified with a kriging-based metamodel. This means that no sampling is done during the refinement steps. The algorithm tries to classify this Monte-Carlo population with a confidence greater than a given value: for instance, the distance to the failure threshold should be greater than crit_min standard deviations. While this criterion is not met, the point minimizing it is added to the learning database and then evaluated. Finally, once all points are classified or the maximum number of calls has been reached, crude Monte-Carlo is performed. A final test checks the size of this population against the targeted coefficient of variation; if it is too small, a new population of sufficient size (considering the order of magnitude of the found probability) is generated and the algorithm is run again.
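The classification criterion above can be sketched in a few lines. This is an illustrative mock of the learning function, not the package internals: a point is considered safely classified when its kriging prediction lies more than crit_min standard deviations away from the failure threshold, and while some point violates this, the minimizer is the next candidate for evaluation.

```r
# Illustrative sketch (not mistral's internals) of the refinement criterion:
# distance to the failure threshold, measured in kriging standard deviations.
U_crit <- function(mean_pred, sd_pred, failure = 0) {
  abs(mean_pred - failure) / sd_pred
}

set.seed(1)
mean_pred <- rnorm(10)           # hypothetical kriging means on the MC population
sd_pred   <- runif(10, 0.1, 1)   # hypothetical kriging standard deviations
U <- U_crit(mean_pred, sd_pred)

crit_min <- 2
# While min(U) < crit_min, the minimizer would be evaluated with the true
# limit state function and added to the learning database:
next_point <- which.min(U)
```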

References

  • B. Echard, N. Gayton, M. Lemaire: AK-MCS: an Active learning reliability method combining Kriging and Monte Carlo Simulation. Structural Safety, Elsevier, 2011.
  • B. Echard, N. Gayton, M. Lemaire, N. Relun: A combined Importance Sampling and Kriging reliability method for small failure probabilities with time-demanding numerical models. Reliability Engineering & System Safety, 2012.
  • B. Echard, N. Gayton, A. Bignonnet: A reliability analysis method for fatigue design. International Journal of Fatigue, 2014.

See Also

SubsetSimulation, MonteCarlo, km (in package DiceKriging)

Examples

#Limit state function defined by Kiureghian & Dakessian:
kiureghian = function(x, b=5, kappa=0.5, e=0.1) {b - x[2] - kappa*(x[1]-e)^2}

res = AKMCS(dimension=2,limit_state_function=kiureghian,plot=TRUE)

#Compare with crude Monte-Carlo reference value
N = 500000
dimension = 2
U = matrix(rnorm(dimension*N),dimension,N)
G = apply(U,2,kiureghian)
P = mean(G<0)
cov = sqrt((1-P)/(N*P))

#See impact of kernel choice with the Waarts function:
waarts = function(u) { min(
		(3+(u[1]-u[2])^2/10 - (u[1]+u[2])/sqrt(2)),
		(3+(u[1]-u[2])^2/10 + (u[1]+u[2])/sqrt(2)),
		u[1]-u[2]+7/sqrt(2),
		u[2]-u[1]+7/sqrt(2))
}

res = list()
res$matern5_2 = AKMCS(dimension=2, limit_state_function=waarts, plot=TRUE)
res$matern3_2 = AKMCS(dimension=2, limit_state_function=waarts, kernel="matern3_2", plot=TRUE)
res$gaussian  = AKMCS(dimension=2, limit_state_function=waarts, kernel="gauss", plot=TRUE)
res$exp       = AKMCS(dimension=2, limit_state_function=waarts, kernel="exp", plot=TRUE)

#Compare with crude Monte-Carlo reference value
N = 500000
dimension = 2
U = matrix(rnorm(dimension*N),dimension,N)
G = apply(U,2,waarts)
P = mean(G<0)
cov = sqrt((1-P)/(N*P))
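A hypothetical post-processing step for the kernel comparison above: since each AKMCS result is a plain list with a proba component (see the Value section), the estimates can be collected with sapply. Mocked values are used here so the snippet is self-contained; with real runs, reuse the res list built above.

```r
# Hypothetical post-processing (mocked results standing in for real runs):
res <- list(matern5_2 = list(proba = 2.2e-3),
            matern3_2 = list(proba = 2.3e-3))
probas <- sapply(res, function(r) r$proba)
probas  # named vector of failure probability estimates, one per kernel
```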
