Usage
multi_optim(model, max.try = 10, lambda, LB = -Inf, UB = Inf, type,
optMethod = "nlminb", gradFun = "ram", pars_pen = NULL,
diff_par = NULL, hessFun = "none", verbose = TRUE, warm.start = FALSE,
Start2 = NULL, tol = 1e-06, max.iter = 50000)
Arguments
model
Lavaan output object. This is a model that was previously
run with any of the lavaan main functions: cfa(), lavaan(), sem(),
or growth(). It also can be from the efaUnrotate() function from
the semTools package. Note that some model features are not
currently supported.
max.try
Maximum number of random starts to try before giving up on
convergence.
lambda
Penalty value. Note: higher values make convergence issues
more likely.
LB
Lower bound vector. Note: this is very important to specify
when using regularization, as it greatly increases the chances of
converging.
UB
Upper bound vector.
type
Penalty type. Options include "none", "lasso", "ridge",
and "diff_lasso". "diff_lasso" penalizes the discrepancy between
parameter estimates and pre-specified values, which are given in
diff_par (see the diff_lasso sketch under Examples below).
optMethod
Solver to use. Recommended options include "nlminb" and
"optimx". Note: for "optimx", the default method is nlminb;
this can be changed with subOpt.
gradFun
Gradient function to use. "ram" is recommended; it refers to the
method specified in von Oertzen & Brick (2014). The "norm" procedure
uses the forward difference method for calculating the gradient,
which is slower and less accurate.
pars_pen
Parameter indicators to penalize. If NULL (the default), all
parameters in the A matrix apart from the intercepts are penalized
when lambda > 0 and type != "none" (see Examples below).
diff_par
Parameter values to deviate from. Only used when
type="diff_lasso".
hessFun
Hessian function to use. "ram" is recommended; it refers to the
method specified in von Oertzen & Brick (2014). The "norm" procedure
uses the forward difference method for calculating the Hessian,
which is slower and less accurate.
verbose
Whether to print the iteration number.
warm.start
Whether to use estimates from the previous attempt as starting
values for the next attempt. This is not recommended.
Start2
User-provided starting values. Not required.
tol
Absolute tolerance for convergence.
max.iter
Maximum number of iterations for the optimization.
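Examples
A minimal sketch of a typical call, for illustration only. The model,
variable names, and lambda value are assumptions, not taken from this
page; any identified lavaan model can be substituted.

library(lavaan)
library(regsem)

# One-factor CFA on lavaan's built-in Holzinger-Swineford data
mod <- 'f1 =~ x1 + x2 + x3 + x4 + x5 + x6'
fit <- cfa(mod, data = HolzingerSwineford1939)

# Lasso-penalized estimation, retrying up to 10 random starts
out <- multi_optim(fit, max.try = 10, lambda = 0.1, type = "lasso")
summary(out)  # inspect the penalized estimates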
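To penalize only a subset of parameters, pass their indices in
pars_pen. The indices below are hypothetical; the mapping from
parameters to indices is model-dependent.

# Lasso on three hypothetical parameter indices
out2 <- multi_optim(fit, max.try = 10, lambda = 0.1,
                    type = "lasso", pars_pen = c(1, 2, 3))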
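With type = "diff_lasso", diff_par supplies the target values, one per
penalized parameter. The targets below are hypothetical and are matched
in length to pars_pen.

# Penalize deviations from pre-specified parameter values
out3 <- multi_optim(fit, max.try = 10, lambda = 0.05,
                    type = "diff_lasso", pars_pen = c(1, 2, 3),
                    diff_par = rep(0.5, 3))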