weight.factor calculates weights by estimating a common factor analysis model with a single
factor for each indicator block and using the resulting estimates to calculate factor score
weights.

weight.principal calculates weights by performing a principal component analysis for each
indicator block and returning the weights for the first principal component.
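For intuition, blockwise principal component weights can be sketched directly from a
covariance matrix. The helper below is purely illustrative; its name and arguments are
assumptions and it is not part of the matrixpls API.

# Illustrative sketch only: first principal component weights for a single
# indicator block, computed from a covariance matrix. blockPrincipalWeights
# and its arguments are hypothetical, not matrixpls functions.
blockPrincipalWeights <- function(S, blockIndices) {
  Sblock <- S[blockIndices, blockIndices, drop = FALSE]
  # The leading eigenvector of the block covariance matrix gives the
  # weights of the first principal component
  eigen(Sblock, symmetric = TRUE)$vectors[, 1]
}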
Usage

weight.pls(S, model, W.model, outerEstimators = NULL,
  innerEstimator = inner.path, ..., convCheck = convCheck.absolute,
  variant = "lohmoller", tol = 1e-05, iter = 100, validateInput = TRUE)

weight.optim(S, model, W.model, parameterEstimator = params.separate,
  optimCriterion = optim.maximizeInnerR2, method = "BFGS", ...,
  validateInput = TRUE, standardize = TRUE)

weight.fixed(S, model, W.model = NULL, ..., standardize = TRUE)

weight.factor(S, model, W.model = NULL, ..., fm = "minres",
  standardize = TRUE)

weight.principal(S, model, W.model = NULL, ..., standardize = TRUE)
Arguments

S: Covariance matrix of the data.

model: Model specification, either in lavaan format or as a list containing three matrices inner, reflective, and formative defining the free regression paths in the model.

W.model: An optional numeric matrix representing the weight pattern and starting weights, i.e. how the indicators are combined to form the composites. See Details.

outerEstimators: A function or a list of functions used for outer estimation. If a list is given, composite n is estimated with the estimator in the nth position of the list. See outerEstimators.

innerEstimator: A function used for inner estimation. The default is inner.path. See innerEstimators.

...: All other arguments are passed through to the outer and inner estimators.

convCheck: A function that compares the weights from consecutive iterations and returns a value that is compared against tol to check for convergence. The default is convCheck.absolute.

variant: Choose either Lohmöller's ("lohmoller", default) or Wold's ("wold") variant of PLS. In Wold's variant the inner and outer estimation steps are repeated for each indicator block, whereas in Lohmöller's variant the weights for all composites are calculated simultaneously.

tol: Decimal value indicating the tolerance used when checking for convergence.

iter: Maximum number of iterations.

validateInput: A boolean indicating whether the validity of the input should be tested.

parameterEstimator: A function that takes three arguments, the data covariance matrix S, model specification model, and weights W, and returns a named vector of parameter estimates. The default is params.separate.

optimCriterion: A function that takes a matrixpls result and returns a scalar. The default is optim.maximizeInnerR2. See optimCriterion.

method: The minimization algorithm to be used. See optim for details. Default is "BFGS".

standardize: TRUE (default) or FALSE indicating whether the weights should be scaled to produce standardized composites.

fm: The factoring method used by weight.factor: one of minres, wls, gls, pa, and ml. The parameter is passed through to fa.

Value

A matrix of class "matrixplsweights" containing the weights. weight.pls additionally returns the following as an attribute:

converged: TRUE if the weight algorithm converged and FALSE otherwise.

Functions

weight.pls: Partial Least Squares and other iterative two-stage weight algorithms.
weight.optim: calculates a set of weights to minimize an optimization criterion.
weight.fixed: returns the starting weights.
weight.factor: blockwise factor score weights.
weight.principal: blockwise principal component weights.

Details

The model can be specified in the lavaan format or in the native matrixpls format. The native format is a list of three binary matrices, inner
, reflective
,
and formative
specifying the free parameters of a model: inner
(l x l
) specifies the
regressions between composites, reflective
(k x l
) specifies the regressions of observed
data on composites, and formative
(l x k
) specifies the regressions of composites on the
observed data. Here k
is the number of observed variables and l
is the number of composites.

If the model is specified in lavaan format, the native
format model is derived from this model by assigning all regressions between latent
variables to inner
, all factor loadings to reflective
, and all regressions
of latent variables on observed variables to formative
. Regressions between
observed variables and all free covariances are ignored. All parameters that are
specified in the model will be treated as free parameters.
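As an illustration of this mapping, a small model written in lavaan syntax could look like
the sketch below (the composite and indicator names are hypothetical). The =~ loadings would
be assigned to reflective and the regression between the latent variables to inner.

# Illustrative lavaan-format model; the names C1, C2, and x1-x5 are hypothetical.
lavaanModel <- "
  C1 =~ x1 + x2 + x3
  C2 =~ x4 + x5
  C2 ~ C1
"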
The original papers about Partial Least Squares, as well as many of the current PLS
implementations, impose restrictions on the matrices inner
,
reflective
, and formative
: inner
must be a lower triangular matrix,
reflective
must have exactly one non-zero value on each row and must have at least
one non-zero value on each column, and formative
must only contain zeros.
Some PLS implementations allow formative
to contain non-zero values, but impose a
restriction that the sum of reflective
and t(formative)
must satisfy
the original restrictions of reflective
. The only restrictions that matrixpls
imposes on inner
, reflective
, and formative
are that these must be
binary matrices and that the diagonal of inner
must be zeros.
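These restrictions can be expressed as simple checks on the model matrices, as in the sketch
below (assuming inner, reflective, and formative are defined as described above):

# Sketch of the only restrictions matrixpls imposes on the model matrices:
# all three must be binary and the diagonal of inner must be zeros.
stopifnot(all(inner %in% 0:1),
          all(reflective %in% 0:1),
          all(formative %in% 0:1),
          all(diag(inner) == 0))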
The argument W.model
is a (l x k
) matrix that indicates
how the indicators are combined to form the composites. The original papers about
Partial Least Squares as well as all current PLS implementations define this as
t(reflective) | formative
, which means that the weight pattern must match the
model specified in reflective
and formative
. Matrixpls does not
require W.model
to match reflective
and formative
, but
accepts any numeric matrix. If this argument is not specified, all elements of W.model
that
correspond to non-zero elements in the reflective
or formative
matrices receive the value 1.
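For example, the default weight pattern described above could be constructed from the model
matrices roughly as follows (a sketch, not the internal matrixpls code):

# Sketch: default weight pattern. An element of W.model is 1 whenever the
# corresponding element of t(reflective) or formative is non-zero.
W.model <- 1 * ((t(reflective) != 0) | (formative != 0))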
weight.pls
calculates indicator weights by calling the
innerEstimator
and outerEstimators
iteratively until either the convergence criterion or
maximum number of iterations is reached and provides the results in a matrix.
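Conceptually, the iteration can be sketched as a generic alternating loop. The innerEstimate
and outerEstimate placeholders below are hypothetical and their signatures do not match the
actual matrixpls estimator functions.

# Conceptual sketch of the weight.pls iteration. innerEstimate and
# outerEstimate are hypothetical placeholders, not matrixpls functions.
plsLoopSketch <- function(S, W.start, innerEstimate, outerEstimate,
                          convCheck, tol = 1e-05, iter = 100) {
  W <- W.start
  for (i in seq_len(iter)) {
    E <- innerEstimate(S, W)          # inner estimation step
    W.new <- outerEstimate(S, W, E)   # outer estimation step
    converged <- convCheck(W.new, W) < tol
    W <- W.new
    if (converged) break              # stop when the change is below tol
  }
  W
}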
weight.optim
calculates indicator weights by optimizing them against the
criterion function using optim. The
algorithm works by first estimating the model with the starting weights. The
resulting matrixpls
object is passed to the optimCriterion
function, which evaluates the optimization criterion for the weights. The
weights are adjusted and new estimates are calculated until the optimization
criterion converges.
Examples

library(matrixpls)  # provides matrixpls() and r2()
library(plspm)      # provides the satisfaction data set

# Run the customer satisfaction example from plspm
# load dataset satisfaction
data(satisfaction)
# inner model matrix
IMAG = c(0,0,0,0,0,0)
EXPE = c(1,0,0,0,0,0)
QUAL = c(0,1,0,0,0,0)
VAL = c(0,1,1,0,0,0)
SAT = c(1,1,1,1,0,0)
LOY = c(1,0,0,0,1,0)
inner = rbind(IMAG, EXPE, QUAL, VAL, SAT, LOY)
colnames(inner) <- rownames(inner)
# Reflective model
# (indicator blocks: 1:5, 6:10, 11:15, 16:19, 20:23, 24:27)
reflective <- matrix(
c(1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1),
27, 6, dimnames = list(colnames(satisfaction)[1:27], colnames(inner)))
# empty formative model
formative <- matrix(0, 6, 27, dimnames = list(colnames(inner), colnames(satisfaction)[1:27]))
# Estimation using covariance matrix
model <- list(inner = inner,
reflective = reflective,
formative = formative)
S <- cov(satisfaction[,1:27])
matrixpls.ModeA <- matrixpls(S, model)
matrixpls.ModeB <- matrixpls(S, model, outerEstimators = outer.modeB)
matrixpls.MaxR2 <- matrixpls(S, model, weightFunction = weight.optim)
# Compare the R2s from the different estimations
R2s <- cbind(r2(matrixpls.ModeA), r2(matrixpls.ModeB), r2(matrixpls.MaxR2))
print(R2s)
apply(R2s,2,mean)
# Optimization against custom function
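# The criterion returns the negative sum of correlations between connected
# composites (attribute C is the composite correlation matrix); optim
# minimizes, so minimizing the negative maximizes the sum.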
maximizeSumOfCorrelations <- function(matrixpls.res){
C <- attr(matrixpls.res,"C")
model <- attr(matrixpls.res,"model")
- sum(C[model$inner != 0])
}
matrixpls.MaxCor <- matrixpls(S, model, weightFunction = weight.optim,
optimCriterion = maximizeSumOfCorrelations)
# Compare the Mode B and optimized solutions
C <- attr(matrixpls.ModeB,"C")
print(C)
print(sum(C[inner != 0]))
C <- attr(matrixpls.MaxCor,"C")
print(C)
print(sum(C[inner != 0]))