
WALS (version 0.2.5)

semiorthogonalize: Internal function: Semiorthogonal-type transformation of X2 to Z2

Description

Uses the matrix Z2s (called \(\bar{\Xi}\) in eq. (9) of De Luca et al. (2018)) to transform \(\bar{X}_2\) to \(\bar{Z}_2\), i.e. to perform \(\bar{Z}_2 = \bar{X}_2 \bar{\Delta}_2 \bar{\Xi}^{-1/2}\). For WALS in the linear regression model, the variables are written without the "bar".

Usage

semiorthogonalize(Z2s, X2, Delta2, SVD = TRUE, postmult = FALSE)

Arguments

Z2s

Matrix whose negative square root is taken in the transformation \(X_2 \Delta_2 {Z2s}^{-1/2}\).

X2

Design matrix of auxiliary regressors to be transformed to Z2.

Delta2

Diagonal scaling matrix such that the diagonal of \(\bar{\Delta}_2 \bar{X}_2^{\top} \bar{M}_1 \bar{X}_2 \bar{\Delta}_{2}\) consists of ones (the scaling by \(n\) is ignored because it is not needed in the code). See De Luca et al. (2018).

SVD

If TRUE, uses svd to compute the eigendecomposition of Z2s; otherwise uses eigen.

postmult

If TRUE, then it uses \(Z2s^{-1/2} = T \Lambda^{-1/2} T^{\top}\), where \(T\) contains the eigenvectors of \(Z2s\) in its columns and \(\Lambda\) the corresponding eigenvalues. If FALSE it uses \(Z2s^{-1/2} = T \Lambda^{-1/2}\).
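For illustration, a minimal sketch of the computation described by these arguments (this is not the package's internal code; the helper name my_semiorthogonalize is hypothetical, and Z2s is assumed symmetric positive definite so that svd and eigen yield the same decomposition):

# Illustrative sketch only: Z2 = X2 %*% Delta2 %*% Z2s^{-1/2}
my_semiorthogonalize <- function(Z2s, X2, Delta2, SVD = TRUE, postmult = FALSE) {
  if (SVD) {
    dec <- svd(Z2s)                   # for symmetric p.d. Z2s: u = eigenvectors, d = eigenvalues
    Tmat <- dec$u; lambda <- dec$d
  } else {
    dec <- eigen(Z2s, symmetric = TRUE)
    Tmat <- dec$vectors; lambda <- dec$values
  }
  # negative square root of Z2s: T Lambda^{-1/2} (postmult = FALSE)
  # or T Lambda^{-1/2} T' (postmult = TRUE)
  Z2sInvSqrt <- Tmat %*% diag(1 / sqrt(lambda), nrow = length(lambda))
  if (postmult) Z2sInvSqrt <- Z2sInvSqrt %*% t(Tmat)
  X2 %*% Delta2 %*% Z2sInvSqrt
}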

On the "semiorthogonal-type" transformation

For WALS GLM (and WALS in the linear regression model), the transformation is semiorthogonal (the scaling by \(n\) is ignored for clarity and because it is not needed in the code) in the sense that \(\bar{M}_{1} \bar{Z}_{2}\) is semiorthogonal, since $$\bar{Z}_{2}^{\top} \bar{M}_1 \bar{Z}_{2} = (\bar{Z}_{2}^{\top} \bar{M}_1) (\bar{M}_{1} \bar{Z}_{2}) = (\bar{M}_{1} \bar{Z}_{2})^{\top} (\bar{M}_{1} \bar{Z}_{2}) = I_{k_2},$$ where \(\bar{M}_1\) is a symmetric and idempotent matrix.
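A small numerical check of this property in the linear regression model (toy data; this is an illustration, not package code):

set.seed(1)
n <- 100
X1 <- cbind(1, rnorm(n))                               # focus regressors
X2 <- matrix(rnorm(n * 3), n, 3)                       # auxiliary regressors
M1 <- diag(n) - X1 %*% solve(crossprod(X1)) %*% t(X1)  # symmetric, idempotent
Delta2 <- diag(1 / sqrt(diag(t(X2) %*% M1 %*% X2)))    # unit diagonal of Delta2 X2' M1 X2 Delta2
Z2s <- Delta2 %*% t(X2) %*% M1 %*% X2 %*% Delta2       # the matrix Xi
ed <- eigen(Z2s, symmetric = TRUE)
Z2 <- X2 %*% Delta2 %*% ed$vectors %*% diag(1 / sqrt(ed$values)) %*% t(ed$vectors)
round(t(Z2) %*% M1 %*% Z2, 10)                         # approximately the 3 x 3 identity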

For WALS in the NB2 regression model, \(\bar{M}_{1} \bar{Z}_{2}\) is no longer semiorthogonal because a rank-1 perturbation in \(\bar{M}_1\) destroys its idempotency; see the section "Transformed model" in Huynh (2024).

On the use of postmult = TRUE

The transformation of the auxiliary regressors \(Z_2\) for linear WALS in eq. (12) of Magnus and De Luca (2016) differs from the transformation for WALS GLM (and WALS NB) in eq. (9) of De Luca et al. (2018):

In Magnus and De Luca (2016), the transformed auxiliary regressors are

$$Z_{2} = X_2 \Delta_2 T \Lambda^{-1/2},$$

where \(T\) contains the eigenvectors of \(\Xi = \Delta_2 X_{2}^{\top} M_{1} X_{2} \Delta_2\) in its columns and \(\Lambda\) the corresponding eigenvalues. This definition is used when postmult = FALSE.

In contrast, De Luca et al. (2018) define

$$Z_2 = X_2 \Delta_2 T \Lambda^{-1/2} T^{\top},$$

where the scaling by \(n\) and the "bar" notation are omitted for easier comparison. This definition is used when postmult = TRUE and is strongly preferred for walsGLM and walsNB.
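Continuing the toy linear-model objects from the check above (again purely illustrative), the two definitions differ only by the orthogonal factor \(T^{\top}\), and both yield a semiorthogonal \(M_1 Z_2\):

Tmat <- ed$vectors
Linvsqrt <- diag(1 / sqrt(ed$values))
Z2_pre  <- X2 %*% Delta2 %*% Tmat %*% Linvsqrt              # postmult = FALSE
Z2_post <- X2 %*% Delta2 %*% Tmat %*% Linvsqrt %*% t(Tmat)  # postmult = TRUE
all.equal(Z2_post, Z2_pre %*% t(Tmat))                      # TRUE: differ by the rotation t(Tmat)
all.equal(t(Z2_pre)  %*% M1 %*% Z2_pre,  diag(3))           # TRUE
all.equal(t(Z2_post) %*% M1 %*% Z2_post, diag(3))           # TRUE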

See Huynh (2024) for more details.

References