This function implements the robust counterpart of the GLLiM model and should be applied when outliers are present in the data.
The SLLiM model implemented in this function addresses the following non-linear mapping problem:
$$ E(Y | X=x) = g(x),$$
where \(Y\) is an L-vector of multivariate responses and \(X\) is a large D-vector of covariate profiles such that \(D \gg L\). The methods implemented in this package aim at estimating the non-linear regression function \(g\).
First, the methods of this package are based on an inverse regression strategy. The inverse conditional relation \(p(X | Y)\) is specified in such a way that the forward relation of interest \(p(Y | X)\) can be deduced in closed form. Under some hypotheses on the covariance structures, the large number \(D\) of covariates is handled by this inverse regression trick, which acts as a dimension reduction technique. The number of parameters to estimate is therefore drastically reduced. Second, we propose to approximate the non-linear regression function \(g\) by a piecewise affine function. To this end, a hidden discrete variable \(Z\) is introduced in order to divide the space into \(K\) regions such that an affine model holds between the responses \(Y\) and the covariates \(X\) in each region \(k\):
$$X = \sum_{k=1}^K I_{Z=k} (A_k Y + b_k + E_k)$$
where \(A_k\) is a \(D \times L\) matrix of coefficients for regression \(k\), \(b_k\) is a D-vector of intercepts and \(E_k\) is a noise term with covariance matrix proportional to \(\Sigma_k\).
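As an illustration of this piecewise affine inverse model, the following minimal R sketch (not part of the package; dimensions and coefficients are arbitrary, and Gaussian noise stands in for the Student errors \(E_k\)) draws a synthetic sample with \(K=2\) regions:

## Minimal sketch: simulate X = A_Z Y + b_Z + E_Z with K = 2 regions
set.seed(1)
N <- 200; D <- 3; L <- 1; K <- 2
Z <- sample(1:K, N, replace = TRUE)               # hidden region labels
Y <- matrix(rnorm(L * N), nrow = L)               # L x N responses
A <- list(matrix(1, D, L), matrix(-2, D, L))      # region-specific D x L slopes
b <- list(rep(0, D), rep(3, D))                   # region-specific intercepts
X <- sapply(1:N, function(i)                      # D x N covariates
  A[[Z[i]]] %*% Y[, i, drop = FALSE] + b[[Z[i]]] + 0.1 * rnorm(D))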
SLLiM is defined as the following hierarchical generalized Student mixture model for the inverse conditional density \(p(X | Y)\):
$$p(X=x | Y=y,Z=k; \theta,\phi) = S(x; A_k y+b_k,\Sigma_k,\alpha_k^x,\gamma_k^x)$$
$$p(Y=y | Z=k; \theta,\phi) = S(y; c_k,\Gamma_k,\alpha_k,1)$$
$$p(Z=k | \phi)=\pi_k$$
where \((\theta,\phi)\) are the sets of parameters \(\theta=(c_k,\Gamma_k,A_k,b_k,\Sigma_k)_{k=1}^K\) and \(\phi=(\pi_k,\alpha_k)_{k=1}^K\). In the previous expressions, \(\alpha_k\) and \((\alpha_k^x,\gamma_k^x)\) determine the heaviness of the tails of the generalized Student distribution, which gives the model its robustness. Note that \(\alpha_k^x=\alpha_k + L/2\) and \(\gamma_k^x=1 + \frac{1}{2} \delta(y,c_k,\Gamma_k)\), where \(\delta(y,c_k,\Gamma_k) = (y-c_k)^\top \Gamma_k^{-1} (y-c_k)\) is the squared Mahalanobis distance.
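For reference, the generalized Student distribution \(S(\cdot\,; \mu, \Sigma, \alpha, \gamma)\) used above is the Gaussian scale mixture obtained by integrating \(N(\mu, \Sigma/u)\) against a Gamma\((\alpha, \gamma)\) weight on \(u\) (rate parametrization). A minimal R sketch of its density (not exported by the package; the function name is illustrative):

## Density of S(y; mu, Sigma, alpha, gamma), i.e. the Gamma(alpha, rate = gamma)
## scale mixture of N(mu, Sigma/u); delta is the squared Mahalanobis distance.
dgenstudent <- function(y, mu, Sigma, alpha, gamma) {
  M <- length(mu)
  delta <- as.numeric(t(y - mu) %*% solve(Sigma) %*% (y - mu))
  logdens <- lgamma(alpha + M / 2) - lgamma(alpha) -
    (M / 2) * log(2 * pi * gamma) -
    0.5 * as.numeric(determinant(Sigma)$modulus) -
    (alpha + M / 2) * log(1 + delta / (2 * gamma))
  exp(logdens)
}
dgenstudent(c(0, 0), mu = c(0, 0), Sigma = diag(2), alpha = 2, gamma = 1)

Setting \(\alpha = \gamma = \nu/2\) recovers the usual multivariate Student distribution with \(\nu\) degrees of freedom, and letting \(\alpha = \gamma \to \infty\) gives back the Gaussian case.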
The forward conditional density of interest can be deduced from these equations and is also a Student mixture of regressions model.
Like gllim, sllim allows the addition of \(L_w\) latent variables in order to account for correlation among covariates or when responses are assumed to be only partially observed. Adding latent factors is known to improve prediction accuracy, provided \(L_w\) is not too large with respect to the number of covariates. When latent factors are added, the dimension of the response is \(L=L_t+L_w\); otherwise \(L=L_t\).
For SLLiM, the number of parameters to estimate is:
$$(K-1)+ K(1+DL+D+L_t+ nbpar_{\Sigma}+nbpar_{\Gamma})$$
where \(L=L_w+L_t\) and \(nbpar_{\Sigma}\) (resp. \(nbpar_{\Gamma}\)) is the number of parameters in each of the large (resp. small) covariance matrices \(\Sigma_k\) (resp. \(\Gamma_k\)). For example:
if the constraint on \(\Sigma_k\) is cstr$Sigma="i", then \(nbpar_{\Sigma}=1\), which is the default constraint in the gllim function;
if the constraint on \(\Sigma_k\) is cstr$Sigma="d", then \(nbpar_{\Sigma}=D\);
if the constraint on \(\Sigma_k\) is cstr$Sigma="", then \(nbpar_{\Sigma}=D(D+1)/2\);
if the constraint on \(\Sigma_k\) is cstr$Sigma="*", then \(nbpar_{\Sigma}=D(D+1)/(2K)\).
The rule to compute the number of parameters of \(\Gamma_k\) is the same as for \(\Sigma_k\), replacing \(D\) by \(L_t\). Currently the \(\Gamma_k\) matrices are not constrained and \(nbpar_{\Gamma}=L_t(L_t+1)/2\), because for identifiability reasons the \(L_w\) part is set to the identity matrix.
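This count is straightforward to compute; the helper below (not part of the package, the function name is illustrative) simply transcribes the formula and the cstr$Sigma cases above:

## Illustrative helper transcribing the parameter count above; cstr_Sigma
## mirrors the cstr$Sigma options "i", "d", "" and "*".
nbpar_sllim <- function(K, D, Lt, Lw = 0, cstr_Sigma = "i") {
  L <- Lt + Lw
  nbpar_Sigma <-
    if (cstr_Sigma == "i") 1 else
    if (cstr_Sigma == "d") D else
    if (cstr_Sigma == "*") D * (D + 1) / (2 * K) else
    D * (D + 1) / 2                    # "": full covariance matrices
  nbpar_Gamma <- Lt * (Lt + 1) / 2     # Gamma_k unconstrained on the Lt block
  (K - 1) + K * (1 + D * L + D + Lt + nbpar_Sigma + nbpar_Gamma)
}
nbpar_sllim(K = 5, D = 50, Lt = 2, Lw = 1)   # isotropic Sigma_k (default): 1039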
The user must choose the number of mixtures components \(K\) and, if needed, the number of latent factors \(L_w\). For small datasets (less than 100 observations), we suggest to select both \((K,L_w)\) by minimizing the BIC criterion. For larger datasets, to save computation time, we suggest to set \(L_w\) using BIC while setting \(K\) to an arbitrary value large enough to catch non linear relations between responses and covariates and small enough to have several observations (at least 10) in each clusters. Indeed, for large datasets, the number of clusters should not have a strong impact on the results while it is sufficiently large.
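As a sketch of this selection strategy, reusing the synthetic data of the first sketch (tapp = Y, responses; yapp = X, covariates). The argument order (responses first, covariates second) and the components LLf (final log-likelihood) and nbpar (parameter count) of the fitted model are assumed here to match the package's gllim documentation; check them against your installed version:

## Sketch: select (K, Lw) by minimizing BIC = -2*LLf + log(N)*nbpar.
library(xLLiM)
tapp <- Y; yapp <- X; N <- ncol(yapp)
grid <- expand.grid(K = 2:6, Lw = 0:1)
grid$BIC <- apply(grid, 1, function(g) {
  mod <- sllim(tapp, yapp, in_K = g["K"], Lw = g["Lw"])
  -2 * mod$LLf + log(N) * mod$nbpar
})
grid[which.min(grid$BIC), ]   # retained (K, Lw)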