Computes the log-likelihood of observed continuous data under a Latent Profile Analysis (LPA) model with multivariate normal distributions within each latent profile. Implements robust numerical techniques to handle near-singular covariance matrices.
get.Log.Lik.LPA(response, P.Z, means, covs, jitter = 1e-10)

A single numeric value representing the total log-likelihood: $$\log \mathcal{L} = \sum_{n=1}^N \log \left[ \sum_{l=1}^L \pi_l \cdot \mathcal{N}(\mathbf{x}_n \mid \boldsymbol{\mu}_l, \boldsymbol{\Sigma}_l) \right]$$
where \(\mathcal{N}(\cdot)\) denotes the multivariate normal density function.
A numeric matrix of dimension \(N \times I\) containing continuous observations. Rows represent observations, columns represent variables. Missing values are not permitted.
A numeric vector of length \(L\) containing prior probabilities for latent profiles. Must satisfy:
\(\sum_{l=1}^L \pi_l = 1\)
\(\pi_l > 0\) for all \(l = 1, \dots, L\)
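As an illustration, the two constraints above can be checked before calling the function. The helper name `check_P.Z` is hypothetical, not part of the package:

```r
## Hypothetical helper: validate the profile prior P.Z before use.
check_P.Z <- function(P.Z, tol = 1e-8) {
  stopifnot(is.numeric(P.Z),
            all(P.Z > 0),                # pi_l > 0 for all l
            abs(sum(P.Z) - 1) < tol)     # probabilities sum to 1
  invisible(TRUE)
}

check_P.Z(c(0.5, 0.3, 0.2))   # passes silently
```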
A matrix of dimension \(L \times I\) where row \(l\) contains the mean vector \(\boldsymbol{\mu}_l\) for profile \(l\).
An array of dimension \(I \times I \times L\) where slice \(l\) contains the covariance matrix \(\boldsymbol{\Sigma}_l\) for profile \(l\). Must be symmetric positive semi-definite.
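A sketch of how such an array can be built, here with \(I = 3\) variables and \(L = 2\) profiles (the specific covariance values are illustrative only):

```r
## Illustrative construction of the covs array: I x I x L,
## one symmetric covariance slice per latent profile.
I <- 3; L <- 2
covs <- array(0, dim = c(I, I, L))
covs[, , 1] <- diag(I)                       # profile 1: identity covariance
covs[, , 2] <- 0.5 * diag(I) + 0.1           # profile 2: constant off-diagonals
covs[, , 2] <- (covs[, , 2] + t(covs[, , 2])) / 2   # enforce symmetry
```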
A small positive constant (default: 1e-10) added to diagonal elements of covariance matrices to ensure numerical stability during Cholesky decomposition.
The log-likelihood calculation follows these steps:
Covariance Stabilization: Each covariance matrix \(\boldsymbol{\Sigma}_l\) is symmetrized as \((\boldsymbol{\Sigma}_l + \boldsymbol{\Sigma}_l^\top)/2\). If Cholesky decomposition fails:
Add jitter to diagonal elements iteratively (up to 10 attempts, scaling jitter by 10x each attempt)
Fall back to a diagonal covariance matrix if decomposition still fails
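The stabilization step can be sketched as follows (an illustration of the retry logic described above, not the package's exact code):

```r
## Sketch: retry Cholesky with growing jitter, then fall back to a
## diagonal covariance if all attempts fail.
safe_chol <- function(Sigma, jitter = 1e-10, max_tries = 10) {
  Sigma <- (Sigma + t(Sigma)) / 2                  # symmetrize
  for (k in seq_len(max_tries)) {
    R <- tryCatch(chol(Sigma), error = function(e) NULL)
    if (!is.null(R)) return(R)
    diag(Sigma) <- diag(Sigma) + jitter            # add jitter to diagonal
    jitter <- jitter * 10                          # scale jitter by 10x
  }
  chol(diag(diag(Sigma)))                          # diagonal fallback
}
```

For a well-conditioned input `safe_chol` reduces to a plain `chol()` call; the jitter path only triggers for (near-)singular matrices such as `matrix(1, 2, 2)`.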
Profile-Specific Density for observation \(n\) in profile \(l\): $$\log f(\mathbf{x}_n \mid Z_n=l) = -\frac{I}{2}\log(2\pi) - \frac{1}{2}\log|\boldsymbol{\Sigma}_l| - \frac{1}{2}(\mathbf{x}_n - \boldsymbol{\mu}_l)^\top \boldsymbol{\Sigma}_l^{-1} (\mathbf{x}_n - \boldsymbol{\mu}_l)$$ Computed efficiently using Cholesky decomposition \(\boldsymbol{\Sigma}_l = \mathbf{R}^\top\mathbf{R}\) where applicable.
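This density step can be written compactly given an upper-triangular Cholesky factor \(\mathbf{R}\) with \(\boldsymbol{\Sigma}_l = \mathbf{R}^\top\mathbf{R}\) (the factor base R's `chol()` returns). The function name below is illustrative:

```r
## Sketch: log multivariate normal density from a Cholesky factor R,
## where Sigma = t(R) %*% R.
dmvnorm_log_chol <- function(x, mu, R) {
  I <- length(x)
  log_det <- 2 * sum(log(diag(R)))          # log|Sigma| from diag(R)
  ## Solve t(R) z = (x - mu); then sum(z^2) is the quadratic form
  z <- backsolve(R, x - mu, transpose = TRUE)
  -0.5 * I * log(2 * pi) - 0.5 * log_det - 0.5 * sum(z^2)
}
```

Working through the triangular system avoids forming \(\boldsymbol{\Sigma}_l^{-1}\) explicitly, which is both faster and numerically safer.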
Joint Probability for observation \(n\) and profile \(l\): $$\log[\pi_l \cdot f(\mathbf{x}_n \mid Z_n=l)] = \log(\pi_l) + \log f(\mathbf{x}_n \mid Z_n=l)$$ The term \(\log(\pi_l)\) is evaluated as \(\log(\pi_l + 10^{-12})\) to guard against taking the logarithm of a zero probability.
Marginal Likelihood per observation using log-sum-exp trick for numerical stability: $$\log f(\mathbf{x}_n) = a_{\max} + \log\left( \sum_{l=1}^L \exp\left\{ \log[\pi_l \cdot f(\mathbf{x}_n \mid Z_n=l)] - a_{\max} \right\} \right)$$ where \(a_{\max} = \max_l \log[\pi_l \cdot f(\mathbf{x}_n \mid Z_n=l)]\).
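The log-sum-exp trick in isolation: subtracting the maximum before exponentiating keeps `exp()` from underflowing to zero when all joint log-probabilities are strongly negative.

```r
## Sketch: numerically stable log(sum(exp(a))).
log_sum_exp <- function(a) {
  a_max <- max(a)                       # a_max in the formula above
  a_max + log(sum(exp(a - a_max)))
}

log_sum_exp(c(-1000, -1001))   # finite, whereas log(sum(exp(c(-1000, -1001)))) is -Inf
```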
Total Log-Likelihood: Sum of \(\log f(\mathbf{x}_n)\) across all observations \(n=1,\dots,N\).
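The steps above can be assembled into an end-to-end sketch. This is an illustration of the computation, not the package source; it assumes well-conditioned covariances so a plain `chol()` suffices, whereas the real function adds the jitter/fallback logic described in step 1:

```r
## Sketch of the full LPA log-likelihood: N observations (rows of
## response), I variables, L profiles.
loglik_lpa_sketch <- function(response, P.Z, means, covs) {
  N <- nrow(response); I <- ncol(response); L <- length(P.Z)
  log_joint <- matrix(0, N, L)                  # log[pi_l * f(x_n | Z_n = l)]
  for (l in seq_len(L)) {
    Sigma <- (covs[, , l] + t(covs[, , l])) / 2 # symmetrize, then factor
    R <- chol(Sigma)
    d <- sweep(response, 2, means[l, ])         # x_n - mu_l, row-wise
    z <- backsolve(R, t(d), transpose = TRUE)   # quadratic forms via chol
    log_dens <- -0.5 * I * log(2 * pi) - sum(log(diag(R))) - 0.5 * colSums(z^2)
    log_joint[, l] <- log(P.Z[l] + 1e-12) + log_dens
  }
  a_max <- apply(log_joint, 1, max)             # log-sum-exp per observation
  sum(a_max + log(rowSums(exp(log_joint - a_max))))
}
```

With a single standard-normal profile this reduces to the usual independent-normal log-likelihood, which provides a quick sanity check.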