The algorithm jointly estimates the sources from multiple dependent datasets using their observed mixtures. The estimation is done by maximizing the independence between the sources. Several source density models are provided as options.
NewtonIVA(X, source_density = "laplace", student_df = 1,
          init = "default", max_iter = 1024, eps = 1e-6, W_init = NA,
          step_size = 1, step_size_min = 0.1, alpha = 0.9, verbose = FALSE)
X: numeric data array containing the observed mixtures with dimension [P, N, D], where P is the dimension of the observed datasets, N is the number of observations and D is the number of datasets. The number of datasets D must be at least 2. Missing values are not allowed.
source_density: string determining which source density model is used. The options are "laplace", "laplace_diag", "gaussian" or "student". For more information, see the Details section.
student_df: integer. The degrees of freedom for the multivariate Student's distribution. Used only if source_density = "student".
init: string determining how the algorithm is initialized. The options are "default", "IVA-G+fastIVA", "IVA-G", "fastIVA" or "none". For more information, see the Details section.
max_iter: positive integer defining the maximum number of iterations the algorithm is allowed to run. If max_iter is reached, the unmixing matrices of the last iteration are used.
eps: convergence tolerance. When the convergence measure is smaller than eps, the algorithm stops.
W_init: numeric array of dimension [P, P, D] containing the initial unmixing matrices. If not set, they are initialized with identity matrices. A call sketch showing how to supply this argument follows the argument descriptions.
step_size: initial step size for the Newton step; should be between 0 and 1. The default is 1.
step_size_min: the minimum step size.
alpha: multiplier determining how much the step size is decreased when the convergence measure is not decreasing.
verbose: logical. If TRUE, the convergence measure is printed during the learning process.
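As an illustration of the tuning arguments and of supplying W_init, a minimal sketch; all values below are arbitrary examples rather than recommendations, and X is assumed to be a [P, N, D] mixture array as described above:
# Explicit tolerances, step size control and progress output
res <- NewtonIVA(X, source_density = "laplace", max_iter = 2000, eps = 1e-7,
                 step_size = 1, step_size_min = 0.05, alpha = 0.8,
                 verbose = TRUE)
# Supplying explicit initial unmixing matrices (identity matrix for each dataset)
P <- dim(X)[1]; D <- dim(X)[3]
W0 <- array(0, c(P, P, D))
for (d in 1:D) W0[, , d] <- diag(P)
res_init <- NewtonIVA(X, W_init = W0)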
An object of class "iva" containing the following components:
The estimated source signals with dimension [P, N, D]. The estimated source signals are zero mean with unit variance.
The estimated unmixing matrices with dimension [P, P, D].
The estimated unmixing matrices with dimension [P, P, D] for the whitened data.
The whitening matrices with dimension [P, P, D].
The means of each observed mixture with dimension [P, D].
The number of iterations the algorithm ran.
Logical value indicating whether the algorithm converged.
The source density model used.
The number of observations.
The number of datasets.
The number of sources.
The degrees of freedom for Student's source density model.
The function call.
The name of the variable containing the observed mixtures.
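The exact component names of the returned object are easiest to inspect from a fitted result; a minimal sketch, assuming res is the value of a NewtonIVA() call as in the Examples:
res <- NewtonIVA(X, source_density = "gaussian")
class(res)                # "iva"
names(res)                # component names corresponding to the list above
str(res, max.level = 1)   # dimensions of each component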
The algorithm uses a Newton update together with a decoupling trick to estimate the multivariate source signals from their observed mixtures. The datasets, i.e. the elements of the multivariate source signals, should be dependent on each other to obtain estimates where the sources are aligned in the same order for each dataset. If the datasets are not dependent, the sources can still be separated but are not necessarily aligned. The algorithm does not assume the unmixing matrices to be orthogonal. For more on the nonorthogonal Newton-update-based IVA algorithm, see Anderson, M. et al. (2011) and Anderson, M. (2013).
The source density model should be selected to match the density of the true source signals. When source_density = "laplace", the multivariate Laplace source density model is used. This is the most flexible choice, as it takes both second-order and higher-order dependence into account.
When source_density = "laplace_diag"
, the multivariate Laplace source density model with diagonal covariance structure is used. Multivariate diagonal Laplace source density model should be considered only when the sources are mainly higher-order dependent. It works best when the number of sources is significantly less than the number of datasets.
When source_density = "gaussian"
the multivariate Gaussian source density model is used. This is the superior choice in terms of computation power and should be used when the sources are mostly second-order dependent.
When source_density = "student"
the multivariate Student's source density model is used. Multivariate Student's source density model should be considered only when the sources are mainly higher-order dependent. It works best when the number of sources is significantly less than the number of datasets.
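As an illustration of how this choice maps to code, a minimal sketch fitting the same mixtures X under each source density model; which model is appropriate depends on the dependence structure of the true sources:
res_lap  <- NewtonIVA(X, source_density = "laplace")       # second- and higher-order dependence
res_diag <- NewtonIVA(X, source_density = "laplace_diag")  # mainly higher-order dependence
res_gaus <- NewtonIVA(X, source_density = "gaussian")      # mainly second-order dependence
res_stud <- NewtonIVA(X, source_density = "student",
                      student_df = 3)                      # heavy-tailed sources; df value is just an example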
The init parameter defines how the algorithm is initialized. When init = "default", the default initialization is used. By default the algorithm is initialized using init = "IVA-G+fastIVA" when source_density is "laplace", "laplace_diag" or "student", and using init = "none" when source_density = "gaussian".
When init = "IVA-G+fastIVA"
, the algorithm is initialized using first the estimated unmixing matrices of IVA-G, which is NewtonIVA
with source_density = "gaussian"
, to initialize fastIVA
algorithm. Then the estimated unmixing matrices W
of fastIVA
are used as initial unmixing matrices for NewtonIVA
. IVA-G is used to solve the permutation problem of aligning the source estimates when ever the true sources are second-order dependent. If the true sources are not second-order dependent, fastIVA
is used as backup as it solves the permutation problem more regularly than NewtonIVA
when the sources are purely higher-order dependent. When the sources possess any second-order dependence, IVA-G also speeds the computation time up a lot. This option should be used whenever there is no prior information about the sources and source_density
is either "laplace"
, "laplace_diag"
or "student"
.
When init = "IVA-G"
, the estimated unmixing matrices of IVA-G are used to initialize this algorithm. This option should be used if the true sources are expected to possess any second-order dependence and source_density
is not "gaussian"
.
When init = "fastIVA"
, the estimated unmixing matrices of fastIVA
algorithm is used to initialize this algorithm. This option should be used if the true sources are expected to possess only higher-order dependence. For more details, see fastIVA
.
When init = "none"
, the unmixing matrices are initialized randomly from standard normal distribution.
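A minimal sketch of the initialization options described above, assuming X is a mixture array as in the Examples:
res_def  <- NewtonIVA(X, source_density = "laplace")                    # init = "default" (IVA-G+fastIVA here)
res_gf   <- NewtonIVA(X, source_density = "laplace", init = "IVA-G+fastIVA")
res_g    <- NewtonIVA(X, source_density = "laplace", init = "IVA-G")    # expect second-order dependence
res_f    <- NewtonIVA(X, source_density = "laplace", init = "fastIVA")  # expect only higher-order dependence
res_none <- NewtonIVA(X, source_density = "laplace", init = "none")     # random standard normal start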
The algorithm assumes that the observed signals are multivariate, i.e. the number of datasets is D >= 2. The estimated signals are zero mean and scaled to unit variance.
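A quick numerical check of this scaling, as a sketch; the component name S for the estimated sources is an assumption here (verify with names(res)):
res <- NewtonIVA(X)
round(apply(res$S, c(1, 3), mean), 3)  # P x D matrix of source means, close to 0
round(apply(res$S, c(1, 3), var), 3)   # P x D matrix of source variances, close to 1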
Anderson, M., Adalı, T., & Li, X.-L. (2011). Joint blind source separation with multivariate Gaussian model: Algorithms and performance analysis. IEEE Transactions on Signal Processing, 60, 1672–1683. <doi:10.1109/TSP.2011.2181836>
Anderson, M. (2013). Independent vector analysis: Theory, algorithms, and applications. PhD dissertation, University of Maryland, Baltimore County.
Liang, Y., Chen, G., Naqvi, S., & Chambers, J. A. (2013). Independent vector analysis with multivariate Student's t-distribution source prior for speech separation. Electronics Letters, 49, 1035–1036. <doi:10.1049/el.2013.1999>
if (require("LaplacesDemon")) {
  # Generate sources from a multivariate Laplace distribution
  P <- 4; N <- 1000; D <- 4
  S <- array(NA, c(P, N, D))
  for (i in 1:P) {
    U <- array(rnorm(D * D), c(D, D))
    Sigma <- crossprod(U)
    S[i, , ] <- rmvl(N, rep(0, D), Sigma)
  }
  # Generate mixing matrices from the standard normal distribution
  A <- array(rnorm(P * P * D), c(P, P, D))
  # Generate the observed mixtures
  X <- array(NaN, c(P, N, D))
  for (d in 1:D) {
    X[, , d] <- A[, , d] %*% S[, , d]
  }
  # Estimate the sources and unmixing matrices with the Gaussian density model
  res_G <- NewtonIVA(X, source_density = "gaussian")
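  # Sketch of a sanity check (assumption: the estimated unmixing matrices are
  # returned in a component named W; confirm with names(res_G)). For a good
  # separation, W[, , d] %*% A[, , d] should be close to a scaled permutation
  # matrix for each dataset d.
  round(res_G$W[, , 1] %*% A[, , 1], 2)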
}