The input data is assumed to be a non-negative tensor. NTD decomposes the tensor into a dense core tensor (S) and low-dimensional factor matrices (A).
NTD(X, M=NULL, pseudocount=1e-10, initS=NULL, initA=NULL, fixS=FALSE,
fixA=FALSE, L1_A=1e-10, L2_A=1e-10, rank = c(3, 3, 3), modes = 1:3,
algorithm = c("Frobenius", "KL", "IS", "Pearson", "Hellinger", "Neyman",
"HALS", "Alpha", "Beta", "NMF"), init = c("NMF", "ALS", "Random"),
nmf.algorithm = c("Frobenius", "KL", "IS", "Pearson", "Hellinger",
"Neyman", "Alpha", "Beta", "PGD", "HALS", "GCD", "Projected", "NHR",
"DTPP", "Orthogonal", "OrthReg"),
Alpha = 1,
Beta = 2, thr = 1e-10, num.iter = 100, num.iter2 = 10, viz = FALSE,
figdir = NULL, verbose = FALSE)
The input tensor which has I1, I2, and I3 dimensions.
The mask tensor which has I1, I2, and I3 dimensions. If the mask tensor has missing values, specify the element as 0 (otherwise 1).
The pseudo count to avoid zero division, when the element is zero (Default: 1e-10).
The initial values of core tensor which has J1, J2, and J3 dimensions (Default: NULL).
A list containing the initial values of multiple factor matrices (A_k, <Ik*Jk>, k=1..K, Default: NULL).
Whether the core tensor S is updated in each iteration step (Default: FALSE).
Whether the factor matrices Ak are updated in each iteration step (Default: FALSE).
Parameter for L1 regularization (Default: 1e-10). This also works as a small positive constant to prevent division by zero, so it should not be set to 0.
Parameter for L2 regularization (Default: 1e-10).
The number of low dimensions in each mode (J1, J2, J3; J1 < I1, J2 < I2, J3 < I3) (Default: c(3,3,3)).
The vector of the modes on which to perform the decomposition (Default: 1:3 <all modes>).
NTD algorithms. "Frobenius", "KL", "IS", "Pearson", "Hellinger", "Neyman", "HALS", "Alpha", "Beta", "NMF" are available (Default: "Frobenius").
NMF algorithms, used when the algorithm is "NMF". "Frobenius", "KL", "IS", "Pearson", "Hellinger", "Neyman", "Alpha", "Beta", "PGD", "HALS", "GCD", "Projected", "NHR", "DTPP", "Orthogonal", and "OrthReg" are available (Default: "Frobenius").
The initialization algorithms. "NMF", "ALS", and "Random" are available (Default: "NMF").
The parameter of Alpha-divergence (Default: 1).
The parameter of Beta-divergence (Default: 2).
When the error change rate falls below thr, the iteration is terminated (Default: 1E-10).
The number of iteration steps (Default: 100).
The number of NMF iteration steps, used when the algorithm is "NMF" (Default: 10).
If viz == TRUE, the internally reconstructed tensor is visualized (Default: FALSE).
The directory for saving the figures, used when viz == TRUE (Default: NULL).
If verbose == TRUE, the error change rate is printed to the console window (Default: FALSE).
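As a sketch of how the mask tensor M described above might be used (assuming the nnTensor package, which provides NTD and toyModel, and the rTensor package for as.tensor), missing entries are marked with 0 in M; the chosen missing positions here are purely illustrative:

```r
library("nnTensor")
library("rTensor")

# Toy non-negative tensor bundled with nnTensor
X <- toyModel(model = "Tucker")

# Hypothetical mask: mark ~10% of randomly chosen entries as missing
# (0 = missing, 1 = observed)
M_arr <- array(1, dim = dim(X))
set.seed(1234)
M_arr[sample(length(M_arr), length(M_arr) %/% 10)] <- 0
M <- as.tensor(M_arr)

# Observed entries contribute to TrainRecError, masked ones to TestRecError
out <- NTD(X, M = M, rank = c(2, 2, 2), init = "Random", num.iter = 2)
```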
S : Tensor object, which is defined as an S4 class of the rTensor package.
A : A list containing three factor matrices.
RecError : The reconstruction error between the data tensor and the tensor reconstructed from S and A.
TrainRecError : The reconstruction error calculated on the training set (observed values specified by M).
TestRecError : The reconstruction error calculated on the test set (missing values specified by M).
RelChange : The relative change of the error.
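A minimal sketch of working with the returned components (assuming the nnTensor package; recTensor is assumed to be the helper nnTensor exports for rebuilding a tensor from S and A):

```r
library("nnTensor")

X <- toyModel(model = "Tucker")
out <- NTD(X, rank = c(2, 2, 2), init = "Random", num.iter = 2)

out$S                   # core tensor (rTensor Tensor object)
length(out$A)           # three factor matrices, one per mode
tail(out$RelChange, 1)  # relative error change at the last iteration

# Rebuild the approximated tensor from S and A
X_hat <- recTensor(S = out$S, A = out$A)
```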
Yong-Deok Kim et al., (2007). Nonnegative Tucker Decomposition. IEEE Conference on Computer Vision and Pattern Recognition
Yong-Deok Kim et al., (2008). Nonnegative Tucker Decomposition With Alpha-Divergence. IEEE International Conference on Acoustics, Speech and Signal Processing
Anh Huy Phan, (2008). Fast and efficient algorithms for nonnegative Tucker decomposition. Advances in Neural Networks - ISNN2008
Anh Huy Phan et al., (2011). Extended HALS algorithm for nonnegative Tucker decomposition and its applications for multiway analysis and classification. Neurocomputing
# NOT RUN {
tensordata <- toyModel(model = "Tucker")
out <- NTD(tensordata, rank=c(2,2,2), algorithm="Frobenius",
init="Random", num.iter=2)
# }