Supervised tensor decomposition with interactive side information on multiple modes. Main function in the package. The function takes a response tensor, multiple side information matrices, and a desired Tucker rank as input. The output is a rank-constrained M-estimate of the core tensor and factor matrices.
tensor_regress(
tsr,
X_covar1 = NULL,
X_covar2 = NULL,
X_covar3 = NULL,
core_shape,
niter = 20,
cons = c("non", "vanilla", "penalty"),
lambda = 0.1,
alpha = 1,
solver = "CG",
dist = c("binary", "poisson", "normal"),
traj_long = FALSE,
initial = c("random", "QR_tucker")
)
a list containing the following:
W
a list of orthogonal factor matrices - one for each mode, with the number of columns given by core_shape
G
an array, core tensor with the size specified by core_shape
C_ts
an array, the coefficient tensor, i.e. the Tucker product of G, A, B, C
U
an array, the linear predictor, i.e. the Tucker product of C_ts, X_covar1, X_covar2, X_covar3
lglk
a vector containing the log-likelihood at convergence
sigma
a scalar, estimated error variance (for Gaussian tensor) or dispersion parameter (for Bernoulli and Poisson tensors)
violate
a vector indicating whether each iteration violates the max-norm constraint on the linear predictor; 1 indicates a violation
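The coefficient tensor and linear predictor above are Tucker (mode-wise) products. As a rough illustration, a mode-k product can be sketched in base R as follows; the helper `ttm` is hypothetical and for illustration only, not the package's implementation:

```r
# Hypothetical mode-k product of a 3-way array with a matrix
# (illustration only; not the package's internal code).
ttm <- function(tsr, mat, k) {
  d <- dim(tsr)
  perm <- c(k, setdiff(1:3, k))                      # bring mode k to the front
  unfolded <- matrix(aperm(tsr, perm), nrow = d[k])  # mode-k unfolding
  res <- mat %*% unfolded                            # multiply along mode k
  d[k] <- nrow(mat)
  aperm(array(res, dim = d[perm]), order(perm))      # fold back
}

# C_ts as the Tucker product of a core G with factor matrices A, B, C
G <- array(rnorm(3 * 3 * 3), dim = c(3, 3, 3))
A <- matrix(rnorm(20 * 3), 20, 3)
B <- matrix(rnorm(20 * 3), 20, 3)
C <- matrix(rnorm(20 * 3), 20, 3)
C_ts <- ttm(ttm(ttm(G, A, 1), B, 2), C, 3)
dim(C_ts)  # 20 20 20
```

Applying the same chain of mode products to C_ts with the side information matrices gives the linear predictor U.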
tsr
response tensor with 3 modes
X_covar1
side information on the first mode
X_covar2
side information on the second mode
X_covar3
side information on the third mode
core_shape
the Tucker rank of the tensor decomposition
niter
maximum number of iterations if the update does not converge
cons
the constraint method: "non" for no constraint, "vanilla" for a global scale-down at each iteration, "penalty" for adding a log-barrier penalty to the objective function
lambda
penalty coefficient for the "penalty" constraint
alpha
max-norm constraint on the linear predictor
solver
solver for the objective function when using the "penalty" constraint; see "Details"
dist
distribution of the response tensor; see "Details"
traj_long
if "TRUE", set the minimal iteration number to 8; if "FALSE", set the minimal iteration number to 0
initial
initialization of the alternating optimization: "random" for random initialization, "QR_tucker" for deterministic initialization using Tucker decomposition
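The "vanilla" option can be pictured as a global rescaling of the linear predictor whenever its entrywise max norm exceeds alpha. A minimal base-R sketch, assuming this behavior (the helper `scale_down` is hypothetical, and the package's exact update may differ):

```r
# Hypothetical sketch of the "vanilla" constraint: if the linear predictor U
# exceeds the max-norm bound alpha, scale the whole array down uniformly
# (assumed behavior, for illustration only).
scale_down <- function(U, alpha = 1) {
  m <- max(abs(U))
  if (m > alpha) U * (alpha / m) else U
}

U <- array(rnorm(2 * 2 * 2, sd = 3), dim = c(2, 2, 2))
max(abs(scale_down(U, alpha = 1)))  # at most 1
```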
The "penalty" constraint adds a log-barrier regularizer to the general objective function (the negative log-likelihood). The main function uses a solver in the function "optim" to minimize the objective function; the argument "solver" is passed to the argument "method" in "optim".
"dist" specifies one of three distributions for the response tensor: binary, poisson, or normal. If "dist" is set to "normal" and "initial" is set to "QR_tucker", then the function returns the results right after initialization.
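As an illustration of a log-barrier penalized objective handed to "optim", the sketch below uses a toy Gaussian negative log-likelihood and an assumed barrier term `-lambda * sum(log(alpha^2 - u^2))`; the exact objective used by the package may differ:

```r
# Toy Gaussian negative log-likelihood in the linear predictor u
neg_loglik <- function(u, y) sum((y - u)^2) / 2

# Assumed log-barrier penalty keeping |u| < alpha entrywise
# (illustration only; not the package's exact objective)
barrier_obj <- function(u, y, lambda = 0.1, alpha = 1) {
  if (max(abs(u)) >= alpha) return(Inf)  # outside the barrier
  neg_loglik(u, y) - lambda * sum(log(alpha^2 - u^2))
}

y <- c(0.3, -0.2, 0.5)
fit <- optim(rep(0, 3), barrier_obj, y = y, method = "CG")
max(abs(fit$par))  # stays below alpha = 1
```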
seed = 34
dist = 'binary'
data = sim_data(seed, whole_shape = c(20, 20, 20), core_shape = c(3, 3, 3),
                p = c(5, 5, 5), dist = dist, dup = 5, signal = 4)
re = tensor_regress(data$tsr[[1]], data$X_covar1, data$X_covar2, data$X_covar3,
                    core_shape = c(3, 3, 3), niter = 10, cons = 'non',
                    dist = dist, initial = "random")