
TRES (version 1.1.5)

square: Square simulated data

Description

Synthetic data generated from tensor predictor regression (TPR) model. Each response observation is univariate, and each predictor observation is a matrix.

Usage

data("square")

Format

A list consisting of four components:

x

A 32×32×200 tensor; each matrix x@data[,,i] is one predictor observation.

y

A 1×200 matrix; each entry is one response observation.

coefficients

A 32×32×1 tensor with a square pattern.

Gamma

A list consisting of two 32×2 envelope basis matrices.
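
The component layout can be checked directly in R (a quick sketch; the x@data indexing above indicates that x and coefficients are stored as rTensor "Tensor" objects):

```r
library(TRES)
data("square")
dim(square$x@data)             # predictor tensor, documented as 32 x 32 x 200
dim(square$y)                  # response matrix, documented as 1 x 200
dim(square$coefficients@data)  # coefficient tensor, documented as 32 x 32 x 1
sapply(square$Gamma, dim)      # two envelope bases, each documented as 32 x 2
```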

Details

The dataset is generated from the tensor predictor regression (TPR) model

  Y_i = B_(m+1) vec(X_i) + ϵ_i,  i = 1, ..., n,

where n = 200 and the regression coefficient B ∈ R^(32×32) is a given rank-2 image with a square pattern. All elements of the coefficient matrix B are either 0.1 or 1. To make the model conform to the envelope structure, the envelope bases Γ_k and the covariance matrices Σ_k, k = 1, 2, of the predictor X are constructed as follows. From the singular value decomposition of B, namely B = Γ_1 Λ Γ_2^T, the envelope bases are chosen as Γ_k ∈ R^(32×2), k = 1, 2, so the envelope dimensions are u_1 = u_2 = 2. With Ω_k = I_2 and Ω_0k = 0.01 I_30, k = 1, 2, the covariance matrices are generated as Σ_k = Γ_k Ω_k Γ_k^T + Γ_0k Ω_0k Γ_0k^T, followed by normalization with their Frobenius norms. The predictor X_i is then generated from the two-way tensor (matrix) normal distribution TN(0; Σ_1, Σ_2), and the error term ϵ_i is generated from the standard normal distribution.
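
The construction above can be sketched in R. This is a minimal illustration, not the package's generating script: the placement of the square in B and the seed are assumptions (the exact coefficient image ships as square$coefficients), and the matrix-normal draw uses the standard identity X = Σ_1^(1/2) Z Σ_2^(1/2) with Z a matrix of i.i.d. standard normals.

```r
set.seed(1)
p <- 32; n <- 200

# Illustrative rank-2 coefficient matrix with a square pattern:
# entries are 0.1 outside the square and 1 inside (the block location
# is an assumption; the package ships the exact B).
B <- matrix(0.1, p, p)
B[10:23, 10:23] <- 1

# Envelope bases from the SVD of B; envelope dimensions u1 = u2 = 2.
sv <- svd(B)
Gamma1 <- sv$u[, 1:2]; Gamma2 <- sv$v[, 1:2]
Gamma01 <- qr.Q(qr(Gamma1), complete = TRUE)[, 3:p]  # orthogonal complement
Gamma02 <- qr.Q(qr(Gamma2), complete = TRUE)[, 3:p]

# Sigma_k = Gamma_k I_2 Gamma_k^T + 0.01 Gamma_0k I_30 Gamma_0k^T,
# normalized by its Frobenius norm.
make_Sigma <- function(G, G0) {
  S <- G %*% t(G) + 0.01 * (G0 %*% t(G0))
  S / norm(S, type = "F")
}
Sigma1 <- make_Sigma(Gamma1, Gamma01)
Sigma2 <- make_Sigma(Gamma2, Gamma02)

# Symmetric square root via the eigendecomposition.
sqrtm <- function(S) { e <- eigen(S); e$vectors %*% (sqrt(e$values) * t(e$vectors)) }
S1h <- sqrtm(Sigma1); S2h <- sqrtm(Sigma2)

# Draw X_i ~ TN(0; Sigma1, Sigma2) and Y_i = <B, X_i> + eps_i.
x <- array(0, c(p, p, n)); y <- numeric(n)
for (i in 1:n) {
  Xi <- S1h %*% matrix(rnorm(p * p), p, p) %*% S2h
  x[, , i] <- Xi
  y[i] <- sum(B * Xi) + rnorm(1)
}
```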

References

Zhang, X. and Li, L., 2017. Tensor envelope partial least-squares regression. Technometrics, 59(4), pp.426-436.

Examples

library(TRES)
## Fit the square dataset with the tensor predictor regression model
data("square")
x <- square$x
y <- square$y
# Model fitting with ordinary least squares.
fit_std <- TPR.fit(x, y, method = "standard")
# Draw the coefficient plot.
plot(fit_std)
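
Beyond the ordinary least squares fit, TPR.fit also accepts envelope-based estimators. A hedged sketch (the method name "PLS" is taken from the TPR.fit documentation, and u = c(2, 2) matches the envelope dimensions u_1 = u_2 = 2 described in Details):

```r
library(TRES)
data("square")
# Envelope-based tensor predictor regression fit.
fit_pls <- TPR.fit(square$x, square$y, u = c(2, 2), method = "PLS")
plot(fit_pls)
```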
