Computes functional linear regression between a functional explanatory variable \(\tilde{X}(t)\) and a scalar response \(Y\) using penalized Principal Components (PPC) or penalized Partial Least Squares (PPLS), where \(\tilde{X}(t)=MX(t)\) with \(M=(I+\lambda P)^{-1}\). $$Y=\big<\tilde{X},\beta\big>+\epsilon=\int_{T}{\tilde{X}(t)\beta(t)dt}+\epsilon$$
where \( \big< \cdot , \cdot \big>\) denotes the inner product on \(L_2\) and \(\epsilon\) are random errors with mean zero, finite variance \(\sigma^2\), and \(E[\tilde{X}(t)\epsilon]=0\).
fregre.ppc(fdataobj, y, l = NULL, lambda = 0, P = c(0, 0, 1), ...)
fregre.ppls(fdataobj, y = NULL, l = NULL, lambda = 0, P = c(0, 0, 1), ...)
Arguments:
fdataobj: fdata class object.
y: Scalar response with length n.
l: Index of components to include in the model.
lambda: Amount of penalization. Default value is 0, i.e. no penalization is used.
P: If P is a vector, its entries are the coefficients that define the penalty matrix object. The default P = c(0, 0, 1) penalizes the second derivative (curvature or acceleration). If P is a matrix, it is used directly as the penalty matrix object.
...: Further arguments passed to or from other methods.
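A minimal usage sketch, assuming the fda.usc package and its tecator example dataset are available (the variable names below are illustrative):

```r
# Load the package and the tecator spectrometric data
library(fda.usc)
data(tecator)
absorp <- tecator$absorp.fdata   # functional explanatory variable X(t)
fat <- tecator$y$Fat             # scalar response Y

# Penalized PC regression: components 1-4, curvature penalty P = c(0, 0, 1)
res.ppc <- fregre.ppc(absorp, fat, l = 1:4, lambda = 1, P = c(0, 0, 1))
summary(res.ppc)

# Penalized PLS regression with the same penalty
res.ppls <- fregre.ppls(absorp, fat, l = 1:4, lambda = 1, P = c(0, 0, 1))
```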
Return:
- The matched call.
- Estimated beta coefficient of class fdata.
- A named vector of coefficients.
- Estimated scalar response (fitted values).
- Residuals: y minus fitted values.
- Hat matrix.
- The residual degrees of freedom.
- Coefficient of determination.
- GCV criterion.
- Residual variance.
- Index of components included in the model.
- Amount of shrinkage.
- Fitted object from the fdata2ppc (or fdata2ppls) function.
- Fitted object from the lm function.
- Functional explanatory data.
- Scalar response.
The function computes the orthonormal basis \(\left\{\nu_k\right\}_{k=1}^{\infty}\) of functional PC (or PLS) to represent the functional data as \(\tilde{X}_i(t)=\sum_{k=1}^{\infty}\gamma_{ik}\nu_k\), where \(\tilde{X}=MX\) with \(M=(I+\lambda P)^{-1}\) and \(\gamma_{ik}=\big<\tilde{X}_i(t),\nu_k\big>\).
The functional penalized PC are calculated in the fdata2ppc function.
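The penalized PC basis can also be computed directly; a sketch, assuming fda.usc and the tecator data are available (argument names follow the package conventions and may vary by version):

```r
library(fda.usc)
data(tecator)
X <- tecator$absorp.fdata

# Penalized principal components of X with a curvature penalty
ppc <- fdata2ppc(X, ncomp = 3, lambda = 1, P = c(0, 0, 1))
```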
The functional PLS (FPLS) algorithm maximizes the covariance between \(\tilde{X}(t)\) and the scalar response \(Y\) via the partial least squares (PLS) components. The functional penalized PLS are calculated in fdata2ppls by an alternative formulation of the NIPALS algorithm proposed by Kraemer and Sugiyama (2011).
Let \(\left\{\tilde{\nu}_k\right\}_{k=1}^{\infty}\) be the functional PLS components, so that \(\tilde{X}_i(t)=\sum_{k=1}^{\infty}\tilde{\gamma}_{ik}\tilde{\nu}_k\) and \(\beta(t)=\sum_{k=1}^{\infty}\tilde{\beta}_k\tilde{\nu}_k\). The functional linear model is then estimated by: $$ \hat{y}=\big< \tilde{X},\hat{\beta} \big> \approx \sum_{k=1}^{k_n}\tilde{\gamma}_{k}\tilde{\beta}_k $$
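The components of the fitted object described in the Return section can be inspected directly; a sketch, assuming fda.usc and the tecator data (component names such as beta.est follow the usual fda.usc return conventions and may differ by version):

```r
library(fda.usc)
data(tecator)
X <- tecator$absorp.fdata
y <- tecator$y$Fat

res <- fregre.ppc(X, y, l = 1:3, lambda = 10, P = c(0, 0, 1))

plot(res$beta.est)        # estimated beta(t), an fdata object
head(res$fitted.values)   # estimated scalar response
head(res$residuals)       # y minus fitted values
</imports>
```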
Preda, C. and Saporta, G. (2005). PLS regression on a stochastic process. Computational Statistics & Data Analysis, 48, 149-158.
Kraemer, N. and Sugiyama, M. (2011). The Degrees of Freedom of Partial Least Squares Regression. Journal of the American Statistical Association, 106, 697-705.
Febrero-Bande, M., Oviedo de la Fuente, M. (2012). Statistical Computing in Functional Data Analysis: The R Package fda.usc. Journal of Statistical Software, 51(4), 1-28. http://www.jstatsoft.org/v51/i04/
See also: P.penalty, fregre.ppc.cv and fregre.ppls.cv.
Alternative methods: fregre.pc and fregre.pls.