The functional penalized PC (or PLS) regression model between a functional explanatory variable $X(t)$ and a scalar response $Y$ is
$$Y = \big< \tilde{X}, \beta \big> + \epsilon$$
where $\big< \cdot, \cdot \big>$ denotes the inner product on $L_2$ and $\epsilon$ are random errors with mean zero, finite variance $\sigma^2$ and $E[\tilde{X}(t)\epsilon]=0$.

Usage:

fregre.ppc(fdataobj, y, l = NULL, lambda = 0, P = c(0, 0, 1), ...)
fregre.ppls(fdataobj, y = NULL, l = NULL, lambda = 0, P = c(0, 0, 1), ...)
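As an illustrative sketch of a call (the tecator dataset, absorp.fdata, and Fat are standard fda.usc objects; lambda = 100 is an arbitrary choice for illustration, not a recommended value):

```r
# Sketch: predict fat content from absorbance curves with penalized PC/PLS
library(fda.usc)
data(tecator)
x <- tecator$absorp.fdata   # functional explanatory variable X(t)
y <- tecator$y$Fat          # scalar response

# Penalized PC regression, penalizing curvature (P = c(0, 0, 1))
res.ppc  <- fregre.ppc(x, y, lambda = 100, P = c(0, 0, 1))
# Penalized PLS regression with the same penalty
res.ppls <- fregre.ppls(x, y, lambda = 100, P = c(0, 0, 1))
summary(res.ppc)
```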
Arguments:

fdataobj: fdata class object.
y: Scalar response with length n.
l: Index of components to include in the model.
lambda: Amount of penalization. Default value is 0, i.e. no penalization is used.
P: If P is a vector, its entries are the coefficients that define the penalty matrix object; by default P = c(0, 0, 1) penalizes the second derivative (curvature or acceleration). If P is a matrix, P is the penalty matrix object itself.

Value:

Return an object as in the fregre.pls function, including:
fdataobj: fdata class object.
residuals: y minus fitted values.
fdata.comp: Fitted object from the fdata2pls function.
lm: Fitted object from the lm function.

Details:

The function computes the orthonormal basis $\{\nu_k\}_{k=1}^{\infty}$ of functional PC (or PLS) to represent the functional data as $X_i(t) = \sum_{k=1}^{\infty} \gamma_{ik} \nu_k$, where $\tilde{X} = MX$ with $M = (I + \lambda P)^{-1}$ and $\gamma_{ik} = \big< \tilde{X}_i(t), \nu_k \big>$.
The functional penalized PC are calculated in fdata2ppc.
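The smoothing operator $M = (I + \lambda P)^{-1}$ from the Details above can be sketched in base R; the explicit second-difference construction below is an assumed stand-in for the package's P.penalty helper, which builds such matrices from coefficient vectors like P = c(0, 0, 1):

```r
# Sketch (base R, assumed construction): a second-derivative penalty matrix P
# and the smoothing operator M = (I + lambda * P)^{-1} from the formula above.
m <- 21
D2 <- diff(diag(m), differences = 2)  # second-difference operator, (m-2) x m
Pmat <- t(D2) %*% D2                  # penalty matrix, penalizes curvature
lambda <- 10
M <- solve(diag(m) + lambda * Pmat)   # M = (I + lambda * P)^{-1}
# The penalized (smoothed) curves are then tilde(X) = M %*% X
```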
The functional PLS (FPLS) algorithm maximizes the covariance between $\tilde{X}(t)$ and the scalar response $Y$ via the partial least squares (PLS) components. The functional penalized PLS are calculated in fdata2ppls by an alternative formulation of the NIPALS algorithm proposed by Kraemer and Sugiyama (2011).
Let $\{\nu_k\}_{k=1}^{\infty}$ be the functional PLS components, with $X_i(t) = \sum_{k=1}^{\infty} \gamma_{ik} \nu_k$ and $\beta(t) = \sum_{k=1}^{\infty} \beta_k \nu_k$. The functional linear model is estimated by: $$ \hat{y} = \big< \tilde{X}, \hat{\beta} \big> \approx \sum_{k=1}^{k_n} \tilde{\gamma}_{k} \tilde{\beta}_k $$
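The truncated estimator above can be mimicked in base R on simulated data; this sketch uses ordinary (unpenalized) PC scores from prcomp as a stand-in for the penalized components, and all names and sizes are illustrative assumptions:

```r
# Sketch: estimate a functional linear model through k_n principal components
set.seed(1)
n <- 50; m <- 30
X <- matrix(rnorm(n * m), n, m)          # discretized curves X_i(t)
beta <- sin(seq(0, pi, length.out = m))  # true coefficient function beta(t)
y <- drop(X %*% beta) / m + rnorm(n, sd = 0.1)

kn <- 3                                  # number of retained components k_n
pc <- prcomp(X, center = TRUE)
scores <- pc$x[, 1:kn]                   # gamma_{ik} = <X_i, nu_k>
fit <- lm(y ~ scores)                    # estimates beta_k on the PC basis
yhat <- fitted(fit)                      # yhat_i = sum_k gamma_{ik} * betahat_k
```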
References:

Kraemer, N., Sugiyama, M. (2011). The Degrees of Freedom of Partial Least Squares Regression. Journal of the American Statistical Association, 106, 697-705.

Febrero-Bande, M., Oviedo de la Fuente, M. (2012). Statistical Computing in Functional Data Analysis: The R Package fda.usc. Journal of Statistical Software, 51(4), 1-28. http://www.jstatsoft.org/v51/i04/
See Also:

See also: P.penalty, fregre.ppc.cv and fregre.ppls.cv. Alternative methods: fregre.pc and fregre.pls.