
fda.usc (version 1.1.0)

fregre.ppc, fregre.ppls: Functional Penalized PC (or PLS) regression with scalar response

Description

Computes functional linear regression between a functional explanatory variable $\tilde{X}(t)$ and a scalar response $Y$ using penalized Principal Components (PPC) or penalized Partial Least Squares (PPLS), where $\tilde{X}(t)=MX(t)$ with $M=(I+\lambda P)^{-1}$: $$Y=\big<\tilde{X},\beta\big>+\epsilon=\int_{T}\tilde{X}(t)\beta(t)\,dt+\epsilon$$ where $\big< \cdot , \cdot \big>$ denotes the inner product on $L_2$ and $\epsilon$ are random errors with mean zero, finite variance $\sigma^2$ and $E[\tilde{X}(t)\epsilon]=0$.

Usage

fregre.ppc(fdataobj, y, l = NULL, lambda = 0, P = c(0, 0, 1), ...)
fregre.ppls(fdataobj, y = NULL, l = NULL, lambda = 0, P = c(0, 0, 1), ...)

Arguments

fdataobj
fdata class object.
y
Scalar response of length n.
l
Index of components to include in the model.
lambda
Amount of penalization. The default value is 0, i.e. no penalization is applied.
P
If P is a vector, its entries are the coefficients that define the penalty matrix object; the default P = c(0, 0, 1) penalizes the second derivative (curvature or acceleration). If P is a matrix, P is used as the penalty matrix object.
...
Further arguments passed to or from other methods.
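To build intuition for what P = c(0, 0, 1) and lambda do, the following base-R sketch constructs a second-difference (curvature) penalty matrix and the smoothing operator $M=(I+\lambda P)^{-1}$ on a discretized curve. This is only an illustration of the idea; fda.usc builds the penalty internally (see P.penalty), and the grid size, lambda value, and toy curve here are arbitrary choices.

```r
# Curvature penalty implied by P = c(0, 0, 1): a second-difference
# operator D2 gives the penalty matrix t(D2) %*% D2 (illustrative sketch).
m <- 7                                 # number of discretization points (arbitrary)
D2 <- diff(diag(m), differences = 2)   # (m-2) x m second-difference matrix
Pen <- t(D2) %*% D2                    # m x m curvature penalty matrix
lambda <- 0.5                          # arbitrary penalization amount
M <- solve(diag(m) + lambda * Pen)     # smoothing operator (I + lambda*P)^{-1}
x <- sin(seq(0, pi, length.out = m))   # a toy discretized curve X(t)
x_tilde <- as.vector(M %*% x)          # penalized version of the curve
```

Applying M shrinks the curve toward smoothness: the penalized curve x_tilde never has larger curvature (in the sense of the penalty) than x itself.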

Value

Returns an object with the following components:
  • call: The matched call.
  • beta.est: Estimated beta coefficient, of class fdata.
  • coefficients: A named vector of coefficients.
  • fitted.values: Estimated scalar response.
  • residuals: y minus fitted.values.
  • H: Hat matrix.
  • df: The residual degrees of freedom.
  • r2: Coefficient of determination.
  • GCV: GCV criterion.
  • sr2: Residual variance.
  • l: Index of components included in the model.
  • rn: Amount of shrinkage.
  • fdata.comp: Fitted object from the fdata2ppc (or fdata2ppls) function.
  • lm: Fitted lm object.
  • fdataobj: Functional explanatory data.
  • y: Scalar response.

Details

The function computes the orthonormal basis $\left\{\nu_k\right\}_{k=1}^{\infty}$ of functional PC (or PLS) components to represent the functional data as $\tilde{X}_i(t)=\sum_{k=1}^{\infty}\gamma_{ik}\nu_k$, where $\tilde{X}=MX$ with $M=(I+\lambda P)^{-1}$ and $\gamma_{ik}=\Big< \tilde{X}_i(t),\nu_k\Big>$. The functional penalized PCs are calculated in fdata2ppc. The functional PLS (FPLS) algorithm maximizes the covariance between $\tilde{X}(t)$ and the scalar response $Y$ via the partial least squares (PLS) components. The functional penalized PLS components are calculated in fdata2ppls by an alternative formulation of the NIPALS algorithm proposed by Kraemer and Sugiyama (2011). Let $\left\{\tilde{\nu}_k\right\}_{k=1}^{\infty}$ be the functional PLS components, with $\tilde{X}_i(t)=\sum_{k=1}^{\infty}\tilde{\gamma}_{ik}\tilde{\nu}_k$ and $\beta(t)=\sum_{k=1}^{\infty}\tilde{\beta}_k\tilde{\nu}_k$. The functional linear model is then estimated by: $$\hat{y}=\big< \tilde{X},\hat{\beta} \big> \approx \sum_{k=1}^{k_n}\tilde{\gamma}_{k}\tilde{\beta}_k$$
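The estimation idea above (regress $y$ on the leading component scores $\gamma_{ik}$, then map the fitted $\tilde{\beta}_k$ back to $\beta(t)$) can be sketched in base R for the unpenalized PC case, i.e. lambda = 0. This is not the fda.usc implementation (which uses fdata2ppc/fdata2ppls followed by lm); the simulated curves, grid, and number of components k are arbitrary choices for illustration.

```r
# Sketch of PC-based functional regression: y-hat = sum_k gamma_k * beta_k.
set.seed(1)
n <- 50; m <- 30
tt <- seq(0, 1, length.out = m)
# Toy functional data: random sine/cosine curves plus noise (rows = curves).
X <- t(replicate(n, sin(2 * pi * tt) * rnorm(1) +
                    cos(2 * pi * tt) * rnorm(1) + rnorm(m, sd = 0.1)))
# Toy scalar response from a known coefficient function sin(pi*t).
y <- as.vector(X %*% sin(pi * tt) / m + rnorm(n, sd = 0.05))
pc <- prcomp(X, center = TRUE)       # unpenalized PCs (lambda = 0 case)
k <- 2                               # number of components retained
scores <- pc$x[, 1:k]                # gamma_ik: scores on the first k PCs
fit <- lm(y ~ scores)                # beta_k estimated by least squares
beta_t <- pc$rotation[, 1:k] %*% coef(fit)[-1]   # beta(t) on the grid
```

The fitted values coef(fit)[1] + scores %*% coef(fit)[-1] correspond to the truncated expansion $\sum_{k=1}^{k_n}\tilde{\gamma}_{k}\tilde{\beta}_k$, and beta_t is the discretized analogue of beta.est returned by fregre.ppc.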

References

Preda, C. and Saporta, G. (2005). PLS regression on a stochastic process. Computational Statistics & Data Analysis, 48, 149-158.

Kraemer, N. and Sugiyama, M. (2011). The Degrees of Freedom of Partial Least Squares Regression. Journal of the American Statistical Association, 106, 697-705.

Febrero-Bande, M. and Oviedo de la Fuente, M. (2012). Statistical Computing in Functional Data Analysis: The R Package fda.usc. Journal of Statistical Software, 51(4), 1-28. http://www.jstatsoft.org/v51/i04/

See Also

See also: P.penalty, fregre.ppc.cv and fregre.ppls.cv. Alternative methods: fregre.pc and fregre.pls.

Examples

data(tecator)
x <- tecator$absorp.fdata
y <- tecator$y$Fat
res <- fregre.ppc(x, y, l = 1:8)
summary(res)
res2 <- fregre.ppls(x, y, l = 1:8)
summary(res2)
