Fit an additive frailty model using semiparametric penalized likelihood estimation or parametric estimation. The main issue in a meta-analysis study is how to account for the heterogeneity between trials and between the treatment effects across trials. Additive models are proportional hazards models with two correlated random trial effects that act either multiplicatively on the hazard function or in interaction with the treatment, which allows studying, for instance, meta-analytic or multicentric datasets. Right-censored data are allowed, but left-truncated data are not. A stratified analysis is possible (maximum number of strata = 2). This approach is different from the shared frailty models.

In an additive model, the hazard function for the jth subject in the ith trial, with random trial effect u_i and random treatment-by-trial interaction v_i, is:

    lambda_ij(t | u_i, v_i) = lambda_0(t) exp(u_i + v_i * X_ij1 + beta' * X_ij)

where lambda_0(t) is the baseline hazard function, X_ij1 is the treatment indicator for the jth subject of the ith trial, X_ij is the vector of covariates with regression coefficients beta, and u_i ~ N(0, sigma^2), v_i ~ N(0, tau^2) with cov(u_i, v_i) = rho * sigma * tau (the covariance is estimated only when correlation = TRUE).
additivePenal(formula, data, correlation = FALSE, recurrentAG =
FALSE, cross.validation = FALSE, n.knots, kappa, maxit = 350, hazard =
"Splines", nb.int, LIMparam = 1e-4, LIMlogl = 1e-4, LIMderiv = 1e-3,
print.times = TRUE)
An additive model or more generally an object of class 'additivePenal'. Methods defined for 'additivePenal' objects are provided for print, plot and summary.
the sequence of estimated parameters: the spline coefficients, the random effect variances and the regression coefficients.
The code used for fitting the model.
the regression coefficients.
covariance between the two frailty terms
Logical value. Is cross validation procedure used for estimating the smoothing parameters in the penalized likelihood estimation?
Logical value. Are the random effects correlated?
degrees of freedom associated with the "kappa".
the formula part of the code used for the model.
the maximum number of groups used in the fit.
A vector with the smoothing parameters in the penalized likelihood estimation corresponding to each baseline function as components.
the complete marginal penalized log-likelihood in the semiparametric case.
the marginal log-likelihood in the parametric case.
the number of observations used in the fit.
the number of events observed in the fit.
number of iterations needed to converge.
number of knots for estimating the baseline functions.
number of stratum.
the corresponding correlation coefficient for the two frailty terms.
Variance for the random intercept (the random effect associated to the baseline hazard functions).
Variance for the random slope (the random effect associated to the treatment effect across trials).
the variance matrix of all parameters before the positivity constraint transformation (Sigma2, Tau2, the regression coefficients and the spline coefficients). The delta method is then needed to obtain the variances of the estimated parameters.
the robust estimation of the variance matrix of all parameters (Sigma2, Tau2, the regression coefficients and the spline coefficients).
The variance of the estimates of "sigma2".
The variance of the estimates of "tau2".
Variance of the estimates of "cov".
matrix of times where both survival and hazard functions are estimated. By default seq(0,max(time),length=99), where time is the vector of survival times.
array (dim=3) of hazard estimates and confidence bands.
array (dim=3) of baseline survival estimates and confidence bands.
The value of the median survival and its confidence bands. If there are two or more strata, the first value corresponds to the first stratum, etc.
Type of hazard functions (0:"Splines", 1:"Piecewise", 2:"Weibull").
Type of Piecewise hazard functions (1:"percentile", 0:"equidistant").
Number of intervals (between 1 and 20) for the parametric hazard functions ("Piecewise-per", "Piecewise-equi").
number of parameters.
number of explanatory variables.
indicator of explanatory variable.
the approximated likelihood cross-validation criterion in the semiparametric case (with H minus the converged Hessian matrix, and l(.) the full log-likelihood).
the Akaike information criterion for the parametric case.
initial value for the number of knots.
shape parameter for the Weibull hazard function.
scale parameter for the Weibull hazard function.
martingale residuals for each cluster.
empirical Bayes prediction of the first frailty term.
empirical Bayes prediction of the second frailty term.
linear predictor: Beta'X + u_i + v_i * X_1 in the additive frailty models.
a vector with the values of each multivariate Wald test.
a vector with the degree of freedom for each multivariate Wald test.
a binary variable equal to 0 when no multivariate Wald test is given, 1 otherwise.
a vector with the p_values for each global multivariate Wald test.
Names of the "as.factor" variables.
vector of the values that factor might have taken.
type of contrast for factor variable.
p-values of the Wald test for the estimated regression coefficients.
a formula object, with the response on the left of a ~ operator and the terms on the right. The response must be a survival object as returned by the Surv function. The slope() function is required. Interactions are possible using * or :.
a 'data.frame' with the variables used in 'formula'.
Logical value. Are the random effects correlated? If so, the correlation coefficient is estimated. The default is FALSE.
Always FALSE for additive models (left-truncated data are not allowed).
Logical value. Is cross validation procedure used for estimating smoothing parameter in the penalized likelihood estimation? If so a search of the smoothing parameter using cross validation is done, with kappa as the seed. The cross validation is not implemented for two strata. The default is FALSE.
integer giving the number of knots to use. Value required in the penalized likelihood estimation. It corresponds to the (n.knots+2) splines functions for the approximation of the hazard or the survival functions. Number of knots must be between 4 and 20. (See Note)
positive smoothing parameter in the penalized likelihood estimation. In a stratified additive model, this argument must be a vector with one kappa per stratum. Kappa is the coefficient of the integral of the squared second derivative of the hazard function in the fit. To obtain an initial value for kappa, a solution is to fit the corresponding shared frailty model using cross validation (see cross.validation). We advise the user to identify several possible tuning parameters, note their defaults and look at the sensitivity of the results to varying them. Value required. (See Note)
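As suggested above, an initial value for kappa can be obtained by fitting the corresponding shared frailty model with cross-validation. A minimal sketch, assuming frailtypack's frailtyPenal function and the dataAdditive dataset shipped with the package (the kappa component of the fitted object is an assumption about where the cross-validated value is stored):

```r
# Sketch only: obtain a starting value for kappa via cross-validation
# on the corresponding shared frailty model (frailtyPenal is the
# shared frailty fitter in frailtypack; dataAdditive ships with it).
library(frailtypack)
data(dataAdditive)

modShared <- frailtyPenal(Surv(t1, t2, event) ~ cluster(group) + var1,
                          data = dataAdditive, n.knots = 8,
                          kappa = 10000, cross.validation = TRUE)

# The cross-validated smoothing parameter (assumed stored in the
# fitted object) can then seed kappa in the additivePenal() call.
modShared$kappa
```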
maximum number of iterations for the Marquardt algorithm. Default is 350.
Type of hazard functions: "Splines" for semiparametric hazard functions with the penalized likelihood estimation, "Piecewise-per" for piecewise constant hazards functions using percentile, "Piecewise-equi" for piecewise constant hazard functions using equidistant intervals, "Weibull" for parametric Weibull functions. Default is "Splines".
Number of intervals (between 1 and 20) for the parametric hazard functions ("Piecewise-per", "Piecewise-equi").
Convergence threshold of the Marquardt algorithm for the parameters (see Details). Default is 1e-4.
Convergence threshold of the Marquardt algorithm for the log-likelihood (see Details). Default is 1e-4.
Convergence threshold of the Marquardt algorithm for the gradient (see Details). Default is 1e-3.
a logical parameter to print iteration process. Default is TRUE.
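For the parametric alternatives listed under the hazard argument, a hedged sketch of a Weibull fit (assuming, as in the splines example below, the dataAdditive dataset from frailtypack, and assuming n.knots and kappa may be omitted when hazard is not "Splines"):

```r
# Sketch only: parametric additive fit with a Weibull baseline hazard.
# n.knots and kappa are assumed unnecessary outside the "Splines" case.
library(frailtypack)
data(dataAdditive)

modWeib <- additivePenal(Surv(t1, t2, event) ~ cluster(group) +
                           var1 + slope(var1),
                         correlation = TRUE, data = dataAdditive,
                         hazard = "Weibull")
```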
The estimated parameters are obtained by maximizing the penalized log-likelihood, or the simple log-likelihood in the parametric case, using the robust Marquardt algorithm (Marquardt, 1963). The parameters are initialized with values obtained from a Cox proportional hazards model. The iterations stop when the difference between two consecutive log-likelihoods is small enough, the estimated coefficients are stable and the gradient is small enough (see the LIMparam, LIMlogl and LIMderiv thresholds).
V. Rondeau, Y. Mazroui and J. R. Gonzalez (2012). Frailtypack: An R package for the analysis of correlated survival data with frailty models using penalized likelihood estimation or parametric estimation. Journal of Statistical Software 47, 1-28.
V. Rondeau, S. Michiels, B. Liquet, and J. P. Pignon (2008). Investigating trial and treatment heterogeneity in an individual patient data meta-analysis of survival data by means of the maximum penalized likelihood approach. Statistics in Medicine, 27, 1894-1910.
slope
# \donttest{
###--- Additive model with 1 covariate ---###
data(dataAdditive)
modAdd <- additivePenal(Surv(t1, t2, event) ~ cluster(group) +
  var1 + slope(var1), correlation = TRUE,
  data = dataAdditive, n.knots = 8, kappa = 10000)
#-- var1 is a binary treatment variable
# }
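The print, plot and summary methods mentioned above can then be applied to the fitted object (modAdd from the example):

```r
# Methods provided for 'additivePenal' objects:
print(modAdd)    # coefficients, variance components, convergence info
summary(modAdd)  # estimates with confidence intervals
plot(modAdd)     # estimated baseline hazard with confidence bands
```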