
SemiParBIVProbit (version 3.3)

SemiParBIVProbit: Semiparametric Bivariate Probit Modelling

Description

SemiParBIVProbit can be used to fit bivariate probit models where the linear predictors of the two main equations can be flexibly specified using parametric and regression spline components. Several bivariate copula distributions are supported. During the model fitting process, the possible presence of correlation between the equations' error terms, endogeneity, non-random sample selection or partial observability is accounted for. Regression spline bases are extracted from the package mgcv. Multi-dimensional smooths are available via the use of penalized thin plate regression splines (isotropic). Note that, when it makes sense, the dependence parameter of the bivariate distribution employed can be specified as a function of covariates.

Usage

SemiParBIVProbit(formula, data = list(), weights = NULL, subset = NULL,  
                 start.v = NULL, Model = "B", method = "trwlF", 
                 BivD = "N", fp = FALSE, PL = "P", eqPL = "both", 
                 valPL = c(0,0), fitPL = "pLiksp", spPL = c(0.01,0.01), 
                 hess = TRUE, infl.fac = 1, 
                 rinit = 1, rmax = 100, iterlimsp = 50, pr.tolsp = 1e-6,
                 gc.l = FALSE, awlm = FALSE, parscale, extra.regI = "t")

Arguments

formula
A list of two formulas, one for equation 1 and the other for equation 2. s terms are used to specify smooth functions of predictors. SemiParBIVProbit supports the use of shrinkage smoothers for variable selection purposes (see the examples below). When it makes sense, a third formula for the dependence parameter can also be supplied (see Example 1).
data
An optional data frame, list or environment containing the variables in the model. If not found in data, the variables are taken from environment(formula), typically the environment from which SemiParBIVProbit is called.
weights
Optional vector of prior weights to be used in fitting.
subset
Optional vector specifying a subset of observations to be used in the fitting process.
start.v
Starting values for the parameters of the two equations and association coefficient can be provided here.
Model
It indicates the type of model to be used in the analysis. Possible values are "B" (bivariate model), "BSS" (bivariate model with non-random sample selection) and "BPO" (bivariate model with partial observability).
method
Estimation method used. It can be either "trwl" or "trwlF". The latter option is much faster and more stable when smoothing parameter estimation is employed, and is the default.
BivD
Type of bivariate error distribution employed. Possible choices are "N", "C0", "C90", "C180", "C270", "J0", "J90", "J180", "J270", "G0", "G90", "G180", "G270" and "F", which stand for bivariate normal, Clayton, rotated Clayton (90 degrees), survival Clayton (180 degrees), rotated Clayton (270 degrees), Joe, rotated Joe (90 degrees), survival Joe (180 degrees), rotated Joe (270 degrees), Gumbel, rotated Gumbel (90 degrees), survival Gumbel (180 degrees), rotated Gumbel (270 degrees) and Frank, respectively.
fp
If TRUE then a fully parametric model with unpenalised regression splines is fitted. See the example below.
PL
Link function to be employed. Possible choices are "P" (classic probit - default option), "PP" (power probit), "RPP" (reciprocal power probit) and "SN" (skew normal).
eqPL
Equation(s) on which the asymmetric link function approach should be applied. Possible choices are "both", "first" and "second".
valPL
Initial values for the link function shape parameters when an asymmetric link function approach is employed.
fitPL
Fitting approach for the shape parameters of the asymmetric link functions employed. Possible choices are "fixed", "unpLik", "pLik" and "pLiksp", which stand for fixed shape parameters, unpenalized likelihood estimation, penalized likelihood estimation with fixed smoothing parameters, and penalized likelihood estimation with automatic smoothing parameter selection.
spPL
Values for the smoothing parameters of the ridge penalties associated with the shape parameters of the asymmetric link functions.
hess
If FALSE then the expected/Fisher (rather than observed) information matrix is employed.
infl.fac
Inflation factor for the model degrees of freedom in the UBRE score. Smoother models can be obtained by setting this parameter to a value greater than 1.
rinit
Starting trust region radius. The trust region radius is adjusted as the algorithm proceeds. See the documentation of trust for further details.
rmax
Maximum allowed trust region radius. This may be set very large. If set small, the algorithm traces a steepest descent path.
iterlimsp
A positive integer specifying the maximum number of loops to be performed before the smoothing parameter estimation step is terminated.
pr.tolsp
Tolerance to use in judging convergence of the algorithm when automatic smoothing parameter estimation is used.
gc.l
This is relevant when working with big datasets. If TRUE then the garbage collector is called more often than when R decides to. This keeps the memory footprint down but it will slow down the routine.
awlm
If TRUE then the association parameter of the two equations is used in the construction of the working linear model needed for smoothing parameter estimation. This option makes sense only when method = "trwl".
parscale
The algorithm will operate as if optimizing objfun(x / parscale, ...). If missing then no rescaling is done. See the documentation of trust for more details.
extra.regI
If "t" then regularization as from trust is applied to the information matrix if needed. If different from "t" then extra regularization is applied via the options "pC" (pivoted Choleski - this only make

Value

  • The function returns an object of class SemiParBIVProbit as described in SemiParBIVProbitObject.

WARNINGS

In our experience, convergence failure may occur, for instance, when the number of observations is low, when there are important mistakes in the model specification (e.g., using "C90" when the model equations are positively associated), or when there are many smooth components in the model relative to the number of observations. Convergence failure may also mean that an infinite cycling between steps (i) and (ii) occurs (see Details). In this case, the smoothing parameters are set to the values obtained from the non-converged algorithm (sem.checks will give a warning). In such cases, we recommend re-specifying the model, using some extra regularisation (see extra.regI) and/or using some rescaling (see parscale).

The use of asymmetric links is not likely to lead to significant improvements as compared to using probit links. We recommend using this option as a sensitivity analysis tool at the end of the empirical analysis, rather than during the model building process. Also, the theoretical and numerical complexity introduced by asymmetric links means that convergence failures are more likely to occur and that data requirements are typically high.

In the contexts of endogeneity and non-random sample selection, it would not make much sense to specify the dependence parameter as a function of covariates. This is because, in these situations, the assumption is that the dependence parameter models the association between the unobserved confounders of the two equations. However, this option does make sense when it is believed that the association coefficient varies across geographical areas, for instance. If the interest is not in modelling unobserved confounding, then many specifications of the dependence parameter are possible and can be tried out.
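As a minimal, self-contained sketch of the regularisation advice above: the simulated data simply mimic Example 1 below, and the choice of the pivoted-Choleski option "pC" is purely illustrative, not a recommendation for any particular application.

library(SemiParBIVProbit)
library(mvtnorm)   # for rmvnorm(); loaded explicitly so the sketch is self-contained

set.seed(1)
n  <- 400
S  <- matrix(c(1, 0.5, 0.5, 1), 2, 2)
u  <- rmvnorm(n, rep(0, 2), S)                  # correlated errors for the two equations
x1 <- round(runif(n)); x2 <- runif(n)

y1  <- ifelse(-1.5 + 2*x1 + x2 + u[,1] > 0, 1, 0)
y2  <- ifelse(-0.3 -   x1 + x2 + u[,2] > 0, 1, 0)
dat <- data.frame(y1, y2, x1, x2)

## re-fit with extra regularisation of the information matrix
outR <- SemiParBIVProbit(list(y1 ~ x1 + s(x2),
                              y2 ~ x1 + s(x2)),
                         data = dat, extra.regI = "pC")
sem.checks(outR)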

Details

The bivariate models considered in this package have probit links for the two model equations, and model the association between the responses through the correlation parameter $\rho$ of a standardised bivariate normal distribution, or through the dependence parameter $\theta$ of a bivariate copula distribution. The linear predictors of the two equations are flexibly specified using parametric components and smooth functions of covariates, and the same can be done for the dependence parameter when it makes sense. Replacing the smooth components with their regression spline expressions yields a fully parametric bivariate probit model.

In principle, classic maximum likelihood estimation could be employed. However, to avoid overfitting, penalized likelihood maximization is employed instead. Here, the use of penalty matrices allows for the suppression of that part of smooth term complexity which has no support from the data. The trade-off between fit and smoothness is controlled by smoothing parameters associated with the penalty matrices. Smoothing parameters are chosen to minimize the approximate Un-Biased Risk Estimator (UBRE), which can also be viewed as an approximate AIC. Automatic smoothing parameter estimation is integrated using a performance-oriented iteration approach (Gu, 1992; Wood, 2004): at each iteration, (i) the penalized weighted least squares problem is solved and (ii) the smoothing parameters of that problem are estimated by approximate UBRE minimization. Steps (i) and (ii) are iterated until convergence. Details of the underlying fitting methods are given in Radice, Marra and Wojtys (submitted). Releases previous to 3.2-7 were based on the algorithms detailed in Marra and Radice (2011, 2013).
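In schematic terms, and using illustrative notation rather than the package's internal one (the symbols below are assumptions made for exposition only), the copula representation of the joint probability of two successes is

$$P(y_{1i} = 1, y_{2i} = 1) = C\big(\Phi(\eta_{1i}), \Phi(\eta_{2i}); \theta\big),$$

where $C$ is the chosen copula function, $\Phi$ the standard normal distribution function and $\eta_{1i}$, $\eta_{2i}$ the two additive predictors; the bivariate normal case corresponds to $\Phi_2(\eta_{1i}, \eta_{2i}; \rho)$. The quantity maximized at step (i) above is then a penalized log-likelihood of the form

$$\ell_p(\boldsymbol{\delta}) = \ell(\boldsymbol{\delta}) - \frac{1}{2}\, \boldsymbol{\delta}^{T} \Big( \sum_j \lambda_j \mathbf{S}_j \Big) \boldsymbol{\delta},$$

where $\boldsymbol{\delta}$ collects all regression coefficients, each $\mathbf{S}_j$ is the penalty matrix associated with one smooth term, and the smoothing parameters $\lambda_j \ge 0$, selected at step (ii) by approximate UBRE minimization, control the trade-off between fit and smoothness.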

References

Key References:

Gu C. (1992), Cross Validating Non-Gaussian Data. Journal of Computational and Graphical Statistics, 1(2), 169-179.
Marra G. (2013), On P-values for Semiparametric Bivariate Probit Models. Statistical Methodology, 10(1), 23-28.
Marra G. and Radice R. (2011), Estimation of a Semiparametric Recursive Bivariate Probit in the Presence of Endogeneity. Canadian Journal of Statistics, 39(2), 259-279.
Marra G. and Radice R. (2013), A Penalized Likelihood Estimation Approach to Semiparametric Sample Selection Binary Response Modeling. Electronic Journal of Statistics, 7, 1432-1455.
Marra G. and Radice R. (2015), Flexible Bivariate Binary Models for Estimating the Efficacy of Phototherapy for Newborns with Jaundice. International Journal of Statistics and Probability.
Marra G. and Wood S.N. (2011), Practical Variable Selection for Generalized Additive Models. Computational Statistics and Data Analysis, 55(7), 2372-2387.
Marra G. and Wood S.N. (2012), Coverage Properties of Confidence Intervals for Generalized Additive Model Components. Scandinavian Journal of Statistics, 39(1), 53-74.
Marra G., Radice R. and Filippou P. (submitted), Regression Spline Bivariate Probit Models: A Practical Approach to Testing for Exogeneity.
Marra G., Radice R. and Missiroli S. (2014), Testing the Hypothesis of Absence of Unobserved Confounding in Semiparametric Bivariate Probit Models. Computational Statistics, 29(3-4), 715-741.
McGovern M.E., Barnighausen T., Marra G. and Radice R. (2015), On the Assumption of Joint Normality in Selection Models: A Copula Approach Applied to Estimating HIV Prevalence. Epidemiology, 26(2), 229-237.
Poirier D.J. (1980), Partial Observability in Bivariate Probit Models. Journal of Econometrics, 12, 209-217.
Radice R., Marra G. and Wojtys M. (submitted), Copula Regression Spline Models for Binary Outcomes.
Wood S.N. (2004), Stable and Efficient Multiple Smoothing Parameter Estimation for Generalized Additive Models. Journal of the American Statistical Association, 99(467), 673-686.

See Also

AT, adjCov, est.prev, gt.bpm, LM.bpm, VuongClarke.bcm, plot.SemiParBIVProbit, SemiParBIVProbit-package, SemiParBIVProbitObject, sem.checks, summary.SemiParBIVProbit, predict.SemiParBIVProbit

Examples

library(SemiParBIVProbit)

############
## EXAMPLE 1
############
## Generate data
## Correlation between the two equations 0.5 - Sample size 400 

set.seed(0)

n <- 400

Sigma <- matrix(0.5, 2, 2); diag(Sigma) <- 1
u     <- rmvnorm(n, rep(0,2), Sigma)

x1 <- round(runif(n)); x2 <- runif(n); x3 <- runif(n)

f1   <- function(x) cos(pi*2*x) + sin(pi*x)
f2   <- function(x) x+exp(-30*(x-0.5)^2)   

y1 <- ifelse(-1.55 + 2*x1    + f1(x2) + u[,1] > 0, 1, 0)
y2 <- ifelse(-0.25 - 1.25*x1 + f2(x2) + u[,2] > 0, 1, 0)

dataSim <- data.frame(y1, y2, x1, x2, x3)

#
#

## CLASSIC BIVARIATE PROBIT

out  <- SemiParBIVProbit(list(y1 ~ x1 + x2 + x3, 
                              y2 ~ x1 + x2 + x3), 
                         data = dataSim)
sem.checks(out)
summary(out)
AIC(out)
BIC(out)

## SEMIPARAMETRIC BIVARIATE PROBIT

## "cr" cubic regression spline basis      - "cs" shrinkage version of "cr"
## "tp" thin plate regression spline basis - "ts" shrinkage version of "tp"
## for smooths of one variable, "cr/cs" and "tp/ts" achieve similar results 
## k is the basis dimension - default is 10
## m is the order of the penalty for the specific term - default is 2
## For COPULA models use BivD argument 

out  <- SemiParBIVProbit(list(y1 ~ x1 + s(x2, bs = "tp", k = 10, m = 2) + s(x3), 
                              y2 ~ x1 + s(x2) + s(x3)),  
                         data = dataSim)
sem.checks(out)
summary(out, cplot = TRUE)
AIC(out)


## estimated smooth function plots - red lines are true curves

x2 <- sort(x2)
f1.x2 <- f1(x2)[order(x2)] - mean(f1(x2))
f2.x2 <- f2(x2)[order(x2)] - mean(f2(x2))
f3.x3 <- rep(0, length(x3))

par(mfrow=c(2,2),mar=c(4.5,4.5,2,2))
plot(out, eq = 1, select = 1, scale = 0); lines(x2, f1.x2, col = "red")
plot(out, eq = 1, select = 2, scale = 0); lines(x3, f3.x3, col = "red")
plot(out, eq = 2, select = 1, scale = 0); lines(x2, f2.x2, col = "red")
plot(out, eq = 2, select = 2, scale = 0); lines(x3, f3.x3, col = "red")

#
## same plots, but with CIs that include the uncertainty about the 
## overall intercept (seWithMean = TRUE)

plot(out, eq = 1, select = 1, seWithMean = TRUE, scale = 0); lines(x2, f1.x2, col = "red")
plot(out, eq = 1, select = 2, seWithMean = TRUE, scale = 0); lines(x3, f3.x3, col = "red")
plot(out, eq = 2, select = 1, seWithMean = TRUE, scale = 0); lines(x2, f2.x2, col = "red")
plot(out, eq = 2, select = 2, seWithMean = TRUE, scale = 0); lines(x3, f3.x3, col = "red")


## p-values suggest dropping x3 from both equations, with stronger 
## evidence for eq. 2. This can also be achieved using shrinkage smoothers

outSS <- SemiParBIVProbit(list(y1 ~ x1 + s(x2, bs = "cs") + s(x3, bs = "cs"), 
                               y2 ~ x1 + s(x2, bs = "cs") + s(x3, bs = "cs")), 
                          data = dataSim)
sem.checks(outSS)                          

plot(outSS, eq = 1, select = 1, scale = 0)
plot(outSS, eq = 1, select = 2, ylim = c(-0.1,0.1))
plot(outSS, eq = 2, select = 1, scale = 0)
plot(outSS, eq = 2, select = 2, ylim = c(-0.1,0.1))


## SEMIPARAMETRIC BIVARIATE PROBIT with association parameter 
## depending on covariates as well

outD <- SemiParBIVProbit(list(y1 ~ x1 + s(x2), 
                              y2 ~ x1 + s(x2),
                                 ~ x1 + s(x2) ), 
                          data = dataSim)
sem.checks(outD)  
summary(outD)
outD$rho

plot(outD, eq = 1, seWithMean = TRUE)
plot(outD, eq = 2, seWithMean = TRUE)
plot(outD, eq = 3, seWithMean = TRUE)
graphics.off()

#
#

############
## EXAMPLE 2
############
## Generate data with one endogenous variable 
## and exclusion restriction

set.seed(0)

n <- 300

Sigma <- matrix(0.5, 2, 2); diag(Sigma) <- 1
u     <- rmvnorm(n, rep(0,2), Sigma)

cov   <- rmvnorm(n, rep(0,2), Sigma, method = "svd")
cov   <- pnorm(cov)
x1 <- round(cov[,1]); x2 <- cov[,2]

f1   <- function(x) cos(pi*2*x) + sin(pi*x)
f2   <- function(x) x+exp(-30*(x-0.5)^2)   

y1 <- ifelse(-1.55 + 2*x1    + f1(x2) + u[,1] > 0, 1, 0)
y2 <- ifelse(-0.25 - 1.25*y1 + f2(x2) + u[,2] > 0, 1, 0)

dataSim <- data.frame(y1, y2, x1, x2)

#

## Testing the hypothesis of absence of endogeneity... 

LM.bpm(list(y1 ~ x1 + s(x2), y2 ~ y1 + s(x2)), dataSim, Model = "B")

# p-value suggests presence of endogeneity, hence fit a bivariate model


## CLASSIC RECURSIVE BIVARIATE PROBIT

out <- SemiParBIVProbit(list(y1 ~ x1 + x2, 
                             y2 ~ y1 + x2), 
                        data = dataSim)
sem.checks(out)                        
summary(out)

## SEMIPARAMETRIC RECURSIVE BIVARIATE PROBIT

out <- SemiParBIVProbit(list(y1 ~ x1 + s(x2), 
                             y2 ~ y1 + s(x2)), 
                        data = dataSim)
sem.checks(out)                        
summary(out)

#

## Testing the hypothesis of absence of endogeneity post estimation... 

gt.bpm(out)

#
## average treatment effect with CI

mb(y1, y2, Model = "B")
AT(out, eq = 2, nm.bin = "y1") 
AT(out, eq = 2, nm.bin = "y1", naive = TRUE) 


## try a Clayton copula model... 

outC <- SemiParBIVProbit(list(y1 ~ x1 + s(x2), 
                              y2 ~ y1 + s(x2)), 
                         data = dataSim, BivD = "C0")
sem.checks(outC)                         
summary(outC)
AT(outC, eq = 2, nm.bin = "y1") 

## try a Joe copula model... 

outJ <- SemiParBIVProbit(list(y1 ~ x1 + s(x2), 
                              y2 ~ y1 + s(x2)), 
                         data = dataSim, BivD = "J0")
sem.checks(outJ)
summary(outJ, cplot = TRUE)
AT(outJ, eq = 2, nm.bin = "y1") 


VuongClarke.bcm(out, outJ)

#
## recursive bivariate probit modelling with unpenalized splines 
## can be achieved as follows

outFP <- SemiParBIVProbit(list(y1 ~ x1 + s(x2, bs = "cr", k = 5), 
                               y2 ~ y1 + s(x2, bs = "cr", k = 6)), fp = TRUE, 
                          data = dataSim)
sem.checks(outFP)                            
summary(outFP)

#
## SEMIPARAMETRIC RECURSIVE BIVARIATE PROBIT 
## WITH ASYMMETRIC LINK FUNCTIONS

out <- SemiParBIVProbit(list(y1 ~ x1 + s(x2), 
                             y2 ~ y1 + s(x2)), 
                        data = dataSim, PL = "SN", hess = FALSE)
sem.checks(out)                        
summary(out) 
# same output as model with probit links

############
## EXAMPLE 3
############
## Generate data with a non-random sample selection mechanism 
## and exclusion restriction

set.seed(0)

n <- 1100

Sigma <- matrix(0.5, 2, 2); diag(Sigma) <- 1
u     <- rmvnorm(n, rep(0,2), Sigma)

SigmaC <- matrix(0.5, 3, 3); diag(SigmaC) <- 1
cov    <- rmvnorm(n, rep(0,3), SigmaC, method = "svd")
cov    <- pnorm(cov)
bi <- round(cov[,1]); x1 <- cov[,2]; x2 <- cov[,3]
  
f11 <- function(x) -0.7*(4*x + 2.5*x^2 + 0.7*sin(5*x) + cos(7.5*x))
f12 <- function(x) -0.4*( -0.3 - 1.6*x + sin(5*x))  
f21 <- function(x) 0.6*(exp(x) + sin(2.9*x)) 

ys <-  0.58 + 2.5*bi + f11(x1) + f12(x2) + u[, 1] > 0
y  <- -0.68 - 1.5*bi + f21(x1) + u[, 2] > 0
yo <- y*(ys > 0)
  
dataSim <- data.frame(y, ys, yo, bi, x1, x2)

## Testing the hypothesis of absence of non-random sample selection... 

LM.bpm(list(ys ~ bi + s(x1) + s(x2), yo ~ bi + s(x1)), dataSim, Model = "BSS")

# p-value suggests presence of sample selection, hence fit a bivariate model

#
## SEMIPARAMETRIC SAMPLE SELECTION BIVARIATE PROBIT
## the first equation MUST be the selection equation

out <- SemiParBIVProbit(list(ys ~ bi + s(x1) + s(x2), 
                             yo ~ bi + s(x1)), 
                        data = dataSim, Model = "BSS")
sem.checks(out)                          
gt.bpm(out)                        

## compare the two summary outputs
## the second output produces a summary of the results obtained when
## selection bias is not accounted for

summary(out)
summary(out$gam2)

## corrected predicted probability that 'yo' is equal to 1

mb(ys, yo, Model = "BSS")
est.prev(out)
est.prev(out, naive = TRUE)

## estimated smooth function plots
## the red line is the true curve
## the blue line is the naive curve not accounting for selection bias

x1.s <- sort(x1[dataSim$ys>0])
f21.x1 <- f21(x1.s)[order(x1.s)]-mean(f21(x1.s))

plot(out, eq = 2, ylim = c(-1.65,0.95)); lines(x1.s, f21.x1, col="red")
par(new = TRUE)
plot(out$gam2, se = FALSE, col = "blue", ylim = c(-1.65,0.95), ylab = "", rug = FALSE)

#
#
## try a Clayton copula model... 

outC <- SemiParBIVProbit(list(ys ~ bi + s(x1) + s(x2), 
                              yo ~ bi + s(x1)), 
                         data = dataSim, Model = "BSS", BivD = "C0")
sem.checks(outC)
summary(outC, cplot = TRUE)
est.prev(outC)

#

############
## EXAMPLE 4
############
## Generate data with partial observability

set.seed(0)

n <- 10000

Sigma <- matrix(0.5, 2, 2); diag(Sigma) <- 1
u     <- rmvnorm(n, rep(0,2), Sigma)

x1 <- round(runif(n)); x2 <- runif(n); x3 <- runif(n)

y1 <- ifelse(-1.55 + 2*x1 + x2 + u[,1] > 0, 1, 0)
y2 <- ifelse( 0.45 - x3        + u[,2] > 0, 1, 0)
y  <- y1*y2

dataSim <- data.frame(y, x1, x2, x3)


## BIVARIATE PROBIT with Partial Observability

out  <- SemiParBIVProbit(list(y ~ x1 + x2, 
                              y ~ x3), 
                         data = dataSim, Model = "BPO")
sem.checks(out)
summary(out)


outC <- SemiParBIVProbit(list(y ~ x1 + x2, 
                              y ~ x3), 
                         data = dataSim, Model = "BPO", BivD = "C0")
sem.checks(outC)
summary(outC)

# first ten estimated probabilities for the four events from object out

cbind(out$p11, out$p10, out$p00, out$p01)[1:10,]


# case with smooth function 
# (a bit more computationally intensive)  

f1 <- function(x) cos(pi*2*x) + sin(pi*x)

y1 <- ifelse(-1.55 + 2*x1 + f1(x2) + u[,1] > 0, 1, 0)
y2 <- ifelse( 0.45 - x3            + u[,2] > 0, 1, 0)
y  <- y1*y2

dataSim <- data.frame(y, x1, x2, x3)

out  <- SemiParBIVProbit(list(y ~ x1 + s(x2), 
                              y ~ x3), 
                         data = dataSim, Model = "BPO")

sem.checks(out)
summary(out, cplot = TRUE)


# plot estimated and true functions

x2 <- sort(x2); f1.x2 <- f1(x2)[order(x2)] - mean(f1(x2))
plot(out, eq = 1, scale = 0); lines(x2, f1.x2, col = "red")

#
#
