
SemiParBIVProbit (version 3.4)

SemiParBIVProbit: Semiparametric Bivariate Regression Models

Description

SemiParBIVProbit can be used to fit bivariate binary models where the linear predictors of the two main equations can be flexibly specified using parametric and regression spline components. Several bivariate copula distributions are supported. During the model fitting process, the possible presence of correlated error equations, endogeneity, non-random sample selection or partial observability is accounted for. Regression spline bases are extracted from the package mgcv. Multi-dimensional smooths are available via the use of penalized thin plate regression splines. Note that, if it makes sense, the dependence parameter of the copula distribution can be specified as a function of covariates. SemiParBIVProbit can also be used to fit bivariate models in which one response is binary and the other is continuous. This case is relevant, for instance, for estimating the effect that a binary endogenous variable has on a continuous response. Several continuous distributions can be employed within this context and we plan to include further options on a case-study basis. Other bivariate models will be implemented from time to time, depending on the case study at hand.

Usage

SemiParBIVProbit(formula, data = list(), weights = NULL, subset = NULL,  
                 start.v = NULL, Model = "B", BivD = "N", 
                 margins = c("probit","probit"), gamlssfit = FALSE,
                 fp = FALSE, hess = TRUE, infl.fac = 1, 
                 rinit = 1, rmax = 100, 
                 iterlimsp = 50, pr.tolsp = 1e-6,
                 gc.l = FALSE, parscale, extra.regI = "t")

Arguments

formula
In the basic setup this will be a list of two formulas, one for equation 1 and the other for equation 2. s terms are used to specify smooth functions of predictors. When the dependence parameter (and, in the case of a continuous margin, any additional distributional parameters) is modelled as a function of covariates, further one-sided formulas are added to the list, as shown in the examples below.
data
An optional data frame, list or environment containing the variables in the model. If not found in data, the variables are taken from environment(formula), typically the environment from which SemiParBIVProbit is called.
weights
Optional vector of prior weights to be used in fitting.
subset
Optional vector specifying a subset of observations to be used in the fitting process.
start.v
Starting values for the parameters of the two equations and association coefficient can be provided here.
Model
It indicates the type of model to be used in the analysis. Possible values are "B" (bivariate model), "BSS" (bivariate model with non-random sample selection), "BPO" (bivariate model with partial observability) and "BPO0" (bivariate model with partial observability and the association parameter set to zero).
margins
It indicates the distributions used for the two margins. The first is always "probit", which means that the response of the first equation is always assumed to be binary. The response of the second equation can be binary (in which case the second margin is also "probit") or continuous, e.g. "N" (normal) or "LN" (log-normal), as illustrated in Example 5 below.
gamlssfit
In the case of a continuous margin for the outcome equation, if gamlssfit = TRUE then a gamlss model is fitted for the outcome equation. This is useful to calculate, for instance, the average treatment effect from the corresponding univariate model that does not account for endogeneity (see the naive argument of AT in the examples below).
BivD
Type of bivariate error distribution employed. Possible choices are "N", "C0", "C90", "C180", "C270", "J0", "J90", "J180", "J270", "G0", "G90", "G180", "G270", "F", which stand for bivariate normal, Clayton, rotated Clayton (90 degrees), survival Clayton (180 degrees), rotated Clayton (270 degrees), Joe and its rotated/survival versions (90, 180 and 270 degrees), Gumbel and its rotated/survival versions (90, 180 and 270 degrees), and Frank copulas.
fp
If TRUE then a fully parametric model with unpenalised regression splines is fitted. See the example below.
hess
If FALSE then the expected/Fisher (rather than observed) information matrix is employed.
infl.fac
Inflation factor for the model degrees of freedom in the UBRE score. Smoother models can be obtained by setting this parameter to a value greater than 1.
rinit
Starting trust region radius. The trust region radius is adjusted as the algorithm proceeds. See the documentation of trust for further details.
rmax
Maximum allowed trust region radius. This may be set very large. If set small, the algorithm traces a steepest descent path.
iterlimsp
A positive integer specifying the maximum number of loops to be performed before the smoothing parameter estimation step is terminated.
pr.tolsp
Tolerance to use in judging convergence of the algorithm when automatic smoothing parameter estimation is used.
gc.l
This is relevant when working with big datasets. If TRUE then the garbage collector is called more often than it is usually done. This keeps the memory footprint down but it will slow down the routine.
parscale
The algorithm will operate as if optimizing objfun(x / parscale, ...) where parscale is a scalar. If missing then no rescaling is done. See the documentation of trust for more details.
extra.regI
If "t" then regularization as from trust is applied to the information matrix if needed. If different from "t" then extra regularization is applied via the options "pC" (pivoted Choleski - this only make

Value

  • The function returns an object of class SemiParBIVProbit as described in SemiParBIVProbitObject.

WARNINGS

In our experience, convergence failure may occur, for instance, when the number of observations is low, there are important mistakes in the model specification (e.g., using "C90" when the model errors are positively associated), or the model is over-complicated. Convergence failure may also mean that an infinite cycling between steps (i) and (ii) described in the Details section occurs. In this case, the smoothing parameters are set to the values obtained from the non-converged algorithm (conv.check will notify the user if this is the case). In such cases, we recommend re-specifying the model, using some extra regularisation (see extra.regI) and/or some rescaling (see parscale).

In the contexts of endogeneity and non-random sample selection, it would not make much sense to specify the dependence parameter as a function of the same covariates used in the two main equations; some meaningful grouping should be used instead. This is because, in these situations, the dependence parameter is assumed to model the association between the unobserved confounders of the two equations. Such an option would make sense, for instance, when the association coefficient is believed to vary across geographical areas. If the interest is not in modelling unobserved confounding then many other structures for the dependence parameter are possible and can be tried out. The sketch below illustrates the grouping idea.
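As a hedged illustration only (the factor area and the data frame myData below are hypothetical and serve purely as an example), the dependence parameter can be allowed to vary across groups by supplying a third, one-sided formula:

outGr <- SemiParBIVProbit(list(y1 ~ x1 + s(x2),
                               y2 ~ x1 + s(x2),
                                  ~ area),   # dependence parameter varies with the grouping factor
                          data = myData)
summary(outGr)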

Details

The bivariate models considered in this package consist of two model equations with flexible linear predictors, where the association between the responses is modelled through the parameter $\theta$ of a standardised bivariate normal distribution or of a bivariate copula distribution. The linear predictors of the two equations are flexibly specified using parametric components and smooth functions of covariates; the same can be done for the dependence parameter if it makes sense. Replacing the smooth components with their regression spline expressions yields a fully parametric bivariate model.

In principle, classic maximum likelihood estimation could be employed. However, to avoid overfitting, penalized likelihood maximization is used instead. Here, penalty matrices allow for the suppression of that part of smooth term complexity which has no support from the data. The trade-off between fit and smoothness is controlled by smoothing parameters associated with the penalty matrices. Smoothing parameters are chosen to minimize the approximate Un-Biased Risk Estimator (UBRE), which can also be viewed as an approximate AIC. Automatic smoothing parameter estimation is integrated using a performance-oriented iteration approach (Gu, 1992; Wood, 2004). Roughly speaking, at each iteration, (i) the penalized weighted least squares problem is solved and (ii) the smoothing parameters of that problem are estimated by approximate UBRE. Steps (i) and (ii) are iterated until convergence. Details of the underlying fitting methods are given in Radice, Marra and Wojtys (in press). Releases previous to 3.2-7 were based on the algorithms detailed in Marra and Radice (2011, 2013).
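For orientation only (generic notation, not quoted from the references above), the penalized log-likelihood underlying the estimation can be written as

$\ell_p(\boldsymbol{\delta}) = \ell(\boldsymbol{\delta}) - \frac{1}{2}\sum_j \lambda_j \boldsymbol{\delta}^\top \mathbf{S}_j \boldsymbol{\delta},$

where $\ell$ denotes the model log-likelihood, $\boldsymbol{\delta}$ the vector of regression coefficients of all equations, $\mathbf{S}_j$ the penalty matrix associated with the $j$-th smooth term and $\lambda_j$ the corresponding smoothing parameter.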

References

Gu C. (1992), Cross validating non-Gaussian data. Journal of Computational and Graphical Statistics, 1(2), 169-179.
Marra G. (2013), On P-values for Semiparametric Bivariate Probit Models. Statistical Methodology, 10(1), 23-28.
Marra G. and Radice R. (2011), Estimation of a Semiparametric Recursive Bivariate Probit in the Presence of Endogeneity. Canadian Journal of Statistics, 39(2), 259-279.
Marra G. and Radice R. (2013), A Penalized Likelihood Estimation Approach to Semiparametric Sample Selection Binary Response Modeling. Electronic Journal of Statistics, 7, 1432-1455.
Marra G., Radice R. and Missiroli S. (2014), Testing the Hypothesis of Absence of Unobserved Confounding in Semiparametric Bivariate Probit Models. Computational Statistics, 29(3-4), 715-741.
Marra G., Radice R. and Filippou P. (in press), Regression Spline Bivariate Probit Models: A Practical Approach to Testing for Exogeneity. Communications in Statistics - Simulation and Computation.
Marra G. and Wood S.N. (2011), Practical Variable Selection for Generalized Additive Models. Computational Statistics and Data Analysis, 55(7), 2372-2387.
Marra G. and Wood S.N. (2012), Coverage Properties of Confidence Intervals for Generalized Additive Model Components. Scandinavian Journal of Statistics, 39(1), 53-74.
McGovern M.E., Barnighausen T., Marra G. and Radice R. (2015), On the Assumption of Joint Normality in Selection Models: A Copula Approach Applied to Estimating HIV Prevalence. Epidemiology, 26(2), 229-237.
Poirier D.J. (1980), Partial Observability in Bivariate Probit Models. Journal of Econometrics, 12, 209-217.
Radice R., Marra G. and Wojtys M. (in press), Copula Regression Spline Models for Binary Outcomes. Statistics and Computing.
Wood S.N. (2004), Stable and efficient multiple smoothing parameter estimation for generalized additive models. Journal of the American Statistical Association, 99(467), 673-686.

See Also

AT, adjCov, prev, gt.bpm, LM.bpm, VuongClarke, plot.SemiParBIVProbit, SemiParBIVProbit-package, SemiParBIVProbitObject, conv.check, summary.SemiParBIVProbit, predict.SemiParBIVProbit

Examples

library(SemiParBIVProbit)

############
## EXAMPLE 1
############
## Generate data
## Correlation between the two equations 0.5 - Sample size 400 

set.seed(0)

n <- 400

Sigma <- matrix(0.5, 2, 2); diag(Sigma) <- 1
u     <- rMVN(n, rep(0,2), Sigma)

x1 <- round(runif(n)); x2 <- runif(n); x3 <- runif(n)

f1   <- function(x) cos(pi*2*x) + sin(pi*x)
f2   <- function(x) x+exp(-30*(x-0.5)^2)   

y1 <- ifelse(-1.55 + 2*x1    + f1(x2) + u[,1] > 0, 1, 0)
y2 <- ifelse(-0.25 - 1.25*x1 + f2(x2) + u[,2] > 0, 1, 0)

dataSim <- data.frame(y1, y2, x1, x2, x3)

#
#

## CLASSIC BIVARIATE PROBIT

out  <- SemiParBIVProbit(list(y1 ~ x1 + x2 + x3, 
                              y2 ~ x1 + x2 + x3), 
                         data = dataSim)
conv.check(out)
summary(out)
AIC(out)
BIC(out)
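
## the estimated dependence parameter can be extracted and compared with
## the true value (0.5) used in the simulation; the theta component is
## assumed here, as used later via outD$theta
out$theta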

## SEMIPARAMETRIC BIVARIATE PROBIT

## "cr" cubic regression spline basis      - "cs" shrinkage version of "cr"
## "tp" thin plate regression spline basis - "ts" shrinkage version of "tp"
## for smooths of one variable, "cr/cs" and "tp/ts" achieve similar results 
## k is the basis dimension - default is 10
## m is the order of the penalty for the specific term - default is 2
## For COPULA models use the BivD argument 

out  <- SemiParBIVProbit(list(y1 ~ x1 + s(x2, bs = "tp", k = 10, m = 2) + s(x3), 
                              y2 ~ x1 + s(x2) + s(x3)),  
                         data = dataSim)
conv.check(out)
summary(out, cm.plot = TRUE)
AIC(out)


## estimated smooth function plots - red lines are true curves

x2 <- sort(x2)
f1.x2 <- f1(x2)[order(x2)] - mean(f1(x2))
f2.x2 <- f2(x2)[order(x2)] - mean(f2(x2))
f3.x3 <- rep(0, length(x3))

par(mfrow=c(2,2),mar=c(4.5,4.5,2,2))
plot(out, eq = 1, select = 1, scale = 0); lines(x2, f1.x2, col = "red")
plot(out, eq = 1, select = 2, scale = 0); lines(x3, f3.x3, col = "red")
plot(out, eq = 2, select = 1, scale = 0); lines(x2, f2.x2, col = "red")
plot(out, eq = 2, select = 2, scale = 0); lines(x3, f3.x3, col = "red")

#
## same plots but CIs 'with intercept' 

plot(out, eq = 1, select = 1, seWithMean = TRUE, scale = 0)
lines(x2, f1.x2, col = "red")
plot(out, eq = 1, select = 2, seWithMean = TRUE, scale = 0)
lines(x3, f3.x3, col = "red")
plot(out, eq = 2, select = 1, seWithMean = TRUE, scale = 0)
lines(x2, f2.x2, col = "red")
plot(out, eq = 2, select = 2, seWithMean = TRUE, scale = 0)
lines(x3, f3.x3, col = "red")


## p-values suggest dropping x3 from both equations, with stronger 
## evidence for eq. 2. This can also be achieved using shrinkage smoothers

outSS <- SemiParBIVProbit(list(y1 ~ x1 + s(x2, bs = "ts") + s(x3, bs = "cs"), 
                               y2 ~ x1 + s(x2, bs = "cs") + s(x3, bs = "ts")), 
                          data = dataSim)
conv.check(outSS)                          

plot(outSS, eq = 1, select = 1, scale = 0)
plot(outSS, eq = 1, select = 2, ylim = c(-0.1,0.1))
plot(outSS, eq = 2, select = 1, scale = 0)
plot(outSS, eq = 2, select = 2, ylim = c(-0.1,0.1))

## SEMIPARAMETRIC BIVARIATE PROBIT with association parameter 
## depending on covariates as well

outD <- SemiParBIVProbit(list(y1 ~ x1 + s(x2), 
                              y2 ~ x1 + s(x2),
                                 ~ x1 + s(x2) ), 
                          data = dataSim)
conv.check(outD)  
summary(outD)
outD$theta
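
## theta now varies with the covariates; a quick look at its distribution
## across observations (base R only, applied to the vector extracted above)
summary(outD$theta)
hist(outD$theta, main = "", xlab = "estimated dependence parameter")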

plot(outD, eq = 1, seWithMean = TRUE)
plot(outD, eq = 2, seWithMean = TRUE)
plot(outD, eq = 3, seWithMean = TRUE)
graphics.off()

#
#

############
## EXAMPLE 2
############
## Generate data with one endogenous variable 
## and exclusion restriction

set.seed(0)

n <- 400

Sigma <- matrix(0.5, 2, 2); diag(Sigma) <- 1
u     <- rMVN(n, rep(0,2), Sigma)

cov   <- rMVN(n, rep(0,2), Sigma)
cov   <- pnorm(cov)
x1 <- round(cov[,1]); x2 <- cov[,2]

f1   <- function(x) cos(pi*2*x) + sin(pi*x)
f2   <- function(x) x+exp(-30*(x-0.5)^2)   

y1 <- ifelse(-1.55 + 2*x1    + f1(x2) + u[,1] > 0, 1, 0)
y2 <- ifelse(-0.25 - 1.25*y1 + f2(x2) + u[,2] > 0, 1, 0)

dataSim <- data.frame(y1, y2, x1, x2)

#

## Testing the hypothesis of absence of endogeneity... 

LM.bpm(list(y1 ~ x1 + s(x2), y2 ~ y1 + s(x2)), dataSim, Model = "B")

# p-value suggests presence of endogeneity, hence fit a bivariate model


## CLASSIC RECURSIVE BIVARIATE PROBIT

out <- SemiParBIVProbit(list(y1 ~ x1 + x2, 
                             y2 ~ y1 + x2), 
                        data = dataSim)
conv.check(out)                        
summary(out)

## SEMIPARAMETRIC RECURSIVE BIVARIATE PROBIT

out <- SemiParBIVProbit(list(y1 ~ x1 + s(x2), 
                             y2 ~ y1 + s(x2)), 
                        data = dataSim)
conv.check(out)                        
summary(out)

#

## Testing the hypothesis of absence of endogeneity post estimation... 

gt.bpm(out)

#
## average treatment effect with CI

mb(y1, y2, Model = "B")
AT(out, eq = 2, nm.bin = "y1") 
AT(out, eq = 2, nm.bin = "y1", naive = TRUE) 


## try a Clayton copula model... 

outC <- SemiParBIVProbit(list(y1 ~ x1 + s(x2), 
                              y2 ~ y1 + s(x2)), 
                         data = dataSim, BivD = "C0")
conv.check(outC)                         
summary(outC)
AT(outC, eq = 2, nm.bin = "y1") 

## try a Joe copula model... 

outJ <- SemiParBIVProbit(list(y1 ~ x1 + s(x2), 
                              y2 ~ y1 + s(x2)), 
                         data = dataSim, BivD = "J0")
conv.check(outJ)
summary(outJ, cm.plot = TRUE)
AT(outJ, eq = 2, nm.bin = "y1") 


VuongClarke(out, outJ)

#
## recursive bivariate probit modelling with unpenalized splines 
## can be achieved as follows

outFP <- SemiParBIVProbit(list(y1 ~ x1 + s(x2, bs = "cr", k = 5), 
                               y2 ~ y1 + s(x2, bs = "cr", k = 6)), 
                          fp = TRUE, data = dataSim)
conv.check(outFP)                            
summary(outFP)

#
#

############
## EXAMPLE 3
############
## Generate data with a non-random sample selection mechanism 
## and exclusion restriction

set.seed(0)

n <- 2000

Sigma <- matrix(0.5, 2, 2); diag(Sigma) <- 1
u     <- rMVN(n, rep(0,2), Sigma)

SigmaC <- matrix(0.5, 3, 3); diag(SigmaC) <- 1
cov    <- rMVN(n, rep(0,3), SigmaC)
cov    <- pnorm(cov)
bi <- round(cov[,1]); x1 <- cov[,2]; x2 <- cov[,3]
  
f11 <- function(x) -0.7*(4*x + 2.5*x^2 + 0.7*sin(5*x) + cos(7.5*x))
f12 <- function(x) -0.4*( -0.3 - 1.6*x + sin(5*x))  
f21 <- function(x) 0.6*(exp(x) + sin(2.9*x)) 

ys <-  0.58 + 2.5*bi + f11(x1) + f12(x2) + u[, 1] > 0
y  <- -0.68 - 1.5*bi + f21(x1) + u[, 2] > 0
yo <- y*(ys > 0)
  
dataSim <- data.frame(y, ys, yo, bi, x1, x2)

## Testing the hypothesis of absence of non-random sample selection... 

LM.bpm(list(ys ~ bi + s(x1) + s(x2), yo ~ bi + s(x1)), dataSim, Model = "BSS")

# p-value suggests presence of sample selection, hence fit a bivariate model

#
## SEMIPARAMETRIC SAMPLE SELECTION BIVARIATE PROBIT
## the first equation MUST be the selection equation

out <- SemiParBIVProbit(list(ys ~ bi + s(x1) + s(x2), 
                             yo ~ bi + s(x1)), 
                        data = dataSim, Model = "BSS")
conv.check(out)                          
gt.bpm(out)                        

## compare the two summary outputs
## the second output produces a summary of the results obtained when
## selection bias is not accounted for

summary(out)
summary(out$gam2)

## corrected predicted probability that 'yo' is equal to 1

mb(ys, yo, Model = "BSS")
prev(out)
prev(out, naive = TRUE)
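
## for comparison, the prevalence of 'y' in the complete simulated sample
## (known here only because the data are simulated)
mean(dataSim$y)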

## estimated smooth function plots
## the red line is the true curve
## the blue line is the naive curve not accounting for selection bias

x1.s <- sort(x1[dataSim$ys>0])
f21.x1 <- f21(x1.s)[order(x1.s)]-mean(f21(x1.s))

plot(out, eq = 2, ylim = c(-1.65,0.95)); lines(x1.s, f21.x1, col="red")
par(new = TRUE)
plot(out$gam2, se = FALSE, col = "blue", ylim = c(-1.65,0.95), 
     ylab = "", rug = FALSE)

#
#
## try a Clayton copula model... 

outC <- SemiParBIVProbit(list(ys ~ bi + s(x1) + s(x2), 
                              yo ~ bi + s(x1)), 
                         data = dataSim, Model = "BSS", BivD = "C0")
conv.check(outC)
summary(outC, cm.plot = TRUE)
prev(outC)

#

############
## EXAMPLE 4
############
## Generate data with partial observability

set.seed(0)

n <- 10000

Sigma <- matrix(0.5, 2, 2); diag(Sigma) <- 1
u     <- rMVN(n, rep(0,2), Sigma)

x1 <- round(runif(n)); x2 <- runif(n); x3 <- runif(n)

y1 <- ifelse(-1.55 + 2*x1 + x2 + u[,1] > 0, 1, 0)
y2 <- ifelse( 0.45 - x3        + u[,2] > 0, 1, 0)
y  <- y1*y2

dataSim <- data.frame(y, x1, x2, x3)


## BIVARIATE PROBIT with Partial Observability

out  <- SemiParBIVProbit(list(y ~ x1 + x2, 
                              y ~ x3), 
                         data = dataSim, Model = "BPO")
conv.check(out)
summary(out)

# first ten estimated probabilities for the four events from object out

cbind(out$p11, out$p10, out$p00, out$p01)[1:10,]
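
## quick sanity check: the four event probabilities should sum to one
## row-wise (using the probability components extracted above)
rowSums(cbind(out$p11, out$p10, out$p00, out$p01))[1:10]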


# case with smooth function 
# (a bit more computationally intensive)  

f1 <- function(x) cos(pi*2*x) + sin(pi*x)

y1 <- ifelse(-1.55 + 2*x1 + f1(x2) + u[,1] > 0, 1, 0)
y2 <- ifelse( 0.45 - x3            + u[,2] > 0, 1, 0)
y  <- y1*y2

dataSim <- data.frame(y, x1, x2, x3)

out  <- SemiParBIVProbit(list(y ~ x1 + s(x2), 
                              y ~ x3), 
                         data = dataSim, Model = "BPO")

conv.check(out)
summary(out, cm.plot = TRUE)


# plot estimated and true functions

x2 <- sort(x2); f1.x2 <- f1(x2)[order(x2)] - mean(f1(x2))
plot(out, eq = 1, scale = 0); lines(x2, f1.x2, col = "red")

#
#

############
## EXAMPLE 5
############
## Generate data with one endogenous binary variable 
## and continuous outcome

set.seed(0)

n <- 1000

Sigma <- matrix(0.5, 2, 2); diag(Sigma) <- 1
u     <- rMVN(n, rep(0,2), Sigma)

cov   <- rMVN(n, rep(0,2), Sigma)
cov   <- pnorm(cov)
x1 <- round(cov[,1]); x2 <- cov[,2]

f1   <- function(x) cos(pi*2*x) + sin(pi*x)
f2   <- function(x) x+exp(-30*(x-0.5)^2)   

y1 <- ifelse(-1.55 + 2*x1    + f1(x2) + u[,1] > 0, 1, 0)
y2 <-        -0.25 - 1.25*y1 + f2(x2) + u[,2] 

dataSim <- data.frame(y1, y2, x1, x2)


## RECURSIVE Model

resp.check(y2, margin = "GU") # does not seem appropriate
resp.check(y2, margin = "N")  # seems appropriate

out <- SemiParBIVProbit(list(y1 ~ x1 + x2, 
                             y2 ~ y1 + x2), 
                        data = dataSim, margins = c("probit","N"))
conv.check(out)                        
summary(out)

## SEMIPARAMETRIC RECURSIVE Model

out <- SemiParBIVProbit(list(y1 ~ x1 + s(x2), 
                             y2 ~ y1 + s(x2), ~ 1, ~ 1), data = dataSim, 
                             margins = c("probit","N"), gamlssfit = TRUE)
conv.check(out)                        
summary(out)
AT(out, nm.bin = "y1")
AT(out, nm.bin = "y1", naive = TRUE)


#
## Generate data with one endogenous binary variable 
## and continuous outcome following a log-normal distribution


set.seed(0)

y2 <- rlnorm(n, meanlog = - 0.25 - 1.25*y1 + f2(x2), sdlog = sqrt(0.5))

dataSim <- data.frame(y1, y2, x1, x2)

resp.check(y2, margin = "WEI") # does not seem appropriate
resp.check(y2, margin = "LN")  # seems appropriate

out <- SemiParBIVProbit(list(y1 ~ x1 + s(x2), 
                             y2 ~ y1 + s(x2), ~ 1, ~ 1), data = dataSim, 
                             margins = c("probit","LN"), gamlssfit = TRUE)
conv.check(out)                        
summary(out, cm.plot = TRUE)
AT(out, nm.bin = "y1")
AT(out, nm.bin = "y1", naive = TRUE)
