McSpatial (version 1.1.1)

cparprobit: Conditionally Parametric Probit

Description

Estimates a probit model by maximizing a locally weighted likelihood function -- the probit equivalent of cparlwr.

Usage

cparprobit(form,nonpar,window=.25,bandwidth=0,kern="tcub",
    distance="Mahal",alldata=FALSE,data=NULL)

Arguments

form
Model formula
nonpar
List of either one or two variables for z. Formats: cparprobit(y~xlist, nonpar=~z1, ...) or cparprobit(y~xlist, nonpar=~z1+z2, ...). Important: note the "~" before the first z variable.
window
Window size. Default: 0.25.
bandwidth
Bandwidth. Default: 0, i.e., not used; the window option determines the weights instead.
kern
Kernel weighting function. Default is the tri-cube. Options include "rect", "tria", "epan", "bisq", "tcub", "trwt", and "gauss".
distance
Options: "Euclid", "Mahal", or "Latlong" for Euclidean, Mahalanobis, or great-circle geographic distance. May be abbreviated to the first letter but must be capitalized. Note: under "Latlong", cparprobit looks at the first two letters of the variable names in nonpar to determine which variable is the latitude and which is the longitude.
alldata
If alldata=T, each observation is used as a target value for z. When alldata=F, the function is estimated at a set of points chosen by the locfit program using an adaptive decision tree approach, and the akima package is then used to interpolate the estimates to the full set of observations.
data
A data frame containing the data. Default: use data in the working environment.
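Because distances under the "Latlong" option are left in miles rather than standardized (see Details), it helps to see the scale involved. The sketch below computes a great-circle distance in base R; the function name gcdist and the coordinates are illustrative, not part of McSpatial.

```r
# Great-circle distance in miles between two points,
# with longitude and latitude expressed in degrees.
gcdist <- function(lo1, la1, lo2, la2, radius = 3958.8) {  # Earth radius in miles
  rad <- pi / 180
  la1 <- la1 * rad; la2 <- la2 * rad
  dlo <- (lo2 - lo1) * rad
  # Spherical law of cosines; pmin guards against rounding error in acos
  radius * acos(pmin(1, sin(la1) * sin(la2) + cos(la1) * cos(la2) * cos(dlo)))
}

# One degree of latitude north of the Chicago CBD is roughly 69 miles
d <- gcdist(-87.6278, 41.8820, -87.6278, 42.8820)
```

Since these distances are in miles, a bandwidth of, say, 0.25 would be far too small for city-wide data; specifying window (a fraction of the observations) sidesteps the scaling problem.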

Value

  • target: The target points for the original estimation of the function.
  • xcoef.target: Estimated coefficients, B(z), at the target values of z.
  • xcoef.target.se: Standard errors for B(z) at the target values of z.
  • xcoef: Estimated coefficients, B(z), at the original data points.
  • xcoef.se: Standard errors for B(z) with z evaluated at all points in the data set.
  • p: The estimated probabilities.
  • lnl: The log-likelihood value.

Details

The list of explanatory variables is specified in the base model formula, while Z is specified using nonpar. X can include any number of explanatory variables, but Z must have at most two. The model is estimated by maximizing the following weighted log-likelihood function at each target point: $$\sum_{i=1}^n w_i \{ y_i log(\Phi (X_i \beta)) + (1-y_i) log(1-\Phi (X_i \beta) ) \}$$ where y is the discrete dependent variable and X is the set of explanatory variables.

When Z includes a single variable, $w_i$ is a simple kernel weighting function: $w_i = K((z_i - z_0 )/(sd(z)*h))$. When Z includes two variables (e.g., nonpar=~z1+z2), the method for specifying w depends on the distance option. Under either option, the ith row of the matrix Z = (z1, z2) is transformed such that $z_i = \sqrt{z_i V z_i^\prime}$. Under the "Mahal" option, V is the inverse of cov(Z). Under the "Euclid" option, V is the inverse of diag(cov(Z)). After this transformation, the weights again reduce to the simple kernel weighting function $K((z_i - z_0 )/(sd(z)*h))$. h is specified by the bandwidth or window option.

The great-circle formula is used to construct the distances used to form the weights when distance = "Latlong"; in this case, the variable list for nonpar must be listed as nonpar = ~latitude+longitude (or ~lo+la, ~lat+long, etc.), with the longitude and latitude variables expressed in degrees (e.g., -87.627800 and 41.881998 for one observation of longitude and latitude, respectively). The order in which latitude and longitude are listed does not matter; the function looks only at the first two letters of the variable names to determine which is the latitude and which is the longitude. It is important to note that the great-circle distance measure is left in miles rather than being standardized. Thus, the window option should be specified when distance = "Latlong", or the bandwidth should be adjusted to account for the scale.
The kernel weighting function becomes K(distance/h) under the "Latlong" option.

Following White (1982), the covariance matrix for a quasi-maximum likelihood model is $A^{-1}BA^{-1}$, where $$A = \sum_{i=1}^n w_i \frac{\partial^2 LnL_i}{\partial \beta \partial \beta ^\prime}$$ $$B = \sum_{i=1}^n w_i^2 \frac{\partial LnL_i}{\partial \beta}\frac{\partial LnL_i}{\partial \beta ^\prime}$$ For the probit model, $$A = \sum_{i=1}^n w_i \frac{ \phi_i^2}{\Phi_i(1-\Phi_i)} X_i X_i ^\prime$$ $$B = \sum_{i=1}^n w_i^2 \frac{ (y_i - \Phi_i)^2 \phi_i^2}{(\Phi_i(1-\Phi_i))^2} X_i X_i ^\prime$$ The covariance matrix is calculated at each target point and the implied standard errors are then interpolated to each data point.

Estimation can be very slow when alldata=T. When alldata=F, the package locfit is used to find a good set of target points at which to evaluate the function. See Loader (1999, section 12.2) for a description of the algorithm used to determine the target points. The akima package is then used to interpolate the coefficient estimates and standard errors.

Available kernel weighting functions include the following:

  • Rectangular ("rect"): $K(z) = \frac{1}{2} I(|z| < 1)$
  • Triangular ("tria"): $K(z) = (1-|z|) I(|z| < 1)$
  • Epanechnikov ("epan"): $K(z) = \frac{3}{4} (1-z^2) I(|z| < 1)$
  • Bisquare ("bisq"): $K(z) = \frac{15}{16} (1-z^2)^2 I(|z| < 1)$
  • Tri-Cube ("tcub"): $K(z) = \frac{70}{81} (1-|z|^3)^3 I(|z| < 1)$
  • Triweight ("trwt"): $K(z) = \frac{35}{32} (1-z^2)^3 I(|z| < 1)$
  • Gaussian ("gauss"): $K(z) = (2\pi)^{-1/2} e^{-z^2/2}$

References

Fan, Jianqing, Nancy E. Heckman, and M.P. Wand, "Local Polynomial Kernel Regression for Generalized Linear Models and Quasi-Likelihood Functions," Journal of the American Statistical Association 90 (1995), 141-150.

Loader, Clive. Local Regression and Likelihood. New York: Springer, 1999.

McMillen, Daniel P. and John F. McDonald, "Locally Weighted Maximum Likelihood Estimation: Monte Carlo Evidence and an Application," in Luc Anselin, Raymond J.G.M. Florax, and Sergio J. Rey, eds., Advances in Spatial Econometrics, Springer-Verlag, New York (2004), 225-239.

Tibshirani, Robert and Trevor Hastie, "Local Likelihood Estimation," Journal of the American Statistical Association 82 (1987), 559-568.

White, Halbert, "Maximum Likelihood Estimation of Misspecified Models," Econometrica 50 (1982), 1-25.

See Also

cparlogit cparmlogit gmmlogit gmmprobit splogit spprobit spprobitml

Examples

library(McSpatial)
data(cookdata)
cookdata <- cookdata[cookdata$CHICAGO==1,]
cookdata$DCBD2 <- cookdata$DCBD^2

# Monte Carlo
set.seed(484909)
n = length(cookdata$DCBD)
o <- order(cookdata$DCBD)
cookdata$ybase <- 2*cookdata$DCBD -.15*cookdata$DCBD2 
cookdata$yvar <- cookdata$ybase + 2*rnorm(n)
fit <- lm(yvar~DCBD+DCBD2,data=cookdata)
summary(fit)
cookdata$yvar <- cookdata$yvar>5.6
table(cookdata$yvar)
fit1 <- glm(yvar~DCBD+DCBD2,family=binomial(link="probit"),data=cookdata)
summary(fit1)
fit2 <- glm(yvar~DCBD,family=binomial(link="probit"),data=cookdata)
summary(fit2)
nlfit1 <- cparprobit(yvar~DCBD,nonpar=~DCBD, window=.5, kern="rect", data=cookdata) 
nlfit1$lnl
nlfit2 <- cparprobit(yvar~DCBD+DCBD2,nonpar=~DCBD, window=.5, kern="rect", data=cookdata) 
nlfit2$lnl
plot(cookdata$DCBD[o],fitted(fit1)[o],xlab="Distance from CBD",ylab="p",type="l")
lines(cookdata$DCBD[o],fitted(fit2)[o],col="red")
lines(cookdata$DCBD[o],nlfit1$p[o],col="blue")
lines(cookdata$DCBD[o],nlfit2$p[o],col="green")
legend("topright",c("True Model","Linear","CPAR-linear","CPAR-True"),
  col=c("black","red","blue","green"),lwd=1)
