DTRlearn (version 1.0)

wsvm: Improved Single Stage O-learning

Description

wsvm is an implementation of improved single-stage outcome weighted learning (O-learning). It solves the optimization problem of maximizing the expected value function by transforming it into a weighted classification problem, mapping the feature variables to the optimal treatment choice. The function wsvm implements a weighted SVM with a Gaussian or linear kernel. Improving on Zhao et al. (2012), the improved outcome weighted learning first takes the main effect out by regression; the weights are the absolute values of the residuals. More details can be found in our paper in submission.
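The residual-weighting step described above can be sketched in base R. This is a minimal illustration of the idea (regress out the main effect, use absolute residuals as weights), not the package's internal code:

```r
# Sketch of the improved O-learning weighting step (an assumption-based
# illustration of the Description above, not DTRlearn's internal code).
set.seed(1)
n <- 100; p <- 5
X <- matrix(rnorm(n * p), n, p)
A <- 2 * rbinom(n, 1, 0.5) - 1          # treatments coded 1 / -1
R <- X[, 1] + A * X[, 2] + rnorm(n)     # outcome with a main effect

# 1. Take the main effect out by regressing the outcome on the features
res <- residuals(lm(R ~ X))

# 2. The SVM weights are the absolute residuals ...
w <- abs(res)
# ... and the classification label for subject i is A_i * sign(residual_i)
label <- A * sign(res)

head(cbind(weight = w, label = label))
```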

Usage

wsvm(X, A, wR, kernel = "linear", sigma = 0.05, C = 1, e = 1e-05)

Arguments

X
an n by p matrix, where n is the sample size and p is the number of feature variables.
A
a vector of n entries coded 1 and -1 for the treatment assignments.
wR
The weighted outcome, computed beforehand: the outcome $R_i$ weighted by the inverse randomization or observational (propensity) probability, $wR_i = R_i/\pi_i$.
kernel
Kernel function for the weighted SVM; can be 'linear' or 'rbf' (radial basis kernel). The default is 'linear'. When 'rbf' is specified, one can also specify the sigma parameter of the radial basis kernel.
sigma
Tuning parameter for the 'rbf' kernel, as in the rbfdot function in kernlab: $K(x,y)=\exp(-\sigma\|x-y\|^2)$.
C
C is the tuning parameter for the weighted SVM $$\min \frac{1}{2}\|\beta\|^2+C\sum_{i=1}^N \xi_i |wR_i|,$$ subject to $\xi_i\ge 0$ and $\mathrm{sign}(wR_i)\,A_i\,(X_i\beta+\beta_0)\ge 1-\xi_i$.
e
The numerical rounding threshold: dual solutions with $|\alpha_i| < e$ are treated as 0.
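Before calling wsvm, the weighted outcome wR has to be computed from R and the treatment probabilities $\pi_i$. A minimal sketch in base R, assuming a randomized trial with a known assignment probability of 0.5, and a simple logistic-regression propensity model for the observational case (both are illustrative assumptions, not a prescription from the package):

```r
# Sketch: computing wR_i = R_i / pi_i for the wR argument (assumed setups).
set.seed(2)
n <- 100; p <- 3
X <- matrix(rnorm(n * p), n, p)
A <- 2 * rbinom(n, 1, 0.5) - 1
R <- X[, 1] + A * X[, 2] + rnorm(n)

# Randomized trial: pi_i is the known assignment probability (here 0.5)
wR_trial <- R / 0.5

# Observational data: estimate pi_i = P(A_i = a_i | X_i), e.g. by
# logistic regression (one possible propensity model, an assumption here)
ps_model <- glm(I(A == 1) ~ X, family = binomial)
p1 <- fitted(ps_model)                  # estimated P(A = 1 | X)
pi_hat <- ifelse(A == 1, p1, 1 - p1)    # probability of the received treatment
wR_obs <- R / pi_hat
```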

Value

  • If kernel 'linear' is specified, it returns an object of class 'linearcl' including the following elements:
  • alpha1: the scaled solution of the dual problem, such that $X\beta = XX'\alpha_1$.
  • bias: the intercept $\beta_0$ in $f(X)$.
  • fit: the estimated value of $f(X)$, $fit = bias + X\beta = bias + XX'\alpha_1$.
  • beta: the coefficients of the linear SVM, $f(X) = bias + X\beta$.
  • If kernel 'rbf' is specified, it returns an object of class 'rbfcl' including the following elements:
  • alpha1: the scaled solution of the dual problem, such that $h(X)\beta = K(X,X)\alpha_1$.
  • bias: the intercept $\beta_0$ in $f(X)$.
  • fit: the estimated value of $f(X)$, $fit = bias + h(X)\beta = bias + K(X,X)\alpha_1$.
  • Sigma: the bandwidth parameter of the rbf kernel.
  • X: the training feature variable matrix.
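The linear-kernel elements above satisfy the identity $fit = bias + X\beta = bias + XX'\alpha_1$. A quick numerical check of that identity in base R, using arbitrary stand-in values (no model is actually fit here):

```r
# Verify fit = bias + X %*% beta = bias + X %*% t(X) %*% alpha1
# (alpha1 and bias are arbitrary stand-ins for a fitted model's values).
set.seed(3)
n <- 50; p <- 4
X <- matrix(rnorm(n * p), n, p)
alpha1 <- rnorm(n)   # stand-in for a scaled dual solution
bias <- 0.3          # stand-in for the intercept beta_0

beta <- t(X) %*% alpha1             # beta = X' alpha1
fit1 <- bias + X %*% beta           # fit via beta
fit2 <- bias + X %*% t(X) %*% alpha1  # fit via the dual solution
max(abs(fit1 - fit2))               # agreement up to floating-point error
```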

References

Zhao, Y., Zeng, D., Rush, A. J., & Kosorok, M. R. (2012). Estimating individualized treatment rules using outcome weighted learning. Journal of the American Statistical Association, 107(499), 1106-1118.

See Also

plot.linearcl, predict.linearcl, predict.rbfcl

Examples

library(MASS)     # for mvrnorm
library(DTRlearn) # for wsvm

# generate a randomly assigned treatment vector A
n = 200
A = 2*rbinom(n, 1, 0.5) - 1
p = 20
mu = numeric(p)
Sigma = diag(p)
# the feature variables are multivariate normal
X = mvrnorm(n, mu, Sigma)
# the outcome is generated so that the true optimal treatment
# is the sign of the interaction term (of treatment and features)
R = X[,1:3] %*% c(1,1,-2) + X[,3:5] %*% c(1,1,-2)*A + rnorm(n)

# linear SVM
model1 = wsvm(X, A, R)
# check the total number that agrees with the true optimal treatment among n=200 patients
sum(sign(model1$fit) == sign(X[,3:5] %*% c(1,1,-2)))

# SVM with rbf kernel and sigma=0.05
model2 = wsvm(X, A, R, 'rbf', 0.05)
# check the total number that agrees with the true optimal treatment among n=200 patients
sum(sign(model2$fit) == sign(X[,3:5] %*% c(1,1,-2)))
