
Usage:

cv.bst(x, y, K = 10, cost = 0.5, family = c("hinge", "gaussian"),
       learner = c("tree", "ls", "sm"), ctrl = bst_control(),
       type = c("risk", "misc"), plot.it = TRUE, se = TRUE, ...)
Arguments:

x: a data frame containing the predictor variables.

y: vector of responses; y must be in {1, -1} for family = "hinge".

K: number of cross-validation folds.

cost: price of a false positive, 0 < cost < 1; the price of a false negative is 1 - cost.

family: family = "hinge" for hinge loss and family = "gaussian" for squared error loss. Determines the negative gradient corresponding to the loss function to be minimized. By default, hinge loss for +1/-1 responses.

learner: the component-wise base learner: "ls" for linear models, "sm" for smoothing splines, "tree" for regression trees.

ctrl: an object of class bst_control.

type: for family = "hinge", type = "risk" is hinge risk and type = "misc" is misclassification error. For family = "gaussian", only empirical risks are computed.

plot.it: a logical value; plot the estimated risks if TRUE.

se: a logical value; plot with standard errors if TRUE.

...: additional arguments.

See Also: bst
Examples:

x <- matrix(rnorm(100 * 5), ncol = 5)   # 100 observations, 5 predictors
c <- 2 * x[, 1]                         # signal depends on the first predictor only
p <- exp(c) / (exp(c) + exp(-c))        # class probabilities
y <- rbinom(100, 1, p)
y[y != 1] <- -1                         # recode to {1, -1}, as required for family = "hinge"
x <- as.data.frame(x)
cv.bst(x, y, ctrl = bst_control(mstop = 50), family = "hinge", learner = "ls")
cv.bst(x, y, ctrl = bst_control(mstop = 50), family = "hinge", learner = "ls",
       type = "misc")
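With family = "gaussian", cv.bst reports the cross-validated empirical squared-error risk instead. A minimal sketch continuing the simulated x above (the continuous response yc is illustrative, not part of the original example):

```r
# Continuous response for squared error loss; same linear base learner
# and boosting control as in the hinge example above.
yc <- 2 * x[, 1] + rnorm(100)
cv.bst(x, yc, ctrl = bst_control(mstop = 50), family = "gaussian", learner = "ls")
```

Note that type = "misc" is not meaningful here: for family = "gaussian" only empirical risks are available.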