
Usage

LBoost(resp, Xs, anneal.params, nBS = 100, kfold = 5, nperm = 1,
       PI.imp = NULL, pred.imp = FALSE)
Arguments

anneal.params: Annealing parameters set using the logreg.anneal.control
function in the LogicReg package. If missing, default annealing parameters
are set at start = 1, end = -2, and iter = 50000.
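For reference, the defaults described above can be reproduced explicitly. A minimal sketch (logreg.anneal.control comes from the LogicReg package; its start and end arguments are the upper and lower annealing temperatures on a log10 scale):

library(LogicReg)
# Equivalent of the LBoost defaults: the annealing chain starts at
# temperature 10^1, cools to 10^-2, and runs for 50000 iterations
default.anneal <- logreg.anneal.control(start = 1, end = -2, iter = 50000)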
"LBoost"
which is a list including values
print.LBoost
, predict.LBoost
, BoostVimp.plot
, submatch.plot
,
persistence.plot
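The exact components of the returned list can vary by package version; an assumption-free way to inspect a fitted object is str():

# 'fit' stands for any object returned by LBoost
str(fit, max.level = 1)  # show the top-level components of the list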
Examples

library(LogicForest)
data(LF.data)
# Set annealing parameters using the logreg.anneal.control
# function from the LogicReg package
newanneal <- logreg.anneal.control(start = 1, end = -2, iter = 2000)
# Typically more than 2000 iterations (>25000) would be used for the
# annealing algorithm, and a typical LBoost model contains at least
# 100 trees. These parameters are set low here to allow for faster
# run time.
# The data set LF.data contains 50 binary predictors and a binary response Ybin

# Looking at only the permutation importance measure
LBfit.1 <- LBoost(resp = LF.data$Ybin, Xs = LF.data[, 1:50], nBS = 10, kfold = 2,
                  anneal.params = newanneal, nperm = 2, PI.imp = "Permutation")
print(LBfit.1)

# Looking at only the add-in/leave-out importance measure
LBfit.2 <- LBoost(resp = LF.data$Ybin, Xs = LF.data[, 1:50], nBS = 10, kfold = 2,
                  anneal.params = newanneal, PI.imp = "AddRemove")
print(LBfit.2)

# Looking at both measures of importance plus predictor importance
LBfit.3 <- LBoost(resp = LF.data$Ybin, Xs = LF.data[, 1:50], nBS = 10, kfold = 2,
                  anneal.params = newanneal, nperm = 2, PI.imp = "Both", pred.imp = TRUE)
print(LBfit.3)
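The fitted objects can then be passed to the helper functions listed under See Also. A hedged sketch (calling predict with a newdata argument and passing the fit as the first argument of BoostVimp.plot are assumptions here; see ?predict.LBoost and ?BoostVimp.plot for the exact interfaces):

# Predictions from the boosted fit; the 'newdata' argument name is assumed
preds <- predict(LBfit.3, newdata = LF.data[, 1:50])
# Variable importance plot (fit as the first argument is an assumption)
BoostVimp.plot(LBfit.3)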