
ModelMap (version 2.0.2.1)

model.build: Model Building

Description

Create sophisticated models from training data, using either Random Forest or Stochastic Gradient Boosting.

Usage

model.build(model.type = NULL, qdata.trainfn = NULL, folder = NULL,
    MODELfn = NULL, predList = NULL, predFactor = FALSE,
    response.name = NULL, response.type = NULL, seed = NULL,
    na.action = "na.omit", ntree = 500, mtry = NULL, replace = TRUE,
    strata = NULL, sampsize = NULL, n.trees = NULL, shrinkage = 0.001,
    interaction.depth = 10, bag.fraction = 0.5, train.fraction = 1,
    n.minobsinnode = 10)

Arguments

model.type
String. Model type. "RF" or "SGB". (Eventually planned to include "GAM".) If model.obj is specified, the model.type will be extracted from model.obj, and the argument model.type will be ignored.
qdata.trainfn
String. The name (full path or base name with path specified by folder) of the training data file used for building the model (file should include columns for both response and predictor variables). The file must be a comma-delimited file (.csv) with column headings.
folder
String. The folder used for all output from predictions and/or maps. Do not add an ending slash to the path string. If folder = NULL (the default), a GUI interface prompts the user to browse to a folder. To use the working directory, specify folder = getwd().
MODELfn
String. The file name to use to save the generated model object. If MODELfn = NULL (the default), a default name is generated by pasting model.type_response.type_response.name. If the other output filenames are left unspecified, MODELfn is used as the basename for those files as well.
predList
String. A character vector of the predictor short names used to build the model. These names must match the column names in the training/test data files and the names in column two of the rastLUT. If predList = NULL (the default), a GUI interface prompts the user to select predictors from the column names of the training data file.
predFactor
String. A character vector of the short names of the predictors from predList that are factors (i.e., categorical predictors). These must be a subset of the predictor names given in predList. Categorical predictors may have multiple categories.
response.name
String. The name of the response variable used to build the model. If response.name = NULL, a GUI interface prompts the user to select a variable from the list of column names from the training data file. response.name must be a column name in the training data file.
response.type
String. Response type: "binary" or "continuous". A binary response must be a 0/1 variable with only two categories. All zeros are treated as one category, and everything else is treated as the second category.
seed
Integer. The number used to initialize randomization to build RF or SGB models. If you want to produce the same model later, use the same seed. If seed = NULL (the default), a new seed is created each run.
na.action
String. Model validation. Specifies the action to take if there are NA values in the prediction data, or if there is a level or class of a categorical predictor variable in the validation test set or the production (mapping) data set but not in the training data set. Options are "na.omit" and "na.roughfix"; the Examples section below describes the effect of each for categorical predictors.
ntree
Integer. RF models. The number of random forest trees for a RF model. The default is 500 trees.
mtry
Integer. RF models. Number of variables to try at each node of the Random Forest trees. By default, the tuneRF() function is used to optimize mtry (see the sketch following this argument list).
replace
Logical. RF models. Should sampling of cases be done with or without replacement?
strata
Factor or String. RF models. A (factor) variable that is used for stratified sampling. Can be in the form of either the name of the column in qdata or a factor or vector with one element for each row of qdata.
sampsize
Vector. RF models. Size(s) of sample to draw. For classification, if sampsize is a vector of length equal to the number of factor levels in strata, then sampling is stratified by strata, and the elements of sampsize indicate the numbers to be drawn from the strata.
n.trees
Integer. SGB models. The number of stochastic gradient boosting trees for an SGB model. If n.trees = NULL (the default), the model creation code will increase the number of trees 100 at a time until the OOB error rate stops improving. The gbm function gbm.perf(method = "OOB") is then used to select the best number of trees (see Details).
shrinkage
Numeric. SGB models. A shrinkage parameter applied to each tree in the expansion. Also known as the learning rate or step-size reduction.
interaction.depth
Integer. SGB models. The maximum depth of variable interactions: interaction.depth = 1 implies an additive model, interaction.depth = 2 implies a model with up to 2-way interactions, and so on.
bag.fraction
Numeric. SGB models. bag.fraction must be a number between 0 and 1, giving the fraction of the training set observations randomly selected to propose the next tree in the expansion. This introduces randomness into the model fit.
train.fraction
Numeric. SGB models. The first train.fraction * nrow(data) observations are used to fit the model, and the remainder are used to compute out-of-sample estimates of the loss function.
n.minobsinnode
Integer. SGB models. Minimum number of observations in the trees' terminal nodes. Note that this is the actual number of observations, not the total weight.
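
The following is a minimal sketch, not part of the package's own documentation, of the mtry optimization mentioned above, calling tuneRF() from the randomForest package directly. It assumes the training data has been read into a data frame, and borrows the file name and the predictor and response column names (TCB, TCG, TCW, BIO) from the Examples section below:

library(randomForest)

qdata <- read.csv("DATATRAIN.csv")   # hypothetical path to the training data
tuned <- tuneRF(x = qdata[, c("TCB", "TCG", "TCW")],  # predictor columns
                y = qdata$BIO,                        # continuous response
                ntreeTry   = 100,     # trees grown for each trial value of mtry
                stepFactor = 2,       # inflate/deflate mtry by this factor each step
                improve    = 0.05,    # minimum relative OOB improvement to keep searching
                doBest     = FALSE)   # return the table of mtry values and OOB errors
tuned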

Value

The function returns the model object. Additionally, it writes a text file to disk in the folder specified by folder. This file lists the values of each argument as chosen from the GUI prompts or supplied in the function call.

Details

This package provides a push button approach to complex model building and production mapping. It contains four functions: a simple function get.test() that can be used to randomly divide a training dataset into training and test/validation sets, and the workhorse functions model.build(), model.diagnostics(), and model.mapmake(). These functions can be run in a traditional R command mode, where all arguments are specified in the function call. However, they can also be used in a full push button mode, where you type in, for example, the simple command model.build(), and GUI pop-up windows ask questions about the type of model, the file locations of the data, and so on. When running model.mapmake() on non-Windows platforms, file names and folders need to be specified in the argument list, but other push button selections are handled by the select.list() function, which is platform independent.

Random Forest is implemented through the randomForest package within R. Random Forest is more user friendly than Stochastic Gradient Boosting, as it has fewer parameters to be set by the user and is less sensitive to tuning of these parameters. A Random Forest model consists of multiple trees that vote on predictions. For each tree, a random subset of the training data is used to construct the tree, with the remaining data points used to construct out-of-bag (OOB) error estimates. At each node of the tree, a random selection of predictors is chosen to determine the split. The number of predictors used to select the splits (argument mtry) is the primary user-specified parameter that can affect model performance, and by default this parameter is automatically optimized using the tuneRF() function. Random Forest will not overfit the data, therefore the only penalty of increasing the number of trees is computation time. Random Forest can compute variable importance, an advantage over some "black box" modeling techniques if it is important to understand the ecological relationships underlying a model (Breiman, 2001).

Stochastic gradient boosting (Friedman 2001, 2002) is related to both boosting and bagging. Many small classification or regression trees are built sequentially from "pseudo"-residuals (the gradient of the loss function of the previous tree). At each iteration, a tree is built from a random sub-sample of the dataset (selected without replacement), producing an incremental improvement in the model. Using only a fraction of the training data increases both the computation speed and the prediction accuracy, while also helping to avoid over-fitting the data. An advantage of stochastic gradient boosting is that it is not necessary to pre-select or transform predictor variables. It is also resistant to outliers, as the steepest gradient algorithm emphasizes points that are close to their correct classification. Stochastic gradient boosting is implemented through the gbm package within R. One disadvantage of Stochastic Gradient Boosting, compared to Random Forest, is the increased number of user-specified parameters, and SGB models tend to be more sensitive to these parameters. Model fitting parameter options include distribution, interaction depth, bagging fraction, shrinkage rate, and training fraction. Values for these parameters other than the defaults cannot be set by point and click in the GUI pop-up windows, and must be set in the argument list when calling model.build().
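
As a hedged illustration (the parameter values here are illustrative, not recommendations), an SGB model with non-default fitting parameters might be built as follows, reusing the file and column names from the Examples section:

model.obj <- model.build(model.type = "SGB",
                         qdata.trainfn = "DATATRAIN.csv",   # hypothetical training file
                         folder = getwd(),
                         MODELfn = "SGB_BIO_TC",            # hypothetical output basename
                         predList = c("TCB", "TCG", "TCW"),
                         response.name = "BIO",
                         response.type = "continuous",
                         shrinkage = 0.01,                  # larger step size than the 0.001 default
                         interaction.depth = 5,             # shallower trees than the default 10
                         bag.fraction = 0.5,
                         n.minobsinnode = 10,
                         seed = 38)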
Friedman (2001, 2002) and Ridgeway (1999) provide guidelines on appropriate settings for these model fitting options. Also, unlike Random Forest models, Stochastic Gradient Boosting carries a penalty for using too many trees. The default behavior of model.build() is to increase the number of trees 100 at a time until the model stops improving, then call the gbm function gbm.perf(method = "OOB") to select the best number of iterations. Alternatively, the model.build() argument n.trees can be used to set some large number of trees to be calculated all at once and, again, gbm.perf(method = "OOB") will be used to select the best number of trees. Note that the gbm package warns that "OOB generally underestimates the optimal number of iterations although predictive performance is reasonably competitive." The gbm package offers two alternative techniques for calculating the best number of trees, but these are not yet implemented in the ModelMap package, as they require the use of a formula interface for model building.
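
To make the tree-selection mechanics concrete, the following sketch reproduces the general idea using the gbm package directly. It is an illustration under the same hypothetical data assumptions as the sketch above (a data frame qdata with columns BIO, TCB, TCG, TCW), not ModelMap's actual internal code:

library(gbm)

fit <- gbm(BIO ~ TCB + TCG + TCW, data = qdata,
           distribution = "gaussian",     # continuous response
           n.trees = 100,                 # start with a small expansion
           shrinkage = 0.001,
           interaction.depth = 10,
           bag.fraction = 0.5)

# Grow the expansion 100 trees at a time (a fixed number of steps here
# for brevity; model.build() stops when the OOB error rate stops improving):
for (i in 1:9) fit <- gbm.more(fit, n.new.trees = 100)

# Let the OOB criterion pick the best iteration; gbm itself warns that
# OOB generally underestimates the optimal number of iterations:
best.iter <- gbm.perf(fit, method = "OOB", plot.it = FALSE)
best.iter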

References

Breiman, L. (2001). Random Forests. Machine Learning, 45:5-32.
Friedman, J.H. (2001). Greedy function approximation: a gradient boosting machine. Ann. Stat., 29(5):1189-1232.
Friedman, J.H. (2002). Stochastic gradient boosting. Comput. Stat. Data An., 38(4):367-378.
Liaw, A. and Wiener, M. (2002). Classification and Regression by randomForest. R News, 2(3):18-22.
Ridgeway, G. (1999). The state of boosting. Comp. Sci. Stat., 31:172-181.

See Also

get.test, model.diagnostics, model.mapmake

Examples

###########################################################################
############################# Run this set up code: #######################
###########################################################################

# set seed:
seed=38

# Define training and test files:

qdata.trainfn = system.file("external", "helpexamples","DATATRAIN.csv", package = "ModelMap")

# Define folder for all output:
folder=getwd()	


###########################################################################
############## Pick one of the following sets of definitions: #############
###########################################################################


########## Continuous Response, Continuous Predictors ############

#file name to store model:
MODELfn="RF_Bio_TC"				

#predictors:
predList=c("TCB","TCG","TCW")	

#define which predictors are categorical:
predFactor=FALSE	

# Response name and type:
response.name="BIO"
response.type="continuous"


########## binary Response, Continuous Predictors ############

#file name to store model:
MODELfn="RF_CONIFTYP_TC"				

#predictors:
predList=c("TCB","TCG","TCW")		

#define which predictors are categorical:
predFactor=FALSE

# Response name and type:
response.name="CONIFTYP"

# This variable is 1 if a conifer or mixed conifer type is present, 
# otherwise 0.

response.type="binary"


########## Continuous Response, Categorical Predictors ############

# In this example, NLCD is a categorical predictor.
#
# You must decide what you want to happen if there are categories
# present in the data to be predicted (either the validation/test set
# or in the image file) that were not present in the original training data.
# Choices:
#       na.action = "na.omit"
#                    Any validation datapoint or image pixel with a value for any
#                    categorical predictor not found in the training data will be
#                    returned as NA.
#       na.action = "na.roughfix"
#                    Any validation datapoint or image pixel with a value for any
#                    categorical predictor not found in the training data will have
#                    the most common category for that predictor substituted,
#                    and a prediction will be made.

# You must also let R know which of the predictors are categorical, in other
# words, which ones R needs to treat as factors.
# This vector must be a subset of the predictors given in predList.
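
# A hypothetical illustration (not run): to choose the "na.roughfix"
# behavior explicitly rather than through the GUI, na.action can be
# passed in the model.build() call, e.g.:
#
#   model.obj <- model.build( model.type="RF", qdata.trainfn=qdata.trainfn,
#                             folder=folder, MODELfn=MODELfn, predList=predList,
#                             predFactor=predFactor, response.name=response.name,
#                             response.type=response.type,
#                             na.action="na.roughfix" )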

#file name to store model:
MODELfn="RF_BIO_TCandNLCD"			

#predictors:
predList=c("TCB","TCG","TCW","NLCD")

#define which predictors are categorical:
predFactor=c("NLCD")

# Response name and type:
response.name="BIO"
response.type="continuous"



###########################################################################
########################### build model: ##################################
###########################################################################


### create model before batching (only run this code once ever!) ###

model.obj = model.build( model.type="RF",
                       qdata.trainfn=qdata.trainfn,
                       folder=folder,		
                       MODELfn=MODELfn,
                       predList=predList,
                       predFactor=predFactor,
                       response.name=response.name,
                       response.type=response.type,
                       seed=seed
)
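
### inspect the returned model object (optional) ###

# For an RF model, model.obj should be a randomForest object, since the
# Details section notes that RF is implemented through the randomForest
# package, so standard randomForest tools apply, for example:

model.obj                            # OOB error summary
randomForest::importance(model.obj)  # variable importance scores
randomForest::varImpPlot(model.obj)  # dotchart of variable importance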
