mlr_pipeops_encodelmer


Impact Encoding with Random Intercept Models

Encodes columns of type factor, character and ordered.

PipeOpEncodeLmer() converts the factor levels of each factorial column to the estimated coefficients of a simple random intercept model. Models are fitted with the glmer function of the lme4 package and are of the form target ~ 1 + (1 | factor). If the task is a regression task, the numeric target variable is used as the dependent variable and the factor for grouping. If the task is a binary classification task, the binary target variable is used as the dependent variable and the factor for grouping. If the target variable is multiclass, binary "one vs. rest" models are fitted for each level of the target variable.

For training, multiple models can be estimated in a cross-validation scheme to ensure that the same factor level does not always result in an identical value in the converted numerical feature. For prediction, a global model (fitted on all observations during training) is used for each factor. New factor levels encountered during prediction are converted to the value of the global model's intercept coefficient. Missing values (NA) are ignored by the PipeOp.
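The underlying per-factor model fit can be sketched directly with lme4. This is a minimal illustration of a random intercept fit for a regression-style target, not the PipeOp's internal code; it assumes the lme4 package is installed and uses lmer for the gaussian case:

```r
library("lme4")

set.seed(1)
# Toy data: a grouping factor with three levels and a numeric target.
d = data.frame(
  g = factor(rep(c("a", "b", "c"), each = 10)),
  y = c(rnorm(10, 1), rnorm(10, 2), rnorm(10, 3))
)

# Random intercept model of the form target ~ 1 + (1 | factor).
m = lmer(y ~ 1 + (1 | g), data = d)

# Per-level encodings: global intercept shifted by each level's
# random intercept, one row per factor level.
coef(m)$g

# An unseen factor level would receive the global intercept alone.
fixef(m)[["(Intercept)"]]
```

The per-level rows of `coef(m)$g` correspond to the numeric values a level is mapped to, while `fixef(m)[["(Intercept)"]]` is the fallback for new levels at prediction time.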

Use the PipeOpTaskPreproc $affect_columns functionality to only encode a subset of columns, or only encode columns of a certain type.
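Restricting the encoding to columns of one type could look like the following sketch, which assumes mlr3pipelines' selector_type() helper for the affect_columns hyperparameter:

```r
library("mlr3pipelines")

# Encode only columns of type factor; leave character and
# ordered columns untouched.
poe = PipeOpEncodeLmer$new(
  param_vals = list(affect_columns = selector_type("factor"))
)
```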

Keywords
datasets
Format

R6Class object inheriting from PipeOpTaskPreprocSimple/PipeOpTaskPreproc/PipeOp.

Construction

PipeOpEncodeLmer$new(id = "encodelmer", param_vals = list())

  • id :: character(1) Identifier of resulting object, default "encodelmer".

  • param_vals :: named list List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction. Default list().

Input and Output Channels

Input and output channels are inherited from PipeOpTaskPreproc.

The output is the input Task with all affected factor, character or ordered columns encoded as numeric columns.

State

The $state is a named list with the $state elements inherited from PipeOpTaskPreproc, as well as:

  • target_levels :: character Levels of the target columns.

  • control :: named list List of coefficients learned via glmer.

Parameters

  • fast_optim :: logical(1) Initialized to TRUE. If TRUE (default), a faster optimizer (up to 50 percent faster) from the nloptr package is used when fitting the lmer models. This optimizer uses additional stopping criteria, which can give suboptimal results.
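Setting this hyperparameter at construction time, here to disable the faster optimizer, could look like this sketch:

```r
library("mlr3pipelines")

# Use lme4's default optimizer instead of the faster nloptr one.
poe = PipeOpEncodeLmer$new(param_vals = list(fast_optim = FALSE))

# Inspect the stored hyperparameter value.
poe$param_set$values$fast_optim
```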

Internals

Uses lme4::glmer(). This is relatively inefficient for features with a large number of levels.

Methods

Only methods inherited from PipeOpTaskPreproc/PipeOp.

See Also

Other PipeOps: PipeOpEnsemble, PipeOpImpute, PipeOpTaskPreproc, PipeOp, mlr_pipeops_boxcox, mlr_pipeops_branch, mlr_pipeops_chunk, mlr_pipeops_classbalancing, mlr_pipeops_classifavg, mlr_pipeops_classweights, mlr_pipeops_colapply, mlr_pipeops_collapsefactors, mlr_pipeops_copy, mlr_pipeops_encodeimpact, mlr_pipeops_encode, mlr_pipeops_featureunion, mlr_pipeops_filter, mlr_pipeops_fixfactors, mlr_pipeops_histbin, mlr_pipeops_ica, mlr_pipeops_imputehist, mlr_pipeops_imputemean, mlr_pipeops_imputemedian, mlr_pipeops_imputenewlvl, mlr_pipeops_imputesample, mlr_pipeops_kernelpca, mlr_pipeops_learner, mlr_pipeops_missind, mlr_pipeops_modelmatrix, mlr_pipeops_mutate, mlr_pipeops_nop, mlr_pipeops_pca, mlr_pipeops_quantilebin, mlr_pipeops_regravg, mlr_pipeops_removeconstants, mlr_pipeops_scalemaxabs, mlr_pipeops_scalerange, mlr_pipeops_scale, mlr_pipeops_select, mlr_pipeops_smote, mlr_pipeops_spatialsign, mlr_pipeops_subsample, mlr_pipeops_unbranch, mlr_pipeops_yeojohnson, mlr_pipeops

Aliases
  • mlr_pipeops_encodelmer
  • PipeOpEncodeLmer
Examples
library("mlr3")
library("mlr3pipelines")
poe = po("encodelmer")

task = TaskClassif$new("task",
  data.table::data.table(
    x = factor(c("a", "a", "a", "b", "b")),
    y = c("a", "a", "b", "b", "b")),
  "x")

poe$train(list(task))[[1]]$data()

poe$state
Documentation reproduced from package mlr3pipelines, version 0.1.1, License: LGPL-3
