recipes (version 0.1.6)

step_normalize: Center and scale numeric data

Description

step_normalize creates a specification of a recipe step that will normalize numeric data to have a standard deviation of one and a mean of zero.

Usage

step_normalize(recipe, ..., role = NA, trained = FALSE, means = NULL,
  sds = NULL, na_rm = TRUE, skip = FALSE,
  id = rand_id("normalize"))

# S3 method for step_normalize
tidy(x, ...)

Arguments

recipe

A recipe object. The step will be added to the sequence of operations for this recipe.

...

One or more selector functions to choose which variables are affected by the step. See selections() for more details. For the tidy method, these are not currently used.

role

Not used by this step since no new variables are created.

trained

A logical to indicate if the quantities for preprocessing have been estimated.

means

A named numeric vector of means. This is NULL until computed by prep.recipe().

sds

A named numeric vector of standard deviations. This is NULL until computed by prep.recipe().

na_rm

A logical value indicating whether NA values should be removed when computing the standard deviation and mean.

skip

A logical. Should the step be skipped when the recipe is baked by bake.recipe()? While all operations are baked when prep.recipe() is run, some operations may not be able to be conducted on new data (e.g. processing the outcome variable(s)). Care should be taken when using skip = TRUE as it may affect the computations for subsequent operations. A brief sketch of this behavior follows the argument list.

id

A character string that is unique to this step to identify it.

x

A step_normalize object.
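
As a sketch of the skip argument only (reusing the biomass data from the Examples below and a step_log() transformation of the outcome for illustration), a skipped step is applied while prep() trains the recipe but not when bake() is applied to new data:

library(recipes)
data(biomass)

biomass_tr <- biomass[biomass$dataset == "Training", ]
biomass_te <- biomass[biomass$dataset == "Testing", ]

rec_skip <- recipe(HHV ~ carbon + hydrogen, data = biomass_tr) %>%
  step_log(HHV, skip = TRUE) %>%       # outcome transformed at training time only
  step_normalize(carbon, hydrogen)

prepped <- prep(rec_skip, training = biomass_tr)

juice(prepped)              # HHV is on the log scale in the retained training data
bake(prepped, biomass_te)   # HHV is left untouched when baking new data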

Value

An updated version of recipe with the new step added to the sequence of existing steps (if any). For the tidy method, a tibble with columns terms (the selectors or variables selected), value (the standard deviations and means), and statistic (the type of value).

Details

Centering data means that the average of a variable is subtracted from the data. Scaling data means that the standard deviation of a variable is divided out of the data. step_normalize estimates the variable standard deviations and means from the data used in the training argument of prep.recipe. bake.recipe then applies the centering and scaling to new data sets using these estimates.
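
For a single numeric vector, the operation is equivalent to the base R computation below (the vector x is made up for illustration):

x <- c(1.2, 3.5, 2.8, 4.1, 0.9)
normalized <- (x - mean(x)) / sd(x)   # subtract the mean, divide by the standard deviation
mean(normalized)                      # effectively zero
sd(normalized)                        # exactly one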

Examples

library(recipes)
data(biomass)

biomass_tr <- biomass[biomass$dataset == "Training",]
biomass_te <- biomass[biomass$dataset == "Testing",]

rec <- recipe(HHV ~ carbon + hydrogen + oxygen + nitrogen + sulfur,
              data = biomass_tr)

# declare which predictors to center and scale
norm_trans <- rec %>%
  step_normalize(carbon, hydrogen)

# estimate the means and standard deviations from the training set
norm_obj <- prep(norm_trans, training = biomass_tr)

# apply those estimates to the test set
transformed_te <- bake(norm_obj, biomass_te)

# compare the original and normalized test set values
biomass_te[1:10, names(transformed_te)]
transformed_te

# the tidy method before and after the statistics are estimated
tidy(norm_trans, number = 1)
tidy(norm_obj, number = 1)
