sparklyr (version 0.3.0)

sdf_mutate: Mutate a Spark DataFrame

Description

Use Spark's feature transformers to mutate a Spark DataFrame.

Usage

sdf_mutate(.data, ...)
sdf_mutate_(.data, ..., .dots)

Arguments

.data
A tbl_spark.
...
Named arguments, mapping new column names to the transformation to be applied.
.dots
A named list, mapping output names to transformations.
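
For instance, the two forms below express the same mutation. This is a minimal sketch assuming a live connection sc and the "beaver" table from the Examples section; the exact .dots format (lazyeval-style formulas) is an assumption based on the underscore-verb convention dplyr and sparklyr followed at the time:

# NSE form: named arguments map new column names to transformations
beaver_tbl %>%
  sdf_mutate(warm = ft_binarizer(temp, 37))

# SE form: the same mapping supplied as a named list via .dots
beaver_tbl %>%
  sdf_mutate_(.dots = list(warm = ~ ft_binarizer(temp, 37)))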

Transforming Spark DataFrames

The family of functions prefixed with sdf_ generally access the Scala Spark DataFrame API directly, as opposed to the dplyr interface, which uses Spark SQL. These functions will 'force' any pending SQL in a dplyr pipeline, so that the returned tbl_spark object no longer carries the attached 'lazy' SQL operations. Note that the underlying Spark DataFrame still executes its operations lazily: even though the pending operations are no longer exposed at the R level, they will only be executed when you explicitly collect() the table.
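
As a brief illustration of this forcing behavior (a sketch assuming the sc connection and beaver_tbl table from the Examples section):

beaver_tbl %>%
  filter(activ == 1) %>%                          # pending SQL in the dplyr pipeline
  sdf_mutate(warm = ft_binarizer(temp, 37)) %>%   # forces the SQL; returns a tbl_spark
  collect()                                       # Spark executes the remaining work here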

See Also

Other feature transformation routines: ft_binarizer, ft_bucketizer, ft_discrete_cosine_transform, ft_elementwise_product, ft_index_to_string, ft_one_hot_encoder, ft_quantile_discretizer, ft_sql_transformer, ft_string_indexer, ft_vector_assembler

Examples

## Not run: 
library(sparklyr)
library(dplyr)
sc <- spark_connect(master = "local")

# using the 'beaver1' dataset, binarize the 'temp' column
data(beavers, package = "datasets")
beaver_tbl <- copy_to(sc, beaver1, "beaver")
mutated_tbl <- beaver_tbl %>%
  mutate(squared = temp ^ 2) %>%
  sdf_mutate(warm = ft_binarizer(squared, 1000)) %>%
  sdf_register("mutated")

# view our newly constructed tbl
head(mutated_tbl)

# note that we now have two separate tbls registered
dplyr::src_tbls(sc)
## End(Not run)
