Use Spark's feature transformers to mutate a Spark DataFrame.
Usage

sdf_mutate(.data, ...)

sdf_mutate_(.data, ..., .dots)

Arguments

.data
    A spark_tbl.

...
    Named arguments, mapping new column names to the transformation to be applied.

.dots
    A named list, mapping output names to transformations.
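For illustration, a minimal sketch of the two calling conventions. The connection sc and the table iris_tbl below are assumptions for this sketch, not part of this page; the exact form of the transformations in .dots (shown here as one-sided formulas, per lazyeval conventions) is an assumption as well.

library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")   # assumed setup, not part of this page
iris_tbl <- copy_to(sc, iris, "iris")   # hypothetical example table

# NSE variant: named arguments map new column names to transformations
iris_tbl %>%
  sdf_mutate(petal_wide = ft_binarizer(Petal_Width, 1.0))

# SE variant: the same mapping passed as a named list via .dots
# (one-sided formulas, an assumption based on lazyeval conventions)
iris_tbl %>%
  sdf_mutate_(.dots = list(petal_wide = ~ ft_binarizer(Petal_Width, 1.0)))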
Details

The family of functions prefixed with sdf_ generally access the Scala Spark DataFrame API directly, as opposed to the dplyr interface, which uses Spark SQL. These functions will 'force' any pending SQL in a dplyr pipeline, such that the resulting tbl_spark object returned will no longer have the attached 'lazy' SQL operations. Note that the underlying Spark DataFrame still executes its operations lazily, so even though the pending set of operations is no longer exposed at the R level, those operations will only be executed when you explicitly collect() the table.
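As a sketch of the behavior described above (the connection setup and table names here are assumptions for illustration, not part of this page):

library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")        # assumed setup
mtcars_tbl <- copy_to(sc, mtcars, "mtcars")  # hypothetical example table

# dplyr verbs are lazy: this only builds up Spark SQL
lazy_tbl <- mtcars_tbl %>% filter(cyl > 4)

# an sdf_ function forces the pending SQL, so the returned tbl_spark
# carries no attached 'lazy' dplyr operations...
forced_tbl <- lazy_tbl %>% sdf_mutate(heavy = ft_binarizer(wt, 3))

# ...but Spark itself still evaluates lazily: computation happens at collect()
local_df <- collect(forced_tbl)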
See Also

Other feature transformation routines: ft_binarizer, ft_bucketizer, ft_discrete_cosine_transform, ft_elementwise_product, ft_index_to_string, ft_one_hot_encoder, ft_quantile_discretizer, ft_sql_transformer, ft_string_indexer, ft_vector_assembler
Examples

# NOT RUN {
# example assumes an active Spark connection, e.g.:
# library(sparklyr); library(dplyr)
# sc <- spark_connect(master = "local")

# using the 'beaver1' dataset, binarize the 'temp' column
data(beavers, package = "datasets")
beaver_tbl <- copy_to(sc, beaver1, "beaver")

mutated_tbl <- beaver_tbl %>%
  mutate(squared = temp ^ 2) %>%
  sdf_mutate(warm = ft_binarizer(squared, 1000)) %>%
  sdf_register("mutated")

# view our newly constructed tbl
head(mutated_tbl)

# note that we have two separate tbls registered
dplyr::src_tbls(sc)
# }