sparklyr (version 0.5.2)

sdf_copy_to: Copy an Object into Spark

Description

Copy an object into Spark, and return an R object wrapping the copied object (typically, a Spark DataFrame).

Usage

sdf_copy_to(sc, x, name, memory, repartition, overwrite, ...)

sdf_import(x, sc, name, memory, repartition, overwrite, ...)

Arguments

sc

The associated Spark connection.

x

An R object from which a Spark DataFrame can be generated.

name

The name to assign to the copied table in Spark.

memory

Boolean; should the table be cached in memory?

repartition

The number of partitions to use when distributing the table across the Spark cluster. The default (0) can be used to avoid partitioning.

overwrite

Boolean; overwrite an existing table with the same name if one already exists?

...

Optional arguments, passed to implementing methods.
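A hypothetical call that spells out each of these arguments might look as follows; the connection object, data set, and argument values are illustrative assumptions rather than documented defaults.

sc <- spark_connect(master = "local")

# Copy mtcars into Spark, cache it in memory, spread it across 4 partitions,
# and overwrite any existing table named "mtcars_tbl"
mtcars_tbl <- sdf_copy_to(
  sc,
  mtcars,
  name = "mtcars_tbl",
  memory = TRUE,
  repartition = 4,
  overwrite = TRUE
)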

Advanced Usage

sdf_copy_to is an S3 generic that, by default, dispatches to sdf_import. Package authors who would like to support sdf_copy_to for a custom object type can do so by implementing the associated sdf_import method for that class.
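As a minimal sketch, a method for a hypothetical class "my_records" (assumed here to store its underlying data frame in a field named data) could look like this; the class and its structure are assumptions made only for this illustration.

# Hypothetical S3 method: import objects of class "my_records" by extracting
# their underlying data frame and delegating back to the sdf_copy_to generic,
# which then dispatches to the default sdf_import method for data frames.
sdf_import.my_records <- function(x, sc, name, memory, repartition, overwrite, ...) {
  df <- as.data.frame(x$data)
  sdf_copy_to(sc, df, name = name, memory = memory,
              repartition = repartition, overwrite = overwrite, ...)
}

In a package, such a method would also need to be registered, for example with an S3method() directive in the NAMESPACE file.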

See Also

Other Spark data frames: sdf_partition, sdf_predict, sdf_register, sdf_sample, sdf_sort

Examples

## Not run:
# Connect to a Spark cluster (replace HOST:PORT with your master's address)
sc <- spark_connect(master = "spark://HOST:PORT")

# Copy the iris data set to Spark, returning a reference to the new Spark DataFrame
sdf_copy_to(sc, iris)

## End(Not run)
