copy_to.spark_connection

Copy an R Data Frame to Spark

Copy an R data.frame to Spark, and return a reference to the generated Spark DataFrame as a tbl_spark. The returned object will act as a dplyr-compatible interface to the underlying Spark table.

Usage
# S3 method for spark_connection
copy_to(dest, df,
  name = spark_table_name(substitute(df)), overwrite = FALSE,
  memory = TRUE, repartition = 0L, ...)
Arguments
dest

A spark_connection.

df

An R data.frame.

name

The name to assign to the copied table in Spark.

overwrite

Boolean; if a table with the given name already exists, should it be overwritten?

memory

Boolean; should the table be cached in memory?

repartition

The number of partitions to use when distributing the table across the Spark cluster. The default (0) can be used to avoid partitioning.

...

Optional arguments; currently unused.

Value

A tbl_spark, representing a dplyr-compatible interface to a Spark DataFrame.
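A minimal usage sketch, assuming a local Spark installation is available; the connection settings and table name here are illustrative, not required values:

```r
library(sparklyr)
library(dplyr)

# Connect to a local Spark instance
sc <- spark_connect(master = "local")

# Copy the built-in mtcars data.frame into Spark,
# caching it in memory and overwriting any existing table of the same name
mtcars_tbl <- copy_to(sc, mtcars, name = "mtcars_spark",
                      overwrite = TRUE, memory = TRUE)

# The returned tbl_spark supports dplyr verbs, which are
# translated to Spark SQL and executed on the cluster
mtcars_tbl %>%
  group_by(cyl) %>%
  summarise(avg_mpg = mean(mpg))

spark_disconnect(sc)
```

Note that `copy_to` serializes the data frame through the driver, so it is intended for small, local data; larger datasets are better read directly into Spark (for example with `spark_read_csv`).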

Aliases
  • copy_to.spark_connection
Documentation reproduced from package sparklyr, version 0.6.1, License: Apache License 2.0 | file LICENSE
