spark_write_parquet

Write a Spark DataFrame to a Parquet file

Serialize a Spark DataFrame to the Parquet format.

Usage
spark_write_parquet(x, path, mode = NULL, options = list())
Arguments
x

A Spark DataFrame or dplyr operation

path

The path to the file. Needs to be accessible from the cluster. Supports the "hdfs://", "s3n://" and "file://" protocols.

mode

Specifies the behavior when data or table already exists. Supported values include: 'error', 'append', 'overwrite' and 'ignore'.

options

A list of strings with additional options. See http://spark.apache.org/docs/latest/sql-programming-guide.html#configuration.

See Also

Other Spark serialization routines: spark_load_table, spark_read_csv, spark_read_json, spark_read_parquet, spark_save_table, spark_write_csv, spark_write_json
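
Examples

A minimal sketch of typical usage, assuming a local Spark installation reachable via spark_connect(master = "local"); the output paths under "file:///tmp/" are illustrative only.

library(sparklyr)

# Connect to a local Spark instance
sc <- spark_connect(master = "local")

# Copy an R data frame into Spark
iris_tbl <- copy_to(sc, iris, "iris", overwrite = TRUE)

# Write the Spark DataFrame to Parquet; mode = "overwrite" replaces any existing data at the path
spark_write_parquet(iris_tbl, path = "file:///tmp/iris-parquet", mode = "overwrite")

# Writer options are passed through as strings; "compression" is a standard Parquet write option in Spark
spark_write_parquet(iris_tbl, path = "file:///tmp/iris-parquet-gz",
                    mode = "overwrite", options = list(compression = "gzip"))

# Read the data back to verify the round trip
iris_parquet <- spark_read_parquet(sc, name = "iris_parquet", path = "file:///tmp/iris-parquet")

spark_disconnect(sc)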

Aliases
  • spark_write_parquet
Documentation reproduced from package sparklyr, version 0.5.4, License: Apache License 2.0 | file LICENSE
