spark_write_csv
From sparklyr v0.2.32
by Javier Luraschi
Write a Spark DataFrame to a CSV
Usage
spark_write_csv(x, path, header = TRUE, delimiter = ",", quote = "\"", escape = "\\", charset = "UTF-8", null_value = NULL, options = list())
Arguments
- x
- A Spark DataFrame or dplyr operation
- path
- The path to the file. Needs to be accessible from the cluster. Supports: "hdfs://" or "s3n://"
- header
- Should the first row of data be used as a header? Defaults to TRUE.
- delimiter
- The character used to delimit each column, defaults to ",".
- quote
- The character used as a quote, defaults to "\"".
- escape
- The character used to escape other characters, defaults to "\\".
- charset
- The character set, defaults to "UTF-8".
- null_value
- The character to use for null values, defaults to NULL.
- options
- A list of strings with additional options.
See Also
Other reading and writing data: spark_read_csv, spark_read_json, spark_read_parquet, spark_write_json, spark_write_parquet
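Examples
A minimal usage sketch (not part of the package documentation): it assumes a local Spark installation and writes a copy of the built-in mtcars data set to a temporary directory. The connection settings and output path are illustrative only; in a cluster you would typically pass an "hdfs://" or "s3n://" path instead.

library(sparklyr)
library(dplyr)

# Connect to a local Spark instance (illustrative; adjust master/config as needed)
sc <- spark_connect(master = "local")

# Copy a local data frame into Spark
mtcars_tbl <- copy_to(sc, mtcars, "mtcars_spark")

# Write the Spark DataFrame as CSV with a header row and comma delimiter
spark_write_csv(mtcars_tbl, path = file.path(tempdir(), "mtcars-csv"),
                header = TRUE, delimiter = ",")

spark_disconnect(sc)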