Writes a Spark DataFrame stream into a tabular (typically comma-separated) stream.
stream_write_csv(x, path, mode = c("append", "complete", "update"),
  trigger = stream_trigger_interval(),
  checkpoint = file.path(path, "checkpoint"),
  header = TRUE, delimiter = ",", quote = "\"", escape = "\\",
  charset = "UTF-8", null_value = NULL, options = list(), ...)
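For example, a minimal sketch of reading and re-writing a CSV stream with sparklyr (the local connection and the "source-dir" and "output-dir" paths are hypothetical placeholders):

library(sparklyr)

sc <- spark_connect(master = "local")

# Read a directory of CSV files as a streaming source (hypothetical path).
csv_stream <- stream_read_csv(sc, path = "source-dir")

# Write the stream back out as CSV; with no checkpoint argument given, the
# checkpoint location defaults to file.path("output-dir", "checkpoint").
stream_write_csv(csv_stream, path = "output-dir")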
x: A Spark DataFrame or dplyr operation.

path: The path to the file. Needs to be accessible from the cluster. Supports the "hdfs://", "s3a://" and "file://" protocols.

mode: Specifies how data is written to a streaming sink. Valid values are "append", "complete" or "update".

trigger: The trigger for the stream query, defaults to micro-batches running every 5 seconds. See stream_trigger_interval and stream_trigger_continuous; an interval trigger is shown in the sketch after these argument descriptions.

checkpoint: The location where the system will write all the checkpoint information to guarantee end-to-end fault-tolerance.

header: Should the first row of data be used as a header? Defaults to TRUE.

delimiter: The character used to delimit each column, defaults to ",".

quote: The character used as a quote. Defaults to '"'.

escape: The character used to escape other characters, defaults to "\".

charset: The character set, defaults to "UTF-8".

null_value: The character to use for default values, defaults to NULL.

options: A list of strings with additional options.

...: Optional arguments; currently unused.
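A sketch of passing the trigger, checkpoint and formatting arguments explicitly (the HDFS paths, the 10-second interval and the tab delimiter are illustrative only, and csv_stream is assumed to be an existing streaming DataFrame):

stream_write_csv(
  csv_stream,
  path = "hdfs://namenode/tmp/out",                      # hypothetical output location
  mode = "append",
  trigger = stream_trigger_interval(interval = 10000),   # micro-batch every 10 seconds
  checkpoint = "hdfs://namenode/tmp/out-checkpoint",     # hypothetical checkpoint dir
  header = TRUE,
  delimiter = "\t",
  charset = "UTF-8"
)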
Other Spark stream serialization: stream_read_csv, stream_read_json, stream_read_kafka, stream_read_orc, stream_read_parquet, stream_read_text, stream_write_json, stream_write_kafka, stream_write_memory, stream_write_orc, stream_write_parquet, stream_write_text