sparklyr (version 0.9.3)

stream_write_memory: Write Memory Stream

Description

Writes a Spark DataFrame stream to an in-memory stream sink.

Usage

stream_write_memory(x, name = random_string("sparklyr_tmp_"),
  mode = c("append", "complete", "update"),
  trigger = stream_trigger_interval(),
  checkpoint = file.path("checkpoints", name, random_string("")),
  options = list(), ...)

Arguments

x

A Spark DataFrame or dplyr operation.

name

The name to assign to the newly generated stream.

mode

Specifies how data is written to a streaming sink. Valid values are "append", "complete" or "update".

trigger

The trigger for the stream query, defaults to micro-batches running every 5 seconds. See stream_trigger_interval and stream_trigger_continuous.

checkpoint

The location where the system will write all the checkpoint information to guarantee end-to-end fault-tolerance.

options

A list of strings with additional options.

...

Optional arguments; currently unused.
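
As a minimal sketch (not part of the official usage above), the arguments can also be set explicitly. The "iris-in" source path, the "iris_out" sink name, and the 5000 millisecond trigger interval are illustrative assumptions, and sc is assumed to be an existing connection from spark_connect:

stream_read_csv(sc, "iris-in") %>%
  stream_write_memory(
    name = "iris_out",                                   # name of the in-memory table
    mode = "append",                                     # only write newly arriving rows
    trigger = stream_trigger_interval(interval = 5000),  # micro-batches every 5 seconds
    checkpoint = file.path("checkpoints", "iris_out")    # fault-tolerance state location
  )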

See Also

Other Spark stream serialization: stream_read_csv, stream_read_json, stream_read_kafka, stream_read_orc, stream_read_parquet, stream_read_text, stream_write_csv, stream_write_json, stream_write_kafka, stream_write_orc, stream_write_parquet, stream_write_text

Examples

# NOT RUN {
library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")

dir.create("iris-in")
write.csv(iris, "iris-in/iris.csv", row.names = FALSE)

stream <- stream_read_csv(sc, "iris-in") %>% stream_write_memory()

stream_stop(stream)
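
# A hedged sketch (assumption: the memory sink registers an in-memory
# table under the sink's name, which can then be queried with dplyr::tbl()).
# Naming the sink explicitly makes that table easy to look up; "iris_out"
# is an illustrative name, not part of the original example.
stream_named <- stream_read_csv(sc, "iris-in") %>%
  stream_write_memory(name = "iris_out")

tbl(sc, "iris_out")

stream_stop(stream_named)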

# }
