
Writes a Spark DataFrame stream into a Kafka stream.
stream_write_kafka(
  x,
  mode = c("append", "complete", "update"),
  trigger = stream_trigger_interval(),
  checkpoint = file.path("checkpoints", random_string("")),
  options = list(),
  partition_by = NULL,
  ...
)
x: A Spark DataFrame or dplyr operation.
mode: Specifies how data is written to a streaming sink. Valid values are "append", "complete" or "update".
trigger: The trigger for the stream query, defaults to micro-batches running every 5 seconds. See stream_trigger_interval and stream_trigger_continuous; a usage sketch follows this argument list.
checkpoint: The location where the system will write all the checkpoint information to guarantee end-to-end fault tolerance.
options: A list of strings with additional options.
partition_by: Partitions the output by the given list of columns.
...: Optional arguments; currently unused.
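A minimal sketch of how these arguments might be combined, assuming a connection sc created with the Kafka package and a streaming DataFrame input (for example from stream_read_kafka()); the broker address, topic name, trigger interval, and checkpoint path are illustrative placeholders, not values from this documentation:

library(sparklyr)

# Assumes `sc` is an existing connection created with packages = "kafka",
# and `input` is a streaming DataFrame, e.g. from stream_read_kafka()
stream <- stream_write_kafka(
  input,
  mode = "append",                                     # or "complete" / "update"
  trigger = stream_trigger_interval(interval = 2000),  # micro-batch every 2 seconds
  checkpoint = "checkpoints/kafka-sink",               # checkpoint location for fault tolerance
  options = list(
    kafka.bootstrap.servers = "localhost:9092",        # placeholder broker address
    topic = "topic-out"                                 # placeholder destination topic
  )
)

stream_stop(stream)  # stop the streaming query when done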
Please note that Kafka requires installing the appropriate package by setting the packages parameter to "kafka" in spark_connect().
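For instance, a local connection that pulls in the Kafka integration packages might look like the following (the master is a placeholder):

library(sparklyr)

# packages = "kafka" makes spark_connect() fetch the Spark-Kafka
# integration packages needed by the Kafka source and sink
sc <- spark_connect(master = "local", packages = "kafka")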
Other Spark stream serialization: stream_read_csv(), stream_read_delta(), stream_read_json(), stream_read_kafka(), stream_read_orc(), stream_read_parquet(), stream_read_socket(), stream_read_text(), stream_write_console(), stream_write_csv(), stream_write_delta(), stream_write_json(), stream_write_memory(), stream_write_orc(), stream_write_parquet(), stream_write_text()
# NOT RUN {
library(sparklyr)

# Connect with the Kafka package so the Kafka source and sink are available
sc <- spark_connect(master = "local", version = "2.3", packages = "kafka")

# Read from one Kafka topic and write the stream back out to another
read_options <- list(kafka.bootstrap.servers = "localhost:9092", subscribe = "topic1")
write_options <- list(kafka.bootstrap.servers = "localhost:9092", topic = "topic2")

stream <- stream_read_kafka(sc, options = read_options) %>%
  stream_write_kafka(options = write_options)

# Stop the streaming query
stream_stop(stream)
# }