Read a tabular data file into a Spark DataFrame.
Usage:

spark_read_csv(
  sc,
  name = NULL,
  path = name,
  header = TRUE,
  columns = NULL,
  infer_schema = is.null(columns),
  delimiter = ",",
  quote = "\"",
  escape = "\\",
  charset = "UTF-8",
  null_value = NULL,
  options = list(),
  repartition = 0,
  memory = TRUE,
  overwrite = TRUE,
  ...
)
Arguments:

sc: A spark_connection.

name: The name to assign to the newly generated table.

path: The path to the file. Needs to be accessible from the cluster. Supports the "hdfs://", "s3a://" and "file://" protocols.

header: Boolean; should the first row of data be used as a header? Defaults to TRUE.

columns: A vector of column names or a named vector of column types. If specified, the elements can be "binary", "boolean", "byte", "character", "date", "double", "float", "integer", "numeric", "short", or "timestamp".

infer_schema: Boolean; should column types be automatically inferred? Requires one extra pass over the data. Defaults to is.null(columns).

delimiter: The character used to delimit each column. Defaults to ','.

quote: The character used as a quote. Defaults to '"'.

escape: The character used to escape other characters. Defaults to '\'.

charset: The character set. Defaults to "UTF-8".

null_value: The character to use for null, or missing, values. Defaults to NULL.

options: A list of strings with additional options.

repartition: The number of partitions used to distribute the generated table. Use 0 (the default) to avoid partitioning.

memory: Boolean; should the data be loaded eagerly into memory? (That is, should the table be cached?)

overwrite: Boolean; overwrite the table with the given name if it already exists?

...: Optional arguments; currently unused.
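Examples:

A minimal sketch of reading a CSV with and without an explicit schema. The connection master, file path, table names, and column names below are illustrative assumptions, not part of this page.

library(sparklyr)

# Connect to Spark (a local master is assumed here for illustration)
sc <- spark_connect(master = "local")

# Let Spark infer column types (costs one extra pass over the data)
flights_tbl <- spark_read_csv(
  sc,
  name = "flights",
  path = "file:///tmp/flights.csv",
  header = TRUE,
  infer_schema = TRUE
)

# Or declare column types up front and skip the inference pass
flights_typed <- spark_read_csv(
  sc,
  name = "flights_typed",
  path = "file:///tmp/flights.csv",
  columns = c(year = "integer", carrier = "character", dep_delay = "double"),
  infer_schema = FALSE
)

spark_disconnect(sc)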
Details:

You can read data from HDFS (hdfs://), S3 (s3a://), as well as the local file system (file://).

If you are reading from a secure S3 bucket, be sure to set spark.hadoop.fs.s3a.access.key and spark.hadoop.fs.s3a.secret.key in your spark-defaults.conf, or use any of the methods outlined in the aws-sdk documentation, "Working with AWS credentials". In order to work with the newer s3a:// protocol, also set the values for spark.hadoop.fs.s3a.impl and spark.hadoop.fs.s3a.endpoint. In addition, to support v4 of the S3 API, be sure to pass the -Dcom.amazonaws.services.s3.enableV4 driver option for the config key spark.driver.extraJavaOptions. For instructions on how to configure s3n://, check the Hadoop documentation on s3n authentication properties.
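As a sketch, the same keys can also be set through sparklyr's spark_config() rather than by editing spark-defaults.conf by hand. The environment variable names, endpoint value, and bucket path below are assumptions for illustration; spark_config() and spark_connect() are the standard sparklyr entry points.

library(sparklyr)

config <- spark_config()

# Credentials read from environment variables (assumed to be set)
config$spark.hadoop.fs.s3a.access.key <- Sys.getenv("AWS_ACCESS_KEY_ID")
config$spark.hadoop.fs.s3a.secret.key <- Sys.getenv("AWS_SECRET_ACCESS_KEY")

# s3a:// filesystem implementation and a region-specific endpoint (adjust as needed)
config$spark.hadoop.fs.s3a.impl <- "org.apache.hadoop.fs.s3a.S3AFileSystem"
config$spark.hadoop.fs.s3a.endpoint <- "s3.us-east-1.amazonaws.com"

# Enable v4 of the S3 API on the driver JVM
config$spark.driver.extraJavaOptions <- "-Dcom.amazonaws.services.s3.enableV4"

sc <- spark_connect(master = "local", config = config)

# Read from a (hypothetical) secure bucket
bucket_tbl <- spark_read_csv(
  sc,
  name = "bucket_data",
  path = "s3a://my-bucket/data.csv"
)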
When header is FALSE, the column names are generated with a V prefix; e.g. V1, V2, ....
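For instance (assuming an open connection sc as above; the file path is hypothetical):

# First row is treated as data, so Spark generates V1, V2, ... as names
raw_tbl <- spark_read_csv(
  sc,
  name = "raw_rows",
  path = "file:///tmp/no_header.csv",
  header = FALSE
)

# Inspect the generated schema: column names come back as V1, V2, ...
sdf_schema(raw_tbl)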
See also other Spark serialization routines, such as spark_read_json(), spark_read_parquet(), and spark_write_csv().