Read a tabular data file into a Spark DataFrame.
Usage:

    spark_read_csv(sc, name, path, header = TRUE, columns = NULL,
      infer_schema = TRUE, delimiter = ",", quote = "\"", escape = "\\",
      charset = "UTF-8", null_value = NULL, options = list(),
      repartition = 0, memory = TRUE, overwrite = TRUE)

Arguments:

sc: A spark_connection.
name: The name to assign to the newly generated table.
path: The path to the file. Must be accessible from the cluster. Supports the "hdfs://", "s3n://", and "file://" protocols.
header: Boolean; should the first row of data be used as a header? Defaults to TRUE.
columns: A named vector specifying column types.
infer_schema: Boolean; should column types be automatically inferred? Requires one extra pass over the data. Defaults to TRUE.
delimiter: The character used to delimit each column. Defaults to ','.
quote: The character used as a quote. Defaults to '"'.
escape: The character used to escape other characters. Defaults to '\'.
charset: The character set. Defaults to "UTF-8".
null_value: The character to use for null, or missing, values. Defaults to NULL.
options: A list of strings with additional options.
repartition: The number of partitions used to distribute the generated table. Use 0 (the default) to avoid partitioning.
memory: Boolean; should the data be loaded eagerly into memory? (That is, should the table be cached?) Defaults to TRUE.
overwrite: Boolean; overwrite the table with the given name if it already exists? Defaults to TRUE.
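
For example, a minimal sketch of typical usage (the master setting, file paths, and table names here are hypothetical):

    library(sparklyr)
    sc <- spark_connect(master = "local")

    # Let Spark infer column types (costs one extra pass over the data)
    flights <- spark_read_csv(sc, name = "flights",
                              path = "file:///tmp/flights.csv")

    # Skip inference by declaring column types up front
    people <- spark_read_csv(sc, name = "people",
                             path = "file:///tmp/people.csv",
                             infer_schema = FALSE,
                             columns = c(id = "integer",
                                         name = "character",
                                         age = "integer"))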
Details:

You can read data from HDFS (hdfs://), S3 (s3n://), and the local file
system (file://). If you are reading from a secure S3 bucket, be sure that
the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables are
both defined.
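
One way to satisfy this, sketched with hypothetical credentials and bucket name, is to set the environment variables before connecting:

    # Hypothetical credentials and bucket; set these before spark_connect()
    Sys.setenv(AWS_ACCESS_KEY_ID = "my-access-key",
               AWS_SECRET_ACCESS_KEY = "my-secret-key")

    sc <- spark_connect(master = "local")
    logs <- spark_read_csv(sc, name = "logs",
                           path = "s3n://my-bucket/logs.csv")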
When header is FALSE, the column names are generated with a V prefix,
e.g. V1, V2, ...
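
For instance, reusing the connection sc from the sketch above (the path is hypothetical):

    # A file with no header row; columns come back as V1, V2, ...
    raw <- spark_read_csv(sc, name = "raw",
                          path = "file:///tmp/no_header.csv",
                          header = FALSE)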
See Also:

Other Spark serialization routines: spark_load_table, spark_read_json,
spark_read_parquet, spark_save_table, spark_write_csv, spark_write_json,
spark_write_parquet.