spark_read_orc


Read an ORC file into a Spark DataFrame

Read an ORC file into a Spark DataFrame.

Usage
spark_read_orc(sc, name = NULL, path = name, options = list(),
  repartition = 0, memory = TRUE, overwrite = TRUE, columns = NULL,
  schema = NULL, ...)
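As a minimal sketch of a typical call (the local master and the ORC path below are illustrative assumptions, not part of the package documentation):

library(sparklyr)

# Connect to a local Spark instance (assumed setup for illustration)
sc <- spark_connect(master = "local")

# Read an ORC file into a Spark DataFrame and register it as "orc_tbl"
# ("file:///tmp/example.orc" is a hypothetical path)
orc_tbl <- spark_read_orc(sc, name = "orc_tbl",
                          path = "file:///tmp/example.orc")

spark_disconnect(sc)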
Arguments
sc

A spark_connection.

name

The name to assign to the newly generated table.

path

The path to the file. Needs to be accessible from the cluster. Supports the "hdfs://", "s3a://" and "file://" protocols.

options

A list of strings with additional options. See http://spark.apache.org/docs/latest/sql-programming-guide.html#configuration.

repartition

The number of partitions used to distribute the generated table. Use 0 (the default) to avoid partitioning.

memory

Boolean; should the data be loaded eagerly into memory? (That is, should the table be cached?)

overwrite

Boolean; overwrite the table with the given name if it already exists?

columns

A vector of column names or a named vector of column types.

schema

A (Java) read schema. Useful for optimizing read operations on nested data.

...

Optional arguments; currently unused.

Details

You can read data from HDFS (hdfs://) and S3 (s3a://), as well as from the local file system (file://).
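The sketch below illustrates the repartition, memory, and columns arguments described above; the bucket, paths, column names, and R type names ("integer", "character", "double") are assumptions for illustration only.

# Read from S3 without caching the table, distributing it across 8 partitions
# (hypothetical bucket and path)
logs <- spark_read_orc(
  sc,
  name = "logs",
  path = "s3a://my-bucket/logs/",
  repartition = 8,
  memory = FALSE
)

# Supply a named vector of column types via `columns`
# (hypothetical HDFS path and column names)
events <- spark_read_orc(
  sc,
  name = "events",
  path = "hdfs:///data/events.orc",
  columns = c(id = "integer", event = "character", ts = "double")
)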

See Also

Other Spark serialization routines: spark_load_table, spark_read_csv, spark_read_jdbc, spark_read_json, spark_read_libsvm, spark_read_parquet, spark_read_source, spark_read_table, spark_read_text, spark_save_table, spark_write_csv, spark_write_jdbc, spark_write_json, spark_write_orc, spark_write_parquet, spark_write_source, spark_write_table, spark_write_text

Aliases
  • spark_read_orc
Documentation reproduced from package sparklyr, version 1.0.3, License: Apache License 2.0 | file LICENSE
