sdf_schema

Read the Schema of a Spark DataFrame

Read the schema of a Spark DataFrame.

Usage
sdf_schema(x, expand_nested_cols = FALSE)
Arguments
x

A spark_connection, ml_pipeline, or a tbl_spark.

expand_nested_cols

Whether to expand columns containing nested arrays of structs (usually created by tidyr::nest on a Spark data frame).

Details

The type column returned gives the string representation of the underlying Spark type for that column; for example, a column of numeric values would be returned with the type "DoubleType". See the Spark Scala API documentation for the full set of types available and exposed by Spark.

Value

An R list, with each list element describing the name and type of a column.
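
Examples

A minimal sketch of reading a schema, assuming a local Spark installation is available and sparklyr is installed; connection details (the master argument, Spark version) will vary by environment. Note that sdf_copy_to replaces dots in column names with underscores.

```r
library(sparklyr)

# Assumes a local Spark installation; adjust master/version as needed.
sc <- spark_connect(master = "local")

# Copy the iris data set into Spark (dots in names become underscores).
iris_tbl <- sdf_copy_to(sc, iris, name = "iris", overwrite = TRUE)

# Returns an R list with one element per column, each itself a list
# giving the column's name and its Spark type, e.g.
# $Sepal_Length$name is "Sepal_Length" and
# $Sepal_Length$type is "DoubleType".
sdf_schema(iris_tbl)

spark_disconnect(sc)
```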

Aliases
  • sdf_schema
Documentation reproduced from package sparklyr, version 1.4.0, License: Apache License 2.0 | file LICENSE
