spark_read_jdbc
Read from JDBC connection into a Spark DataFrame.
Usage
spark_read_jdbc(sc, name, options = list(), repartition = 0,
memory = TRUE, overwrite = TRUE, columns = NULL, ...)
Arguments
- sc
A spark_connection.
- name
The name to assign to the newly generated table.
- options
A named list of additional options for the JDBC data source, typically including at least url and dbtable (see the example after this list). See http://spark.apache.org/docs/latest/sql-programming-guide.html#configuration.
- repartition
The number of partitions used to distribute the generated table. Use 0 (the default) to avoid partitioning.
- memory
Boolean; should the data be loaded eagerly into memory? (That is, should the table be cached?)
- overwrite
Boolean; overwrite the table with the given name if it already exists?
- columns
A vector of column names or a named vector of column types.
- ...
Optional arguments; currently unused.
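A minimal sketch of how these arguments combine, assuming a reachable PostgreSQL database with a flights table and the PostgreSQL JDBC driver already on Spark's classpath; all connection details below are placeholders:

library(sparklyr)

sc <- spark_connect(master = "local")

# Placeholder connection details; any JDBC-compatible database works the same way.
flights_tbl <- spark_read_jdbc(
  sc,
  name = "flights",                 # name of the resulting Spark table
  options = list(
    url = "jdbc:postgresql://localhost:5432/airlines",
    user = "reader",
    password = "secret",
    dbtable = "flights"             # table to read on the database side
  ),
  repartition = 8,                  # distribute the result over 8 partitions
  memory = FALSE,                   # skip eager caching; cache later with tbl_cache()
  columns = c(carrier = "character", dep_delay = "double")  # optional explicit types
)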
See Also
Other Spark serialization routines: spark_load_table, spark_read_csv, spark_read_json, spark_read_libsvm, spark_read_parquet, spark_read_source, spark_read_table, spark_read_text, spark_save_table, spark_write_csv, spark_write_jdbc, spark_write_json, spark_write_parquet, spark_write_source, spark_write_table, spark_write_text
Community examples
library(sparklyr)

config <- spark_config()
# Put the MySQL JDBC driver on the driver classpath if it is not already there:
# config$`sparklyr.shell.driver-class-path` <- "mysql-connector-java-5.1.43/mysql-connector-java-5.1.43-bin.jar"

sc <- spark_connect(master = "local", config = config)

db_tbl <- spark_read_jdbc(
  sc,
  name = "table_name",
  options = list(
    url = "jdbc:mysql://localhost:3306/schema_name",
    user = "root",
    password = "password",
    dbtable = "table_name"   # source table; required by the JDBC reader
  )
)
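Note that Spark does not bundle database drivers: the commented sparklyr.shell.driver-class-path line above is how the MySQL connector jar would be supplied, and the path shown is just an example location. The dbtable entry closing the options list is required by Spark's JDBC source; it names the table, or a parenthesized subquery aliased as a table, to read from the database.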