Create Databricks SQL Connector Client
Usage

db_sql_client(
id,
catalog = NULL,
schema = NULL,
compute_type = c("warehouse", "cluster"),
use_cloud_fetch = FALSE,
session_configuration = list(),
host = db_host(),
token = db_token(),
workspace_id = db_current_workspace_id(),
...
)
Value

DatabricksSqlClient()
Arguments

id

String, ID of either the SQL warehouse or all-purpose cluster. Important: compute_type must be set to the type that matches id.
catalog

Initial catalog to use for the connection. Defaults to NULL, in which case the default catalog will be used.
schema

Initial schema to use for the connection. Defaults to NULL, in which case the default schema will be used.
compute_type

One of "warehouse" (default) or "cluster", corresponding to the compute type of the resource specified in id. A cluster connection sketch appears after the examples below.
use_cloud_fetch

Boolean (default is FALSE). TRUE sends fetch requests directly to the cloud object store to download chunks of data; FALSE sends fetch requests directly to Databricks. If use_cloud_fetch is set to TRUE but network access to the object store is blocked, the fetch requests will fail.
session_configuration

An optional named list of Spark session configuration parameters. Setting a configuration is equivalent to using the SET key=val SQL command. Run the SQL command SET -v to get a full list of available configurations. See the sketch after this argument list.
host

Databricks workspace URL, defaults to calling db_host().
token

Databricks workspace token, defaults to calling db_token().
workspace_id

String, workspace ID used to build the HTTP path for the connection. Defaults to using db_wsid() to read the DATABRICKS_WSID environment variable. Not required if compute_type is "cluster".
...

Passed on to DatabricksSqlClient().
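As referenced in session_configuration above, a minimal sketch of setting a session configuration when creating a client. The warehouse ID, catalog, schema, and the configuration key/value are placeholders, not values from this page:

if (FALSE) {
  # hypothetical warehouse ID; replace with a real SQL warehouse ID
  client <- db_sql_client(
    id = "1234567890abcdef",
    catalog = "main",
    schema = "default",
    # equivalent to running `SET ansi_mode=true`; key/value are illustrative
    session_configuration = list(ansi_mode = "true")
  )
}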
Description

Create a client using the Databricks SQL Connector.
Examples

if (FALSE) {
  client <- db_sql_client(id = "", use_cloud_fetch = TRUE)
}
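As noted in compute_type above, a sketch of connecting to an all-purpose cluster rather than a SQL warehouse; the cluster ID is a placeholder:

if (FALSE) {
  # hypothetical cluster ID; compute_type must match the resource in id
  client <- db_sql_client(
    id = "0123-456789-abcdefgh",
    compute_type = "cluster"
  )
}

Note that workspace_id is not required when compute_type is "cluster".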