Trigger A New Job Run
db_jobs_run_now(
  job_id,
  jar_params = list(),
  notebook_params = list(),
  python_params = list(),
  spark_submit_params = list(),
  host = db_host(),
  token = db_token(),
  perform_request = TRUE
)
job_id: The canonical identifier of the job.
jar_params: Named list. Parameters are used to invoke the main function of the main class specified in the Spark JAR task. If not specified upon run-now, it defaults to an empty list. jar_params cannot be specified in conjunction with notebook_params.
notebook_params: Named list. Parameters are passed to the notebook and are accessible through the dbutils.widgets.get function. If not specified upon run-now, the triggered run uses the job’s base parameters.
python_params: Named list. Parameters are passed to the Python file as command-line parameters. If specified upon run-now, they overwrite the parameters specified in the job setting.
spark_submit_params: Named list. Parameters are passed to the spark-submit script as command-line parameters. If specified upon run-now, they overwrite the parameters specified in the job setting.
host: Databricks workspace URL, defaults to calling db_host().
token: Databricks workspace token, defaults to calling db_token().
perform_request: If TRUE (default) the request is performed; if FALSE the httr2 request is returned without being performed.
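
For illustration, a run-now call for a notebook-backed job might look like the sketch below. The job ID (1234) and the run_date parameter are hypothetical; host and token are left at their db_host()/db_token() defaults, which assumes workspace credentials are already configured.

# Hypothetical example: trigger a run of job 1234, passing a parameter
# the notebook reads with dbutils.widgets.get("run_date").
run <- db_jobs_run_now(
  job_id = 1234,
  notebook_params = list(run_date = "2024-01-01")
)
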
*_params parameters cannot exceed 10,000 bytes when serialized to JSON. jar_params and notebook_params are mutually exclusive.
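
Because perform_request = FALSE returns the httr2 request without sending it, a call can be inspected before it reaches the workspace, e.g. to check that the serialized parameters stay under the 10,000-byte limit. A minimal sketch, again with a hypothetical job ID:

# Build the request without performing it, then inspect it.
req <- db_jobs_run_now(job_id = 1234, perform_request = FALSE)
req  # an unsent httr2 request; review its body before performing
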
Other Jobs API: db_jobs_create(), db_jobs_delete(), db_jobs_get(), db_jobs_list(), db_jobs_reset(), db_jobs_runs_cancel(), db_jobs_runs_delete(), db_jobs_runs_export(), db_jobs_runs_get(), db_jobs_runs_get_output(), db_jobs_runs_list(), db_jobs_runs_submit(), db_jobs_update()