Set target options, including default arguments to tar_target() such as packages, storage format, iteration type, and cue. See default options with tar_option_get(). To use tar_option_set() effectively, put it in your workflow's _targets.R script before calls to tar_target() or tar_target_raw().
tar_option_set(
  tidy_eval = NULL,
  packages = NULL,
  imports = NULL,
  library = NULL,
  envir = NULL,
  format = NULL,
  iteration = NULL,
  error = NULL,
  memory = NULL,
  garbage_collection = NULL,
  deployment = NULL,
  priority = NULL,
  backoff = NULL,
  resources = NULL,
  storage = NULL,
  retrieval = NULL,
  cue = NULL,
  debug = NULL,
  workspaces = NULL
)
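For example, a minimal _targets.R script might look like the following sketch (the target names, commands, and chosen options are illustrative):

library(targets)
tar_option_set(
  packages = c("dplyr", "ggplot2"), # loaded right before each target builds
  format = "qs"                     # default storage format (requires the qs package)
)
list(
  tar_target(data, dplyr::filter(mtcars, cyl == 4L)),
  tar_target(plot, ggplot2::qplot(data$mpg))
)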
tidy_eval: Logical, whether to enable tidy evaluation when interpreting command and pattern. If TRUE, you can use the "bang-bang" operator !! to programmatically insert the values of global objects.
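As a sketch of what this enables (the global object n_rows is hypothetical):

n_rows <- 10L
tar_option_set(tidy_eval = TRUE)
# !!n_rows inserts the value 10L into the command when the target is
# defined, so the command no longer depends on the global object itself.
tar_target(data_head, head(mtcars, !!n_rows))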
packages: Character vector of packages to load right before the target builds. Use tar_option_set() to set packages globally for all subsequent targets you define.
imports: Character vector of package names to track as global dependencies. For example, if you write tar_option_set(imports = "yourAnalysisPackage") early in _targets.R, then tar_make() will automatically rerun or skip targets in response to changes to the R functions and objects defined in yourAnalysisPackage. Does not account for low-level compiled code such as C/C++ or Fortran. If you supply multiple packages, e.g. tar_option_set(imports = c("p1", "p2")), then the objects in p1 override the objects in p2 if there are name conflicts. Similarly, objects in tar_option_get("envir") override everything in tar_option_get("imports").
library: Character vector of library paths to try when loading packages.
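A sketch combining these three options inside _targets.R (the package name and library path are hypothetical):

tar_option_set(
  packages = c("dplyr", "tidyr"),      # attached before each target builds
  imports = "yourAnalysisPackage",     # rerun targets when its R objects change
  library = "/path/to/project/library" # library paths searched when loading
)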
envir: Environment containing functions and global objects used in the R commands to run targets. Defaults to the global environment. If envir is the global environment, all the promise objects are diffused before sending the data to parallel workers in tar_make_future() and tar_make_clustermq(), but otherwise the environment is unmodified. This behavior improves performance by decreasing the size of data sent to workers. If envir is not the global environment, then it should at least inherit from the global environment or base environment so targets can access attached packages. In the case of a non-global envir, targets attempts to remove potentially high-memory objects that come directly from targets. That includes tar_target() objects of class "tar_target", as well as objects of class "tar_pipeline" or "tar_algorithm". This behavior improves performance by decreasing the size of data sent to workers. Package environments should not be assigned to envir. To include package objects as upstream dependencies in the pipeline, assign the package to the packages and imports arguments of tar_option_set().
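A sketch of a custom environment that inherits from the global environment (the helper function is hypothetical):

envir <- new.env(parent = globalenv())
envir$summarize_data <- function(data) {
  colMeans(data) # hypothetical helper used in target commands
}
tar_option_set(envir = envir)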
format: Optional storage format for the target's return value. With the exception of format = "file", each target gets a file in _targets/objects, and each format is a different way to save and load this file. Possible formats:
"rds"
: Default, uses saveRDS()
and readRDS()
. Should work for
most objects, but slow.
"qs"
: Uses qs::qsave()
and qs::qread()
. Should work for
most objects, much faster than "rds"
. Optionally set the
preset for qsave()
through the resources
argument, e.g.
tar_target(..., resources = list(preset = "archive"))
.
Requires the qs
package (not installed by default).
"feather"
: Uses arrow::write_feather()
and
arrow::read_feather()
(version 2.0). Much faster than "rds"
,
but the value must be a data frame. Optionally set
compression
and compression_level
in arrow::write_feather()
through the resources
argument, e.g.
tar_target(..., resources = list(compression = ...))
.
Requires the arrow
package (not installed by default).
"parquet"
: Uses arrow::write_parquet()
and
arrow::read_parquet()
(version 2.0). Much faster than "rds"
,
but the value must be a data frame. Optionally set
compression
and compression_level
in arrow::write_parquet()
through the resources
argument, e.g.
tar_target(..., resources = list(compression = ...))
.
Requires the arrow
package (not installed by default).
"fst"
: Uses fst::write_fst()
and fst::read_fst()
.
Much faster than "rds"
, but the value must be
a data frame. Optionally set the compression level for
fst::write_fst()
through the resources
argument, e.g.
tar_target(..., resources = list(compress = 100))
.
Requires the fst
package (not installed by default).
"fst_dt"
: Same as "fst"
, but the value is a data.table
.
Optionally set the compression level the same way as for "fst"
.
"fst_tbl"
: Same as "fst"
, but the value is a tibble
.
Optionally set the compression level the same way as for "fst"
.
"keras"
: Uses keras::save_model_hdf5()
and
keras::load_model_hdf5()
. The value must be a Keras model.
Requires the keras
package (not installed by default).
"torch"
: Uses torch::torch_save()
and torch::torch_load()
.
The value must be an object from the torch
package
such as a tensor or neural network module.
Requires the torch
package (not installed by default).
"file"
: A dynamic file. To use this format,
the target needs to manually identify or save some data
and return a character vector of paths
to the data. (These paths must be existing files
and nonempty directories.)
Then, targets
automatically checks those files and cues
the appropriate build decisions if those files are out of date.
Those paths must point to files or directories,
and they must not contain characters |
or *
.
All the files and directories you return must actually exist,
or else targets
will throw an error. (And if storage
is "worker"
,
targets
will first stall out trying to wait for the file
to arrive over a network file system.)
"url"
: A dynamic input URL. It works like format = "file"
except the return value of the target is a URL that already exists
and serves as input data for downstream targets. Optionally
supply a custom curl
handle through the resources
argument, e.g.
tar_target(..., resources = list(handle = curl::new_handle(nobody = TRUE)))
. # nolint
in new_handle()
, nobody = TRUE
is important because it
ensures targets
just downloads the metadata instead of
the entire data file when it checks time stamps and hashes.
The data file at the URL needs to have an ETag or a Last-Modified
time stamp, or else the target will throw an error because
it cannot track the data. Also, use extreme caution when
trying to use format = "url"
to track uploads. You must be absolutely
certain the ETag and Last-Modified time stamp are fully updated
and available by the time the target's command finishes running.
targets
makes no attempt to wait for the web server.
"aws_rds"
, "aws_qs"
, "aws_parquet"
, "aws_fst"
, "aws_fst_dt"
,
"aws_fst_tbl"
, "aws_keras"
: AWS-powered versions of the
respective formats "rds"
, "qs"
, etc. The only difference
is that the data file is uploaded to the AWS S3 bucket
you supply to resources
. See the cloud computing chapter
of the manual for details.
"aws_file"
: arbitrary dynamic files on AWS S3. The target
should return a path to a temporary local file, then
targets
will automatically upload this file to an S3
bucket and track it for you. Unlike format = "file"
,
format = "aws_file"
can only handle one single file,
and that file must not be a directory.
tar_read()
and downstream targets
download the file to _targets/scratch/
locally and return the path.
_targets/scratch/
gets deleted at the end of tar_make()
.
Requires the same resources
and other configuration details
as the other AWS-powered formats. See the cloud computing
chapter of the manual for details.
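A sketch of a global default format with a per-target override (assumes the qs and arrow packages are installed):

tar_option_set(format = "qs") # default for all subsequent targets
list(
  tar_target(model, lm(mpg ~ wt, data = mtcars)), # stored with "qs"
  tar_target(
    table,
    data.frame(x = 1),
    format = "parquet" # per-target override; the value must be a data frame
  )
)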
iteration: Character of length 1, name of the iteration mode of the target. Choices:
"vector": branching happens with vctrs::vec_slice() and aggregation happens with vctrs::vec_c().
"list": branching happens with [[]] and aggregation happens with list().
"group": dplyr::group_by()-like functionality to branch over subsets of a data frame. The target's return value must be a data frame with a special tar_group column of consecutive integers from 1 through the number of groups. Each integer designates a group, and a branch is created for each collection of rows in a group. See the tar_group() function for a way to create the special tar_group column with dplyr::group_by().
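A sketch of "group" iteration inside _targets.R (assumes the dplyr package is installed):

list(
  tar_target(
    grouped,
    tar_group(dplyr::group_by(mtcars, cyl)), # adds the tar_group column
    iteration = "group"
  ),
  # One branch per group of rows (here, one per value of cyl):
  tar_target(mpg_mean, mean(grouped$mpg), pattern = map(grouped))
)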
error: Character of length 1, what to do if the target runs into an error. If "stop", the whole pipeline stops and throws an error. If "continue", the error is recorded, but the pipeline keeps going. error = "workspace" is just like error = "stop" except targets saves a special workspace file to support interactive debugging outside the pipeline. (Visit https://books.ropensci.org/targets/debugging.html to learn how to debug targets using saved workspaces.)
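For example, to record errors without halting the rest of the pipeline:

tar_option_set(error = "continue") # log failed targets, keep building the rest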
memory: Character of length 1, memory strategy. If "persistent", the target stays in memory until the end of the pipeline (unless storage is "worker", in which case targets unloads the value from memory right after storing it in order to avoid sending copious data over a network). If "transient", the target gets unloaded after every new target completes. Either way, the target gets automatically loaded into memory whenever another target needs the value. For cloud-based dynamic files such as format = "aws_file", this memory policy applies to temporary local copies of the file in _targets/scratch/: "persistent" means they remain until the end of the pipeline, and "transient" means they get deleted from the file system as soon as possible. The former conserves bandwidth, and the latter conserves local storage.
garbage_collection: Logical, whether to run base::gc() just before the target runs.
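A sketch for keeping the memory footprint low in large pipelines:

tar_option_set(
  memory = "transient",     # unload each target's value as soon as possible
  garbage_collection = TRUE # run base::gc() just before each target builds
)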
deployment: Character of length 1, only relevant to tar_make_clustermq() and tar_make_future(). If "worker", the target builds on a parallel worker. If "main", the target builds on the host machine / process managing the pipeline.
priority: Numeric of length 1 between 0 and 1. Controls which targets get deployed first when multiple competing targets are ready simultaneously. Targets with priorities closer to 1 get built earlier (and polled earlier in tar_make_future()). Only applies to tar_make_future() and tar_make_clustermq() (not tar_make()). tar_make_future() with no extra settings is a drop-in replacement for tar_make() in this case.
backoff: Numeric of length 1, must be greater than or equal to 0.01. Maximum upper bound of the random polling interval for the priority queue (seconds). In high-performance computing (e.g. tar_make_clustermq() and tar_make_future()) it can be expensive to repeatedly poll the priority queue if no targets are ready to process. The number of seconds between polls is runif(1, 0.01, max(backoff, 0.01 * 1.5 ^ index)), where index is the number of consecutive polls so far that found no targets ready to skip or run. (If no target is ready, index goes up by 1. If a target is ready, index resets to 0. For more information on exponential backoff, visit https://en.wikipedia.org/wiki/Exponential_backoff.) Raising backoff is kinder to the CPU but may incur delays in some instances.
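A sketch of scheduling-related defaults for a distributed run (the values are illustrative):

tar_option_set(
  deployment = "worker", # build targets on parallel workers
  priority = 0.5,        # middling priority when targets compete
  backoff = 10           # cap the polling interval at 10 seconds
)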
resources: A named list of computing resources. Uses:
Template file wildcards for future::future() in tar_make_future().
Template file wildcards for clustermq::workers() in tar_make_clustermq().
Custom target-level future::plan(), e.g. resources = list(plan = future.callr::callr).
Custom curl handle if format = "url", e.g. resources = list(handle = curl::new_handle(nobody = TRUE)). In custom handles, most users should manually set nobody = TRUE so targets does not download the entire file when it only needs to check the time stamp and ETag.
Custom preset for qs::qsave() if format = "qs", e.g. resources = list(preset = "archive").
Arguments compression and compression_level to arrow::write_feather() and arrow::write_parquet() if format is "feather", "parquet", "aws_feather", or "aws_parquet".
Custom compression level for fst::write_fst() if format is "fst", "fst_dt", or "fst_tbl", e.g. resources = list(compress = 100).
AWS bucket and prefix for the "aws_" formats, e.g. resources = list(bucket = "your-bucket", prefix = "folder/name"). bucket is required for AWS formats. See the cloud computing chapter of the manual for details.
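A sketch of resources for an AWS-backed format (the bucket name and prefix are hypothetical):

tar_option_set(
  format = "aws_qs",
  resources = list(
    bucket = "your-bucket", # required S3 bucket for "aws_" formats
    prefix = "folder/name", # key prefix inside the bucket
    preset = "high"         # qs::qsave() preset for the "qs" part
  )
)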
storage: Character of length 1, only relevant to tar_make_clustermq() and tar_make_future(). If "main", the target's return value is sent back to the host machine and saved locally. If "worker", the worker saves the value.
retrieval: Character of length 1, only relevant to tar_make_clustermq() and tar_make_future(). If "main", the target's dependencies are loaded on the host machine and sent to the worker before the target builds. If "worker", the worker loads the target's dependencies.
cue: An optional object from tar_cue() to customize the rules that decide whether the target is up to date.
debug: Character vector of names of targets to run in debug mode. To use effectively, set callr_function = NULL and restart your R session just before running tar_make(), tar_make_clustermq(), or tar_make_future(). For any target mentioned in debug, targets will force the target to build locally (with tar_cue(mode = "always") and deployment = "main" in the settings) and pause in an interactive debugger to help you diagnose problems. This is like inserting a browser() statement at the beginning of the target's expression, but without invalidating any targets.
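A sketch of an interactive debugging session (the target name fit is hypothetical):

# In _targets.R:
tar_option_set(debug = "fit")
# Then, in a fresh interactive R session:
tar_make(names = "fit", callr_function = NULL)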
workspaces: Character vector of names of targets to save workspace files. Workspace files let you re-create a target's runtime environment in an interactive R session using tar_workspace(). tar_workspace() loads a target's random number generator seed and dependency objects as long as those target objects are still in the data store (usually _targets/objects/).
Value: NULL (invisibly).
Other configuration: tar_config_get(), tar_config_set(), tar_envvars(), tar_option_get(), tar_option_reset()
tar_option_get("format") # default format before we set anything
tar_target(x, 1)$settings$format
tar_option_set(format = "fst_tbl") # new default format
tar_option_get("format")
tar_target(x, 1)$settings$format
tar_option_reset() # reset the format
tar_target(x, 1)$settings$format
if (identical(Sys.getenv("TAR_LONG_EXAMPLES"), "true")) {
  tar_dir({ # tar_dir() runs code from a temporary directory.
    tar_script({
      tar_option_set(cue = tar_cue(mode = "always")) # All targets always run.
      list(tar_target(x, 1), tar_target(y, 2))
    })
    tar_make()
    tar_make()
  })
}