Description

snowWrapper is a wrapper function around many
functionalities of the snow package.

Usage

snowWrapper(cl, seq, fun,
cldata, name = "cldata", use.env=FALSE,
lib = NULL, dir = NULL, evalq=NULL,
size = 1, balancing = c("none", "load", "size", "both"),
rng.type = c("none", "RNGstream", "SPRNG"),
cleanup = TRUE, unload = FALSE, envir = .GlobalEnv, ...)

Arguments

cl: A cluster object created by makeCluster, or
an integer. It can also be NULL, see Details.

seq: A vector to split.

fun: A function or character string naming a function.

cldata: A list containing data.
This list is then exported to the cluster by
name in the global environment
(an already existing object with the same name will be
saved and replaced back in the end).

name: Character, the name of cldata as to be assigned
to the global environment and used in fun.

use.env: Logical, whether name is to be treated as a
list object or an environment.

lib: Character, name of package(s) to be loaded
onto the cluster.

dir: Working directory to use; if NULL the working
directory is not set on workers (default).
Can be a vector to set different directories on workers.

evalq: Character, expressions to evaluate on the workers
(see clusterEvalQ).
More than one expression can be specified as a character vector.

size: Vector of problem sizes corresponding to the elements of
seq (recycled if needed).
The default 1 indicates equality of problem sizes.

balancing: Character, the type of balancing to perform;
see Details.

rng.type: Character, "none" or the type of RNG on the workers
(see clusterSetupRNG).
The logical value !(rng.type == "none") is used for
forking (e.g. when cl is an integer).

cleanup: Logical, whether cldata should be removed from
the workers after applying fun.
If TRUE, the effect of the dir argument is also cleaned up.

unload: Logical, whether the package(s) in lib should be unloaded
after applying fun.

envir: Environment where the cldata object is assigned;
defaults to the global environment.

...: Other arguments of fun that are simple values and not objects.
(Arguments passed as objects should be specified in cldata,
otherwise they are not exported to the cluster by this function.)

Details

The function uses 'snow' type clusters when cl is a cluster
object. The function uses 'multicore' type forking (shared memory)
when cl is an integer.
The value from getOption("mc.cores") is used if the
argument is NULL.
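
For illustration, a minimal sketch of the forking mode (assuming a
fork-capable, i.e. Unix-like, system; the function and data are the
same as in the Examples section below):

library(dclone)
fun <- function(i) cldata$a * i - cldata$b
cldata <- list(a = 10, b = 5)
## an integer cl triggers 'multicore' type forking with 2 workers
snowWrapper(2, 1:5, fun, cldata)
## with cl = NULL, getOption("mc.cores") determines the number of workers
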
The function sets the random seeds, loads packages lib
onto the cluster, sets the working directory as dir,
exports cldata and evaluates fun on seq.
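
A minimal sketch combining these arguments (the package, directory, and
option used here are arbitrary illustrations, not requirements):

library(dclone)
library(parallel)
cl <- makePSOCKcluster(2)
fun <- function(i) cldata$x[i]^2
cldata <- list(x = 1:5)
snowWrapper(cl, 1:5, fun, cldata,
    lib = "stats",                  ## loaded on the workers
    dir = getwd(),                  ## working directory set on the workers
    evalq = "options(digits = 4)")  ## evaluated on each worker
stopCluster(cl)
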
No balancing (balancing = "none") means that the problem
is split into roughly equal
subsets, without respect to size
(see clusterSplit). This splitting
is deterministic (reproducible).
Load balancing (balancing = "load") means
that the problem is not split into subsets
a priori; instead, subsequent items are placed on the
next idle worker
(see clusterApplyLB for load balancing).
This splitting is non-deterministic (might not be reproducible).
Size balancing (balancing = "size") means
that the problem is split into
subsets with respect to size
(see clusterSplitSB and parLapplySB).
In size balancing, the problem is re-ordered from
largest to smallest, and then subsets are
determined by minimizing the total approximate processing time.
This splitting is deterministic (reproducible).
Size and load balancing (balancing = "both") means
that the problem is re-ordered from largest to smallest,
and then non-deterministic load balancing
is used (see parLapplySLB).
If size is correct, this is identical to size balancing.
This splitting is non-deterministic (might not be reproducible).
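
A minimal sketch of size balancing (the sizes below are arbitrary
illustrations of relative problem sizes):

library(dclone)
library(parallel)
cl <- makePSOCKcluster(2)
fun <- function(i) {
    Sys.sleep(cldata$sizes[i] / 10)  ## simulate size-dependent work
    cldata$sizes[i]
}
cldata <- list(sizes = c(5, 1, 4, 2, 3))
## subsets are formed deterministically from the declared sizes
snowWrapper(cl, 1:5, fun, cldata,
    size = cldata$sizes, balancing = "size")
stopCluster(cl)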

See Also

Size balancing: parLapplySB, parLapplySLB,
mclapplySB.
Optimizing the number of workers:
clusterSize, plotClusterSize.
snowWrapper is used internally by the parallel dclone
functions: jags.parfit, dc.parfit,
parJagsModel, parUpdate,
parCodaSamples.

Examples

library(dclone)
library(parallel)

cl <- makePSOCKcluster(2)
## wrapper
## fun refers to the cldata list exported to the workers
fun <- function(i) cldata$a * i - cldata$b
cldata <- list(a = 10, b = 5)
snowWrapper(cl, 1:5, fun, cldata)
stopCluster(cl)
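
A further minimal sketch (the extra argument k is an arbitrary
illustration) of passing a simple value to fun through ...,
while objects still go through cldata:

library(dclone)
library(parallel)
cl <- makePSOCKcluster(2)
fun <- function(i, k) cldata$a * i - cldata$b + k
cldata <- list(a = 10, b = 5)
## k is a simple value, so it can be passed via '...'
snowWrapper(cl, 1:5, fun, cldata, k = 100)
stopCluster(cl)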