doRedis (version 3.0.3)

registerDoRedis: Register the Redis back end for foreach.

Description

The doRedis package implements a simple but flexible parallel back end for foreach that uses Redis for inter-process communication. The work queue name specifies the base name of a small set of Redis keys that the coordinator and worker processes use to exchange data.
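
For orientation, here is a minimal sketch of a registration under an assumed queue name "jobs"; it presumes a Redis server on the local host and default port and at least one worker listening on the same queue (otherwise the %dopar% loop simply waits for workers).

library(doRedis)          # attaching doRedis also attaches foreach
registerDoRedis("jobs")   # the queue name "jobs" determines the Redis keys used
foreach(i = 1:3, .combine = c) %dopar% sqrt(i)
removeQueue("jobs")       # delete the queue keys when finished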

Usage

registerDoRedis(
  queue,
  host = "localhost",
  port = 6379,
  password,
  ftinterval = 30,
  chunkSize = 1,
  progress = FALSE,
  ...
)

Value

NULL is invisibly returned; this function is called for its side effect of registering a foreach back end.

Arguments

queue

A work queue name

host

The Redis server host name or IP address

port

The Redis server port number

password

An optional Redis database password

ftinterval

Default fault tolerance interval in seconds

chunkSize

Default iteration granularity, see setChunkSize

progress

(logical) Show progress bar for computations?

...

Optional arguments passed to redisConnect
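
As an illustrative sketch of how the optional arguments fit together (the host name below is a placeholder and the values are examples, not recommendations):

registerDoRedis("jobs",
  host = "redis.example.com",  # placeholder Redis host; use your server's address
  port = 6379,
  ftinterval = 60,             # fault tolerance: check for failed tasks about once a minute
  chunkSize = 50,              # hand each worker 50 loop iterations per task
  progress = TRUE)             # display a progress bar during computations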

Details

Back-end worker R processes advertise their availability for work with the redisWorker function.
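
For example (a sketch only; the queue name "jobs" is an assumption and must match the name passed to registerDoRedis), a worker session might run:

# In a separate R session, possibly on another machine that can reach the Redis server:
library(doRedis)
redisWorker("jobs", host = "localhost", port = 6379)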

The doRedis parallel back end tolerates faults among the worker processes and automatically resubmits failed tasks. It is also portable and supports heterogeneous sets of workers, even across operating systems. The back end supports dynamic pools of worker processes: new workers may be added to work queues at any time and can be used by foreach computations that are already running.
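
As a sketch of a dynamic pool, additional local workers can be attached to an existing queue at any time, for instance:

# Add two more local worker processes to the assumed "jobs" queue; a foreach
# computation already using that queue can assign tasks to them as tasks become available.
startLocalWorkers(n = 2, queue = "jobs")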

See Also

doRedis-package, setChunkSize, removeQueue

Examples

# Only run if a Redis server is running
if (redux::redis_available()) {
library(doRedis)  # attaches foreach as well
## The example assumes that a Redis server is running on the local host
## and standard port.

# 1. Start a single local R worker process
startLocalWorkers(n=1, queue="jobs", linger=1)

# 2. Run a simple sampling approximation of pi:
registerDoRedis("jobs")
pie <- foreach(j=1:10, .combine=sum, .multicombine=TRUE) %dopar%
        4 * sum((runif(1000000) ^ 2 + runif(1000000) ^ 2) < 1) / 10000000
removeQueue("jobs")
print(pie)

# Note that removing the work queue automatically terminates worker processes.
}
