Rmpi (version 0.6-9)

mpi.applyLB: (Load balancing) parallel apply

Description

(Load balancing) parallel apply and related functions.

Usage

mpi.applyLB(X, FUN, ..., apply.seq=NULL, comm=1)
mpi.parApply(X, MARGIN, FUN, ..., job.num = mpi.comm.size(comm)-1,
                    apply.seq=NULL, comm=1)
mpi.parLapply(X, FUN, ..., job.num=mpi.comm.size(comm)-1, apply.seq=NULL, 
		comm=1)  
mpi.parSapply(X, FUN, ..., job.num=mpi.comm.size(comm)-1, apply.seq=NULL, 
		simplify=TRUE, USE.NAMES = TRUE, comm=1)  
mpi.parRapply(X, FUN, ..., job.num=mpi.comm.size(comm)-1, apply.seq=NULL, 
		comm=1)  
mpi.parCapply(X, FUN, ..., job.num=mpi.comm.size(comm)-1, apply.seq=NULL, 
		comm=1)  
mpi.parReplicate(n, expr, job.num=mpi.comm.size(comm)-1, apply.seq=NULL, 
		simplify = TRUE, comm=1)
mpi.parMM(A, B, job.num=mpi.comm.size(comm)-1, comm=1)

Arguments

X

an array or matrix.

MARGIN

vector specifying the dimensions to use.

FUN

a function.

simplify

logical; should the result be simplified to a vector or matrix if possible?

USE.NAMES

logical; if TRUE and if X is character, use X as names for the result unless it had names already.

n

number of replications.

A

a matrix.

B

a matrix.

expr

expression to evaluate repeatedly.

job.num

Total number of jobs. If the number of jobs is bigger than the total number of slaves (the default value of job.num), a load balancing approach is used.

apply.seq

if reproducing the same computation (simulation) is desirable, set it to the integer vector .mpi.applyLB generated by a previous computation (simulation).

...

optional arguments to FUN.

comm

a communicator number.

Warning

When using the argument apply.seq with .mpi.applyLB, be sure all settings are the same as before, i.e., the same data, job.num, slave.num, and seed. Otherwise a deadlock could occur. Notice that apply.seq is useful only if job.num is bigger than slave.num.

Details

If the length of X is no more than the total number of slaves (slave.num), mpi.applyLB behaves the same as mpi.apply. Otherwise, mpi.applyLB sends the next job to the slave that has just delivered a finished job. The sequence of slaves delivering results to the master is saved into .mpi.applyLB, which keeps track of which parts of the results were computed by which slaves. .mpi.applyLB can be used to reproduce the same simulation result, provided the same seed is used and the argument apply.seq is set to .mpi.applyLB.
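A minimal sketch of this reproducibility pattern (assuming slaves are already spawned; the seed 123 and the input vector are illustrative, not part of the package examples):

mpi.remote.exec(set.seed(123))
out1 <- mpi.applyLB(1:20, rnorm, mean=2, sd=4)
seq1 <- .mpi.applyLB                    # delivery sequence saved by the previous call (see above)
mpi.remote.exec(set.seed(123))
out2 <- mpi.applyLB(1:20, rnorm, mean=2, sd=4, apply.seq=seq1)
identical(out1, out2)                   # TRUE when the data, slaves, and seed all match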

With the default value of the argument job.num, which is slave.num, mpi.parApply, mpi.parLapply, mpi.parSapply, mpi.parRapply, mpi.parCapply, and mpi.parMM are clones of snow's parApply, parLapply, parSapply, parRapply, parCapply, and parMM, respectively. When job.num is bigger than slave.num, a load balancing approach is used.
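For example, when individual tasks have very uneven run times, setting job.num to a multiple of the number of slaves lets faster slaves pick up extra work (a hedged sketch; the task sizes and the multiplier 4 are illustrative):

nslaves <- mpi.comm.size() - 1
sizes <- sample(1e3:1e6, 40)            # tasks of very different cost
mpi.parSapply(sizes, function(n) mean(rnorm(n)), job.num = 4 * nslaves)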

See Also

mpi.apply

Examples

# NOT RUN {
#Assume that there are some slaves running

#mpi.applyLB
x=1:7
mpi.applyLB(x,rnorm,mean=2,sd=4)

#get the same simulation 
mpi.remote.exec(set.seed(111))
mpi.applyLB(x,rnorm,mean=2,sd=4)
mpi.remote.exec(set.seed(111))
mpi.applyLB(x,rnorm,mean=2,sd=4,apply.seq=.mpi.applyLB)

#mpi.parApply
x=1:24
dim(x)=c(2,3,4)
mpi.parApply(x, MARGIN=c(1,2), FUN=mean,job.num = 5)

#mpi.parLapply
mdat <- matrix(c(1,2,3, 7,8,9), nrow = 2, ncol=3, byrow=TRUE,
                    dimnames = list(c("R.1", "R.2"), c("C.1", "C.2", "C.3")))
mpi.parLapply(mdat, rnorm) 

#mpi.parSapply
mpi.parSapply(mdat, rnorm) 

#mpi.parMM
A=matrix(1:1000^2,ncol=1000)
mpi.parMM(A,A)
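
#mpi.parReplicate (a sketch; the replication count and expression are illustrative)
mpi.parReplicate(100, mean(rnorm(50)), job.num=10)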
# }
