
SimDesign (version 0.4.1)

runSimulation: Run a Monte Carlo simulation given a data.frame of conditions and simulation functions

Description

This function runs a Monte Carlo simulation study given the simulation functions, the design conditions, and the number of replications. Results can be saved as temporary files in case of interruptions and may be restored by rerunning the same function call, provided that the respective temp file can be found in the working directory. To conserve RAM, temporary objects (such as generated data across conditions and replications) are discarded. For longer simulations, however, it is recommended to use save = TRUE and/or save_results = TRUE to temporarily save the simulation state and to write results to separate external .rds files, respectively. Supports parallel and cluster computing, global and local debugging, and error handling, and is designed to be cross-platform.

Usage

runSimulation(design, replications, generate, analyse, summarise,
  fixed_design_elements = NULL, parallel = FALSE, MPI = FALSE,
  try_errors = TRUE, save = FALSE, save_results = FALSE, seed = NULL,
  compname = Sys.info()["nodename"], filename = paste0(compname, "_Final_",
  replications), results_filename = paste0(compname, "_results_"),
  tmpfilename = paste0(compname, "_tmpsim.rds"),
  ncores = parallel::detectCores(), edit = "none", verbose = TRUE)

Arguments

design
a data.frame object containing the Monte Carlo simulation conditions to be studied, where each row represents a unique condition
replications
number of replications to perform per condition (i.e., per row in design)
generate
user-defined data and parameter generating function. See generate for details
analyse
user-defined computation function which acts on the data generated from generate. See analyse for details
summarise
user-defined summary function to be used after all the replications have completed. See summarise for details
fixed_design_elements
(optional) an object (usually a list) containing fixed design elements which can be used across all simulation conditions. This is useful when including long fixed vectors of population coefficients, or data which should be used across all conditions
parallel
logical; use parallel processing from the parallel package over each unique condition?

NOTE: When using packages other than the basic packages which are attached by default (e.g., stats, graphics, utils), load them within the analyse function (e.g., with require()) or, better yet, reference their functions with the :: operator so that they are available on each parallel worker

MPI
logical; use the doMPI package to run simulation in parallel on a cluster? Default is FALSE
try_errors
logical; include information about which errors occurred, and how often, from the try() chunks or check_error()? If TRUE, this information will be stacked at the end of the returned simulation results
save
logical; save the final simulation and temp files to the hard-drive? This is useful for simulations which require an extended amount of time. Default is FALSE
save_results
logical; save the results returned from analyse to external .rds files located in a 'SimDesign_results' directory/folder? If a 'SimDesign_results' folder does not exist in the current working directory then one will be created automatically
seed
a vector of integers (or a single number) to be used for reproducibility. The length of the vector must be equal to either 1 or the number of rows in design; if 1, this value will be repeated for each condition. This argument calls set.seed() (or its parallel equivalent) for each respective condition (see the sketch at the end of this argument list)
compname
name of the computer running the simulation. Normally this doesn't need to be modified, but in the event that a node breaks down while running a simulation the results from the tmp files may be resumed on another computer by changing the name of that node to match the broken computer's
filename
the name of the .rds file to save the final simulation results to. Default is the system name with the number of replications and 'Final' appended to the string
results_filename
the general name of the .rds file to save individual simulation results to (before calling the summarise function). Default is the system name with '_results_' and the row ID information appended
tmpfilename
the name of the temporary file, default is the system name with 'tmpsim.rds' appended at the end. This file will be read in if it is in the working directory, and the simulation will continue from the last point at which this file was saved (useful in case of power outages or other interruptions)
ncores
number of cores to be used in parallel execution. Default uses all available cores
edit
a string indicating where to initiate a browser() call for editing and debugging. General options are 'none' (default), which disables debugging, and 'recover', which uses options(error = 'recover'). Specific options to debug the respective user-defined functions are 'generate', 'analyse', and 'summarise'
verbose
logical; print messages to the R console?
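
For example, a minimal sketch tying several of these arguments together (the seed values and ncores choice below are illustrative assumptions, and Design, Generate, Analyse, and Summarise refer to the objects defined in the Examples section):

Final <- runSimulation(design=Design, replications=1000,
                       generate=Generate, analyse=Analyse, summarise=Summarise,
                       parallel=TRUE, ncores=parallel::detectCores() - 1,
                       seed=1000 + seq_len(nrow(Design)))  # one seed per design row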


Storing and resuming temporary results

In the event of a computer crash, power outage, etc., if save = TRUE was used then the original code in the main source file need only be rerun to resume the simulation. The saved temp file will be read into the function, and the simulation will continue where it left off before it was terminated. Upon completion, a data.frame with the simulation results will be returned in the R session and a '.rds' file will be saved to the hard-drive (with the file name corresponding to the filename argument). To save the complete list of results returned from analyse to unique files, use save_results = TRUE.
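
For instance, a minimal sketch of this pattern (again assuming the Design, Generate, Analyse, and Summarise objects from the Examples section):

Final <- runSimulation(design=Design, replications=1000,
                       save=TRUE, save_results=TRUE,
                       generate=Generate, analyse=Analyse, summarise=Summarise)
# if the session is interrupted, rerunning this identical call in the same
# working directory reads the temp file back in and resumes the simulation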

Cluster computing

If the package is installed across a cluster of computers, and all the computers are accessible on the same LAN network, then the package may be run within the MPI paradigm. This simply requires that the computers be set up using the usual MPI requirements (typically, running some flavor of Linux, having password-less OpenSSH access, having addresses added to the /etc/hosts file, etc.). To set up the R code for an MPI cluster one need only add the argument MPI = TRUE and submit the files using the suitable BASH commands.

For instance, if the following code is run on the master node through a terminal then 16 processes will be summoned (1 master, 15 slaves) across the computers named localhost, slave1, and slave2.

mpirun -np 16 -H localhost,slave1,slave2 R --slave -f simulation.R
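
On the R side, the simulation.R file referenced above simply wraps the usual runSimulation() call in the doMPI setup (this mirrors the MPI example in the Examples section below):

library(doMPI)
cl <- startMPIcluster()
registerDoMPI(cl)
Final <- runSimulation(design=Design, replications=1000, MPI=TRUE,
                       generate=Generate, analyse=Analyse, summarise=Summarise)
closeCluster(cl)
mpi.quit()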

Poor man's cluster computing for independent nodes

In the event that you do not have access to a Beowulf-type cluster but have multiple personal computers, then the simulation code can be manually distributed across each independent computer instead. This simply requires passing a smaller value to the replications argument on each computer, and later aggregating the results using the aggregate_simulations function.

For instance, if you have two computers available and wanted 500 replications you could pass replications = 300 to one computer and replications = 200 to the other along with a save = TRUE argument. This will create two distinct .rds files which can be combined later with the aggregate_simulations function. The benefit of this approach over MPI is that computers need not be linked over a LAN network, and should the need arise the temporary simulation results can be migrated to another computer in case of a complete hardware failure by modifying the suitable compname input (or, if the filename and tmpfilename were modified, matching those files as well).
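
As a hedged sketch of this split (the file handling and the no-argument call to aggregate_simulations() are assumptions; see ?aggregate_simulations for the exact interface):

# Computer 1: run 300 of the 500 desired replications and save the results
runSimulation(design=Design, replications=300, save=TRUE,
              generate=Generate, analyse=Analyse, summarise=Summarise)

# Computer 2: run the remaining 200 replications and save the results
runSimulation(design=Design, replications=200, save=TRUE,
              generate=Generate, analyse=Analyse, summarise=Summarise)

# after copying the two saved .rds files into a single working directory,
# combine them into one result object
# Final <- aggregate_simulations()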

Details

For a skeleton version of the work-flow which may be useful when initially defining a simulation, see SimDesign_functions. Additional examples can be found on the package wiki, located at https://github.com/philchalmers/SimDesign/wiki.

The strategy for organizing the Monte Carlo simulation work-flow is to

  1. define the simulation conditions of interest in a design data.frame (one row per condition),
  2. define the generate, analyse, and summarise functions for a single replication,
  3. pass the design and functions to runSimulation() to collect the results across all replications and conditions, and
  4. analyse the returned data.frame as ordinary data (descriptives, regression models, plots, etc.).

See Also

generate, analyse, summarise, SimDesign_functions

Examples

#### Step 1 --- Define your conditions under study and create design data.frame

# (use EXPLICIT names, avoid things like N <- 100. That's fine in functions, not here)
sample_sizes <- c(30, 60, 90, 120)
standard_deviation_ratios <- c(1, 4, 8)
group_size_ratios <- c(.5, 1, 2)

Design <- expand.grid(sample_size=sample_sizes,
                      group_size_ratio=group_size_ratios,
                      standard_deviation_ratio=standard_deviation_ratios)
dim(Design)
head(Design)

#~~~~~~~~~~~~~~~~~~~~~~~~
#### Step 2 --- Define generate, analyse, and summarise functions

# skeleton functions to be edited
SimDesign_functions()

# help(generate)
Generate <- function(condition, fixed_design_elements = NULL){

    #require packages/define functions if needed, or better yet index with the :: operator

    N <- condition$sample_size
    grs <- condition$group_size_ratio
    sd <- condition$standard_deviation_ratio

    if(grs < 1){
        N2 <- N / (1/grs + 1)
        N1 <- N - N2
    } else {
        N1 <- N / (grs + 1)
        N2 <- N - N1
    }
    group1 <- rnorm(N1)
    group2 <- rnorm(N2, sd=sd)
    dat <- data.frame(group = c(rep('g1', N1), rep('g2', N2)), DV = c(group1, group2))

    return(dat)
}

# help(analyse)

Analyse <- function(condition, dat, fixed_design_elements = NULL, parameters = NULL){

    # require packages/define functions if needed, or better yet index with the :: operator
    require(stats)
    mygreatfunction <- function(x) print('Do some stuff')

    #wrap computational statistics in try() statements to control estimation problems
    welch <- try(t.test(DV ~ group, dat), silent=TRUE)
    ind <- try(t.test(DV ~ group, dat, var.equal=TRUE), silent=TRUE)

    # check if any errors occurred. This will re-draw the data
    check_error(welch, ind)

    # In this function the p values for the t-tests are returned,
    #  and make sure to name each element, for future reference
    ret <- c(welch = welch$p.value, independent = ind$p.value)

    return(ret)
}

# help(summarise)

Summarise <- function(condition, results, fixed_design_elements = NULL, parameters_list = NULL){

    #find results of interest here (e.g., alpha < .1, .05, .01)
    lessthan.05 <- EDR(results, alpha = .05)

    # return the results that will be appended to the design input
    ret <- c(lessthan.05=lessthan.05)
    return(ret)
}


#~~~~~~~~~~~~~~~~~~~~~~~~
#### Step 3 --- Collect results by looping over the rows in design

# test to see if it works and for debugging
Final <- runSimulation(design=Design, replications=5, parallel=FALSE,
                       generate=Generate, analyse=Analyse, summarise=Summarise)

# complete run with 1000 replications per condition
Final <- runSimulation(design=Design, replications=1000, parallel=TRUE,
                       generate=Generate, analyse=Analyse, summarise=Summarise)
head(Final)
View(Final)

## Debug the generate function. See ?browser for help on debugging
##   Type help to see available commands (e.g., n, c, where, ...),
##   ls() to see what has been defined, and type Q to quit the debugger
runSimulation(design=Design, replications=1000,
              generate=Generate, analyse=Analyse, summarise=Summarise,
              parallel=TRUE, edit='generate')

## Alternatively, place a browser() within the desired function line to
##   jump to a specific location
Summarise <- function(condition, results, fixed_design_elements = NULL, parameters_list = NULL){

    #find results of interest here (e.g., alpha < .1, .05, .01)
    nms <- c('welch', 'independent')
    lessthan.05 <- EDR(results[,nms], alpha = .05)

    browser()

    # return the results that will be appended to the design input
    ret <- c(lessthan.05=lessthan.05)
    return(ret)
}

runSimulation(design=Design, replications=1000,
              generate=Generate, analyse=Analyse, summarise=Summarise,
              parallel=TRUE)




## EXTRA: To run the simulation on a MPI cluster, use the following setup on each node (not run)
# library(doMPI)
# cl <- startMPIcluster()
# registerDoMPI(cl)
# Final <- runSimulation(design=Design, replications=1000, MPI=TRUE,
#                        generate=Generate, analyse=Analyse, summarise=Summarise)
# closeCluster(cl)
# mpi.quit()



#~~~~~~~~~~~~~~~~~~~~~~~~
# Step 4 --- Post-analysis: create a new R file for analyzing the Final data.frame as ordinary data.
# For example, use lm() to examine main effects and interactions, and build plots as needed.
# This is where you get to be a data analyst!

psych::describe(Final)
psych::describeBy(Final, group = Final$standard_deviation_ratio)

# make into factors (if helpful)
Final$f_gsr <- with(Final, factor(group_size_ratio))
Final$f_sdr <- with(Final, factor(standard_deviation_ratio))

#lm analysis (might want to change DV to a logit for better stability)
mod <- lm(lessthan.05.welch ~ f_gsr * f_sdr, Final)
car::Anova(mod)

mod2 <- lm(lessthan.05.independent ~ f_gsr * f_sdr, Final)
car::Anova(mod2)
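
# Hedged sketch of the logit idea noted above: model the detection rates on the
# logit scale (qlogis() is undefined at exactly 0 or 1, so the rates may need a
# small adjustment in practice)
Final$logit_welch <- qlogis(Final$lessthan.05.welch)
mod_logit <- lm(logit_welch ~ f_gsr * f_sdr, Final)
car::Anova(mod_logit)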

# make some plots
library(ggplot2)
library(reshape2)
welch_ind <- Final[,c('group_size_ratio', "standard_deviation_ratio",
    "lessthan.05.welch", "lessthan.05.independent")]
dd <- melt(welch_ind, id.vars = names(welch_ind)[1:2])

ggplot(dd, aes(factor(group_size_ratio), value)) +
    geom_abline(intercept=0.05, slope=0, col = 'red') +
    geom_abline(intercept=0.075, slope=0, col = 'red', linetype='dotted') +
    geom_abline(intercept=0.025, slope=0, col = 'red', linetype='dotted') +
    geom_boxplot() + facet_wrap(~variable)

ggplot(dd, aes(factor(group_size_ratio), value, fill = factor(standard_deviation_ratio))) +
    geom_abline(intercept=0.05, slope=0, col = 'red') +
    geom_abline(intercept=0.075, slope=0, col = 'red', linetype='dotted') +
    geom_abline(intercept=0.025, slope=0, col = 'red', linetype='dotted') +
    geom_boxplot() + facet_grid(variable~standard_deviation_ratio) +
    theme(legend.position = 'none')
