
nem (version 2.46.0)

set.default.parameters: Get/set hyperparameters

Description

Allows the user to set and retrieve various hyperparameters for the different inference methods.

Usage

set.default.parameters(Sgenes, ...)

Arguments

Sgenes
character vector of S-gene identifiers
...
parameters to set (see details)

Value

A list containing all parameters described in the Details section below.

Details

Since version 2.5.4, functions in the nem package no longer take a large number of individual parameters. Instead, there is a single hyperparameter list, which is passed to all functions. The individual parameter values within this hyperparameter list can be set with this function.
type
mLL or FULLmLL or CONTmLL or CONTmLLBayes or CONTmLLMAP or depn. CONTmLLDens and CONTmLLRatio are identical to CONTmLLBayes and CONTmLLMAP and are still supported for compatibility reasons. mLL and FULLmLL are used for binary data (see BoutrosRNAiDiscrete) and CONTmLL for a matrix of effect probabilities. CONTmLLBayes and CONTmLLMAP are used if log-odds ratios, p-value densities or any other model specify effect likelihoods. CONTmLLBayes refers to an inference scheme where the linking positions of effect reporters to network nodes are integrated out, and CONTmLLMAP to an inference scheme where a MAP estimate of the linking positions is calculated. depn indicates Deterministic Effects Propagation Networks (DEPNs).
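
As an illustrative sketch, the inference type can be matched to the data at hand (the S-gene names "A"-"E" are hypothetical):

# binary effects (cf. BoutrosRNAiDiscrete)
ctrl.bin  = set.default.parameters(LETTERS[1:5], type = "mLL")
# effect log-odds / densities, E-gene positions integrated out
ctrl.dens = set.default.parameters(LETTERS[1:5], type = "CONTmLLBayes")
# effect log-odds / densities, MAP estimate of E-gene positions
ctrl.map  = set.default.parameters(LETTERS[1:5], type = "CONTmLLMAP")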

para
vector of length two: false positive rate and false negative rate for binary data. Used by mLL
hyperpara
vector of length four: used by FULLmLL() for binary data
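
As a hedged example (the error rates are illustrative only), para can be set together with type mLL for binary data:

# assume 13% false positives and 5% false negatives in the binary data
ctrl = set.default.parameters(LETTERS[1:5], type = "mLL", para = c(0.13, 0.05))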

Pe
prior of effect reporter positions in the phenotypic hierarchy (same dimension as D). Not used for type depn. Default: NULL

Pm
prior over models (n x n matrix). Default: NULL

Pm.frac_edges
expected fraction of edges in the true S-gene graph

Pmlocal
local model prior for pairwise and triple learning. For pairwise learning it is generated by local.model.prior according to the arguments local.prior.size and local.prior.bias

local.prior.size
prior expected number of edges in the graph (for pairwise learning). Default: no. nodes

local.prior.bias
bias towards double-headed edges. Default: 1 (no bias; for pairwise learning)

triples.thrsh
threshold for model averaging to combine triple models for each edge. Default: 0.5

lambda
regularization parameter to incorporate prior assumptions. May also be a vector of possible values if nemModelSelection is used. Default: 0 (no regularization)

delta
regularization parameter for automated subset selection of effect reporters. If no E-gene selection is wanted, set delta to 0. Default: 1/(no. S-genes + 1)
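
A minimal sketch of how these two regularization parameters might be combined (the numeric values are arbitrary):

# candidate lambdas for nemModelSelection; delta = 0 switches E-gene selection off
ctrl = set.default.parameters(LETTERS[1:5], type = "CONTmLLMAP",
                              lambda = c(0.01, 0.1, 1, 10, 100), delta = 0)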

selEGenes.method
If "regularization", E-gene selection is performed by introducing a "null" S-gene to which E-genes are attached with probability delta/ (no. S-genes + 1). If "iterative" and selEGenes=TRUE, getRelevantEGenes is called and a new model is trained on the selected E-genes. The process is then repeated until convergence. Default: "regularization"

selEGenes
Tune parameter delta for automated selection of E-genes. Default: FALSE. NOTE: Since version > 2.18.0, E-gene selection is performed by default via the regularization mechanism with parameter delta. If no E-gene selection is wanted, set delta to 0.

trans.close
Should transitively closed graphs always be computed? NOTE: This only has an effect for type nem.greedyMAP and depn. Default: TRUE

backward.elimination
For module networks and greedy hill-climbing inference: try to eliminate edges if this increases the likelihood. Only works if trans.close=FALSE. Default: FALSE

mode
For Bayesian network inference and DEPNs: binary_ML: effects come from a binomial distribution - ML learning of parameters (Bayesian networks only); binary_Bayesian: effects come from a binomial distribution - Bayesian learning of parameters (Bayesian networks only); continuous_ML: effects come from a normal distribution - ML learning of parameters; continuous_Bayesian: effects come from a normal distribution - Bayesian learning of parameters.

nu.intervention, lambda.intervention
For depn: For any perturbed node we suppose the unknown mean mu given its unknown variance sigma2 to be drawn from N(nu.intervention, sigma2/lambda.intervention). Default: nu.intervention=0.6, lambda.intervention=4

nu.no_intervention, lambda.no_intervention
The same parameters for unperturbed nodes. Default: nu.no_intervention=0.95, lambda.no_intervention=4

df.intervention, scale.intervention
For depn: The unknown variance sigma2 for perturbed nodes is supposed to be drawn from a scaled inverse chi-squared distribution Inv-χ²(df.intervention, scale.intervention). Default: df.intervention=4.4, scale.intervention=4.4

df.no_intervention, scale.no_intervention
The same parameters for unperturbed nodes. Default: df.no_intervention=4.4, scale.no_intervention=0.023
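
For illustration, the DEPN prior parameters can be set explicitly; the values below simply restate the documented defaults:

ctrl = set.default.parameters(LETTERS[1:5], type = "depn",
                              nu.intervention = 0.6, lambda.intervention = 4,
                              nu.no_intervention = 0.95, lambda.no_intervention = 4,
                              df.intervention = 4.4, scale.intervention = 4.4,
                              df.no_intervention = 4.4, scale.no_intervention = 0.023)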

map
For depn: Mapping of interventions to network nodes. The format is a named list of strings with names being the interventions and entries being the network nodes. Default: Entries and names are the network nodes.
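
A hedged sketch of the expected format, using hypothetical intervention names siRNA.A and siRNA.B that target the network nodes "A" and "B":

ctrl = set.default.parameters(c("A", "B"), type = "depn",
                              map = list(siRNA.A = "A", siRNA.B = "B"))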

outputdir
Directory where to put diagnostic plots. Default: folder "QualityControl" in current working directory

debug
Print out or plot diagnostic information. Default: FALSE

mc.cores
number of cores to be used on a multicore processor. Default: 8

mcmc.nsamples
Number of MCMC samples to take. Default: 1e6

mcmc.nburnin
Number of additional samples for the burn-in phase. Default: 1e6

mcmc.seed
random seed. Default: 1234

mcmc.hyperprior
Parameter of the exponential distribution hyperprior for the regularization parameter 1/lambda. Default: 1

eminem.maxsteps
Maximum number of iterations for the EM algorithm (MC.EMINEM). Default: 1000

eminem.sdVal
positive number between 1 and ncol(D)*(ncol(D)-1): number of edges to change in one MCMC step (see the MC.EMINEM paper for the authors' choice). Default: 1

eminem.changeHfreq
positive number of which mcmc.nsamples must be a multiple: the Empirical Bayes step is performed every eminem.changeHfreq MCMC steps (see the MC.EMINEM paper for the authors' choice); set it >= mcmc.nsamples (or leave at the default) to exclude this step. Default: NULL

prob.cutoff
Only edges with probability > prob.cutoff are assumed to be present. Default: 0.5
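
A sketch of how the MCMC and MC.EMINEM related parameters above might be adjusted (all values are illustrative, not recommendations):

ctrl = set.default.parameters(LETTERS[1:5],
                              mcmc.nsamples = 1e5, mcmc.nburnin = 1e4, mcmc.seed = 42,
                              eminem.maxsteps = 500, prob.cutoff = 0.5)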

References

Markowetz, F.; Bloch, J. & Spang, R., Non-transcriptional Pathway Features Reconstructed from Secondary Effects of RNA interference. Bioinformatics, 2005, 21, 4026 - 4032\

Markowetz, F.; Kostka, D.; Troyanskaya, O. & Spang, R., Nested Effects Models for High-dimensional Phenotyping Screens. Bioinformatics, 2007, 23, i305 - i312\

Fr\"ohlich, H.; Fellmann, M.; S\"ultmann, H.; Poustka, A. & Beissbarth, T. Large Scale Statistical Inference of Signaling Pathways from RNAi and Microarray Data. BMC Bioinformatics, 2007, 8, 386\

Fr\"ohlich, H.; Fellmann, M.; S\"ultmann, H.; Poustka, A. & Beissbarth, T. Estimating Large Scale Signaling Networks through Nested Effect Models with Intervention Effects from Microarray Data. Bioinformatics, 2008, 24, 2650-2656\

Tresch, A. & Markowetz, F., Structure Learning in Nested Effects Models Statistical Applications in Genetics and Molecular Biology, 2008, 7\

Zeller, C.; Fr\"ohlich, H. & Tresch, A., A Bayesian Network View on Nested Effects Models EURASIP Journal on Bioinformatics and Systems Biology, 2009, 195272\

Fr\"ohlich, H.; Tresch, A. & Beissbarth, T., Nested Effects Models for Learning Signaling Networks from Perturbation Data. Biometrical Journal, 2009, 2, 304 - 323\

Fr\"ohlich, H.; Sahin, \"O.; Arlt, D.; Bender, C. & Beissbarth, T. Deterministic Effects Propagation Networks for Reconstructing Protein Signaling Networks from Multiple Interventions. BMC Bioinformatics, 2009, 10, 322\

Fr\"ohlich, H.; Praveen, P. & Tresch, A., Fast and Efficient Dynamic Nested Effects Models. Bioinformatics, 2011, 27, 238-244\

Niederberger, T.; Etzold, S.; Lidschreiber, M; Maier, K.; Martin, D.; Fr\"ohlich, H.; Cramer, P.; Tresch, A., MC Eminem Maps the Interaction Landscape of the Mediator, PLoS Comp. Biol., 2012, submitted.

Examples

# set inference type and enable automatic E-gene selection
# for a network with nodes "A"-"E"
control = set.default.parameters(LETTERS[1:5], type = "CONTmLLBayes", selEGenes = TRUE)
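
The returned list is then typically handed to the package's inference functions via their control argument; a minimal sketch, assuming a suitable effect matrix D with columns corresponding to the S-genes "A"-"E":

names(control)   # inspect the stored hyperparameters
# res = nem(D, inference = "nem.greedy", control = control)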
