Continuous data simulation from a DAG.
rbn2(n, G = NULL, p, nei, low = 0.1, up = 1)
rbn3(n, p, s, a = 0, m, G = NULL, seed = FALSE)
A list including:
The number of outliers.
The adjacency matrix used. For the "rdag", G[i, j] = 2 and G[j, i] = 3 means that there is an arrow from j to i. For the "rdag2" the entries are either G[i, j] = G[j, i] = 0 (no edge) or G[i, j] = 1 and G[j, i] = 0 (indicating i -> j).
The matrix with the uniform values drawn from the interval [low, up].
The simulated data.
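As a toy illustration of the two adjacency encodings described above (a base-R sketch, not part of the package), the DAG 1 -> 2 -> 3 could be stored as follows:

```r
# "rdag2" encoding: G[i, j] = 1 and G[j, i] = 0 means an arrow i -> j
G2 <- matrix(0, nrow = 3, ncol = 3)
G2[1, 2] <- 1  # edge 1 -> 2
G2[2, 3] <- 1  # edge 2 -> 3

# "rdag" encoding: G[i, j] = 2 and G[j, i] = 3 means an arrow j -> i
G1 <- matrix(0, nrow = 3, ncol = 3)
G1[2, 1] <- 2; G1[1, 2] <- 3  # edge 1 -> 2
G1[3, 2] <- 2; G1[2, 3] <- 3  # edge 2 -> 3
```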
A number indicating the sample size.
A number indicating the number of nodes (or vertices, or variables).
The average number of neighbours.
A number in (0, 1); the probability of an edge (connection) between two nodes.
A number in [0, 1); the proportion of outliers in the simulated data.
A vector whose length equals the number of nodes. This is the mean vector of the normal distribution from which the data are to be generated. This is used only when a > 0.
If you already have an adjacency matrix in mind, plug it in here; otherwise, leave it NULL.
If seed is TRUE, the simulated data will always be the same.
Every child will be a function of some parents. The beta coefficients of the parents are drawn uniformly between two numbers, low and up; this is the lower bound. See the details for more information on this.
Every child will be a function of some parents. The beta coefficients of the parents are drawn uniformly between two numbers, low and up; this is the upper bound. See the details for more information on this.
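The effect of the seed argument can be mimicked with base R's set.seed (an assumption about the mechanism; the package presumably fixes the random seed internally):

```r
set.seed(12345)
a <- rnorm(5)    # first draw
set.seed(12345)
b <- rnorm(5)    # same seed, hence an identical draw
identical(a, b)  # TRUE
```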
Michail Tsagris.
R implementation and documentation: Michail Tsagris mtsagris@uoc.gr.
In the case where no adjacency matrix is given, an adjacency matrix is generated at random, using the number of nodes and the average number of neighbours.
For the "rdag2", this is a different way of simulating data from DAGs. The first variable is generated from a normal distribution. Every other variable can be a function of some previous ones. Suppose now that the i-th variable is a child of four previous variables. We then need four coefficients, one for each parent, which are drawn uniformly between the two numbers, low and up.
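As a sketch of this mechanism (illustrative base R, not the package's internal code), suppose the third variable is a child of the first two:

```r
set.seed(1)
n <- 100; low <- 0.1; up <- 1

x1 <- rnorm(n)  # first variable: standard normal
x2 <- rnorm(n)  # a second root variable

# one beta coefficient per parent, drawn uniformly from [low, up]
beta <- runif(2, min = low, max = up)
x3 <- beta[1] * x1 + beta[2] * x2 + rnorm(n)  # child plus normal noise

x <- cbind(x1, x2, x3)  # the simulated data matrix
```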
Tsagris M. (2019). Bayesian network learning with the PC algorithm: an improved and correct variation. Applied Artificial Intelligence, 33(2): 101--123.
Tsagris M., Borboudakis G., Lagani V. and Tsamardinos I. (2018). Constraint-based Causal Discovery with Mixed Data. International Journal of Data Science and Analytics, 6: 19--30.
Spirtes P., Glymour C. and Scheines R. (2001). Causation, Prediction, and Search. The MIT Press, Cambridge, MA, USA, 2nd edition.
Colombo D. and Maathuis M. H. (2014). Order-independent constraint-based causal structure learning. Journal of Machine Learning Research, 15(1): 3741--3782.
rbn, pchc, fedhc, mmhc
# \donttest{
x <- pchc::rbn3(100, 20, 0.2)$x
a <- pchc::pchc(x)
# }