
Bayesian network structure learning (via constraint-based, score-based and hybrid algorithms), parameter learning (via ML and Bayesian estimators) and inference.
PC (pc.stable): a modern implementation of the first practical constraint-based structure learning algorithm.
Grow-Shrink (gs): based on the Grow-Shrink Markov Blanket, the first (and simplest) Markov blanket detection algorithm used in a structure learning algorithm.
Incremental Association (iamb): based on the Markov blanket detection algorithm of the same name, which is based on a two-phase selection scheme (a forward selection followed by an attempt to remove false positives).
Fast Incremental Association (fast.iamb): a variant of IAMB which uses speculative stepwise forward selection to reduce the number of conditional independence tests.
Interleaved Incremental Association (inter.iamb): another variant of IAMB which uses forward stepwise selection to avoid false positives in the Markov blanket detection phase.
This package includes three implementations of each algorithm:
an optimized implementation (used when the optimized argument is set to TRUE), which uses backtracking to initialize the learning process of each node.
an unoptimized implementation (used when the optimized argument is set to FALSE), which is better at uncovering possible erratic behaviour of the statistical tests.
a cluster-aware implementation, which requires a running cluster set up with the makeCluster function from the parallel package.
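As a brief sketch (assuming the learning.test data set shipped with bnlearn and a machine with at least two cores), the three implementations of a constraint-based algorithm such as Grow-Shrink can be invoked as follows:

library(bnlearn)
library(parallel)
data(learning.test)
# optimized implementation (the default), which uses backtracking.
res.opt = gs(learning.test, optimized = TRUE)
# unoptimized implementation, useful to double-check the statistical tests.
res.raw = gs(learning.test, optimized = FALSE)
# cluster-aware implementation, using a cluster from the parallel package.
cl = makeCluster(2)
res.par = gs(learning.test, cluster = cl)
stopCluster(cl)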
The computational complexity of these algorithms is polynomial in the number of conditional independence tests.
Hill-Climbing (hc): a hill climbing greedy search on the space of the directed graphs. The optimized implementation uses score caching, score decomposability and score equivalence to reduce the number of duplicated tests.
Tabu Search (tabu): a modified hill-climbing able to escape local optima by selecting a network that minimally decreases the score function.
Random restart with a configurable number of perturbing operations is implemented for both algorithms.
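A minimal sketch of how these might be called on the learning.test data (the numbers of restarts, perturbing operations and tabu moves below are arbitrary):

# hill-climbing with 5 random restarts, each applying 10 perturbing operations.
res.hc = hc(learning.test, restart = 5, perturb = 10)
# tabu search keeping a tabu list of the last 15 structures.
res.tabu = tabu(learning.test, tabu = 15)
# compare the scores of the two learned networks.
score(res.hc, learning.test)
score(res.tabu, learning.test)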
Max-Min Hill-Climbing (mmhc): a hybrid algorithm which combines the Max-Min Parents and Children algorithm (to restrict the search space) and the Hill-Climbing algorithm (to find the optimal network structure in the restricted space).
Restricted Maximization (rsmax2): a more general implementation of the Max-Min Hill-Climbing, which can use any combination of constraint-based and score-based algorithms.
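For instance (a sketch on the same example data), mmhc() can be called directly, and rsmax2() should reproduce it when its restriction and maximization phases are set to MMPC and hill-climbing:

# Max-Min Hill-Climbing.
res.mmhc = mmhc(learning.test)
# the same hybrid scheme expressed through rsmax2(): restrict the search
# space with MMPC, then maximize the score with hill-climbing.
res.rs = rsmax2(learning.test, restrict = "mmpc", maximize = "hc")
all.equal(res.mmhc, res.rs)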
These algorithms learn the structure of the undirected graph underlying the Bayesian network, which is known as the skeleton of the network or the (partial) correlation graph. Therefore all the arcs are undirected, and no attempt is made to detect their orientation. They are often used in hybrid learning algorithms.
Max-Min Parents and Children (mmpc): a forward selection technique for neighbourhood detection based on the maximization of the minimum association measure observed with any subset of the nodes selected in the previous iterations.
Hiton Parents and Children (si.hiton.pc): a fast forward selection technique for neighbourhood detection designed to exclude nodes early based on the marginal association. The implementation follows the Semi-Interleaved variant of the algorithm.
Chow-Liu (chow.liu): an application of the minimum-weight spanning tree and the information inequality. It learns the tree structure closest to the true one in the probability space.
ARACNE (aracne): an improved version of the Chow-Liu algorithm that is able to learn polytrees.
All these algorithms have three implementations (unoptimized, optimized and cluster-aware) like other constraint-based algorithms.
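A short sketch on the example data; note that the returned structures contain undirected arcs only:

# local discovery: skeletons and trees, with undirected arcs.
res.mmpc = mmpc(learning.test)
res.cl = chow.liu(learning.test)
res.ar = aracne(learning.test)
# compare the undirected arcs found by Chow-Liu and ARACNE.
arcs(res.cl)
arcs(res.ar)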
The algorithms are aimed at classification, and favour predictive power over the ability to recover the correct network structure. The implementation in bnlearn assumes that all variables, including the classifiers, are discrete.
Naive Bayes (naive.bayes): a very simple algorithm assuming that all classifiers are independent and using the posterior probability of the target variable for classification.
Tree-Augmented Naive Bayes (tree.bayes): an improvement over naive Bayes, this algorithm uses Chow-Liu to approximate the dependence structure of the classifiers.
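A minimal sketch using learning.test, with A taken as the class variable (an arbitrary choice for illustration); for naive Bayes, parameter learning is performed implicitly when predicting:

# naive Bayes with A as the training (class) variable.
nb = naive.bayes(learning.test, training = "A")
pred = predict(nb, learning.test)
table(pred, learning.test$A)
# tree-augmented naive Bayes: a Chow-Liu tree over the explanatory variables.
tan = tree.bayes(learning.test, training = "A")
plot(tan)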
The conditional independence tests used in constraint-based algorithms in practice are statistical tests on the data set. Available tests (and the respective labels) are:
discrete case (categorical variables)
mutual information: an information-theoretic distance measure. It's proportional to the log-likelihood ratio (they differ by a 2n factor, where n is the sample size). The asymptotic chi-square test (mi and mi-adf, with adjusted degrees of freedom), the Monte Carlo permutation test (mc-mi), the sequential Monte Carlo permutation test (smc-mi), and the semiparametric test (sp-mi) are implemented.
shrinkage estimator for the mutual information (mi-sh): an improved asymptotic chi-square test.
Pearson's X^2: the classical Pearson's X^2 test for contingency tables. The asymptotic chi-square test (x2 and x2-adf, with adjusted degrees of freedom), the Monte Carlo permutation test (mc-x2), the sequential Monte Carlo permutation test (smc-x2) and the semiparametric test (sp-x2) are implemented.
discrete case (ordered factors)
Jonckheere-Terpstra: a trend test for ordinal variables. The asymptotic normal test (jt), the Monte Carlo permutation test (mc-jt) and the sequential Monte Carlo permutation test (smc-jt) are implemented.
continuous case (normal variables)
linear correlation: Pearson's linear correlation. The exact Student's t test (cor), the Monte Carlo permutation test (mc-cor) and the sequential Monte Carlo permutation test (smc-cor) are implemented.
Fisher's Z: a transformation of the linear correlation with asymptotic normal distribution. Used by commercial software (such as TETRAD II) for the PC algorithm (an R implementation is present in the pcalg package on CRAN). The asymptotic normal test (zf), the Monte Carlo permutation test (mc-zf) and the sequential Monte Carlo permutation test (smc-zf) are implemented.
mutual information: an information-theoretic distance measure. Again it is proportional to the log-likelihood ratio (they differ by a 2n factor). The asymptotic chi-square test (mi-g), the Monte Carlo permutation test (mc-mi-g) and the sequential Monte Carlo permutation test (smc-mi-g) are implemented.
shrinkage estimator for the mutual information (mi-g-sh): an improved asymptotic chi-square test.
hybrid case (mixed discrete and normal variables)
mutual information: an information-theoretic distance measure. Again it is proportional to the log-likelihood ratio (they differ by a 2n factor). Only the asymptotic chi-square test (mi-cg) is implemented.
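These labels are passed to ci.test() for individual tests, or to the constraint-based algorithms through their test argument; a brief sketch on the discrete example data:

# a single conditional independence test: is B independent of E given F?
ci.test("B", "E", "F", data = learning.test, test = "mi")
# use the Monte Carlo permutation variant inside a constraint-based
# algorithm, with 100 permutations.
res = gs(learning.test, test = "mc-mi", B = 100)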
Available scores (and the respective labels) are:
discrete case (categorical variables)
the multinomial log-likelihood (loglik) score, which is equivalent to the entropy measure used in Weka.
the Akaike Information Criterion score (aic).
the Bayesian Information Criterion score (bic), which is equivalent to the Minimum Description Length (MDL) and is also known as the Schwarz Information Criterion.
the logarithm of the Bayesian Dirichlet equivalent score (bde), a score equivalent Dirichlet posterior density.
the logarithm of the Bayesian Dirichlet sparse score (bds), a sparsity-inducing Dirichlet posterior density (not score equivalent).
the logarithm of the Bayesian Dirichlet score with Jeffrey's prior (bdj, not score equivalent).
the logarithm of the modified Bayesian Dirichlet equivalent score (mbde) for mixtures of experimental and observational data (not score equivalent).
the logarithm of the locally averaged Bayesian Dirichlet score (bdla, not score equivalent).
the logarithm of the K2 score (k2), a Dirichlet posterior density (not score equivalent).
continuous case (normal variables)
the multivariate Gaussian log-likelihood (loglik-g) score.
the corresponding Akaike Information Criterion score (aic-g).
the corresponding Bayesian Information Criterion score (bic-g).
a score equivalent Gaussian posterior density (bge).
hybrid case (mixed discrete and normal variables)
the conditional linear Gaussian log-likelihood (loglik-cg) score.
the corresponding Akaike Information Criterion score (aic-cg).
the corresponding Bayesian Information Criterion score (bic-cg).
All learning algorithms support arc whitelisting and blacklisting:
blacklisted arcs are never present in the graph.
arcs whitelisted in one direction only (i.e. A -> B is whitelisted but B -> A is not) have the respective reverse arcs blacklisted, and are always present in the graph.
arcs whitelisted in both directions (i.e. both A -> B and B -> A are whitelisted) are present in the graph, but their direction is set by the learning algorithm.
Any arc whitelisted and blacklisted at the same time is assumed to be whitelisted, and is thus removed from the blacklist.
In algorithms that learn undirected graphs, such as ARACNE and Chow-Liu, an arc must be blacklisted in both directions to blacklist the underlying undirected arc.
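To illustrate the both-directions case (a sketch; A and B are two nodes of learning.test):

# whitelist the arc between A and B in both directions: the arc is
# guaranteed to be present, but its direction is chosen by the algorithm.
wl = data.frame(from = c("A", "B"), to = c("B", "A"))
res.wl = gs(learning.test, whitelist = wl)
arcs(res.wl)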
Optimized implementations of constraint-based algorithms rely heavily on backtracking to reduce the number of tests needed by the learning algorithm. This approach may sometimes hide errors either in the Markov blanket or the neighbourhood detection steps, such as when hidden variables are present or there are external (logical) constraints on the interactions between the variables.
On the other hand, in the unoptimized implementations of constraint-based algorithms the learning of the Markov blanket and neighbourhood of each node is completely independent from the rest of the learning process. Thus it may happen that the Markov blanket or the neighbourhoods are not symmetric (i.e. A is in the Markov blanket of B but not vice versa), or that some arc directions conflict with each other.
The strict argument enables some measure of error correction for such inconsistencies, which may help to retrieve a good model when the learning process would otherwise fail:
if strict is set to TRUE, every error stops the learning process and results in an error message.
if strict is set to FALSE:
v-structures are applied to the network structure in lowest-p-value order; if any arc is already oriented in the opposite direction, the v-structure is discarded.
nodes which cause asymmetries in any Markov blanket are removed from that Markov blanket; they are treated as false positives.
nodes which cause asymmetries in any neighbourhood are removed from that neighbourhood; again they are treated as false positives (see Tsamardinos, Brown and Aliferis, 2006).
Each correction results in a warning.
Package: bnlearn
Type: Package
Version: 4.4.1
Date: 2019-03-05
This package implements some algorithms for learning the structure of Bayesian networks.
Constraint-based algorithms, also known as conditional independence learners, are all optimized derivatives of the Inductive Causation algorithm (Verma and Pearl, 1991). These algorithms use conditional independence tests to detect the Markov blankets of the variables, which in turn are used to compute the structure of the Bayesian network.
Score-based learning algorithms are general purpose heuristic optimization algorithms which rank network structures with respect to a goodness-of-fit score.
Hybrid algorithms combine aspects of both constraint-based and score-based algorithms, as they use conditional independence tests (usually to reduce the search space) and network scores (to find the optimal network in the reduced space) at the same time.
Several functions for parameter estimation, parametric inference, bootstrap, cross-validation and stochastic simulation are available. Furthermore, advanced plotting capabilities are implemented on top of the Rgraphviz and lattice packages.
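A compressed sketch of these facilities, assuming the discrete learning.test data and a fully directed structure learned with hc() (the query below uses the factor levels "a" and "b" found in that data set):

dag = hc(learning.test)
# parameter learning: maximum likelihood estimates of the conditional
# probability tables.
fitted = bn.fit(dag, learning.test)
# approximate inference: a conditional probability query.
cpquery(fitted, event = (A == "a"), evidence = (B == "b"))
# stochastic simulation: generate 100 new observations from the fitted network.
sim = rbn(fitted, n = 100)
# cross-validation of a learning algorithm.
bn.cv(learning.test, bn = "hc")
# bootstrapped arc strengths.
boot.strength(learning.test, R = 100, algorithm = "hc")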
Nagarajan R, Scutari M, Lebre S (2013). "Bayesian Networks in R with Applications in Systems Biology". Springer.
Scutari M (2010). "Learning Bayesian Networks with the bnlearn R Package". Journal of Statistical Software, 35(3):1--22.
Scutari M (2017). "Bayesian Network Constraint-Based Structure Learning Algorithms: Parallel and Optimized Implementations in the bnlearn R Package". Journal of Statistical Software, 77(2):1--20.
Koller D, Friedman N (2009). Probabilistic Graphical Models: Principles and Techniques. MIT Press.
Korb K, Nicholson AE (2010). Bayesian Artificial Intelligence. Chapman & Hall/CRC, 2nd edition.
Pearl J (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann.
library(bnlearn)
data(learning.test)
## Simple learning
# first try the Grow-Shrink algorithm
res = gs(learning.test)
# plot the network structure.
plot(res)
# now try the Incremental Association algorithm.
res2 = iamb(learning.test)
# plot the new network structure.
plot(res2)
# the network structures seem to be identical, don't they?
all.equal(res, res2)
# how many tests did each of the two algorithms use?
ntests(res)
ntests(res2)
# and the unoptimized implementation of these algorithms?
ntests(gs(learning.test, optimized = FALSE))
ntests(iamb(learning.test, optimized = FALSE))
## Greedy search
res = hc(learning.test)
plot(res)
## Another simple example (Gaussian data)
data(gaussian.test)
# first try the Grow-Shrink algorithm
res = gs(gaussian.test)
plot(res)
## Blacklist and whitelist use
# the arc B - F should not be there?
blacklist = data.frame(from = c("B", "F"), to = c("F", "B"))
blacklist
res3 = gs(learning.test, blacklist = blacklist)
plot(res3)
# force E - F direction (E -> F).
whitelist = data.frame(from = c("E"), to = c("F"))
whitelist
res4 = gs(learning.test, whitelist = whitelist)
plot(res4)
# use both blacklist and whitelist.
res5 = gs(learning.test, whitelist = whitelist, blacklist = blacklist)
plot(res5)
## Debugging
# use the debugging mode to see the learning algorithms
# in action.
res = gs(learning.test, debug = TRUE)
res = hc(learning.test, debug = TRUE)
# log the learning process for future reference.
sink(file = "learning-log.txt")
res = gs(learning.test, debug = TRUE)
sink()
# if something seems wrong, try the unoptimized version
# in strict mode (inconsistencies trigger errors):
res = gs(learning.test, optimized = FALSE, strict = TRUE, debug = TRUE)
# or disable strict mode to let the algorithm fix errors on the fly:
res = gs(learning.test, optimized = FALSE, strict = FALSE, debug = TRUE)