
causalDisco (version 1.0.1)

evaluate: Evaluate Causal Graph Estimates

Description

Computes various metrics to evaluate the difference between an estimated causal graph and the true causal graph. Designed primarily for assessing the performance of causal discovery algorithms.

Metrics are supplied as a list with three slots: $adj, $dir, and $other.

$adj

Metrics applied to the adjacency confusion matrix (see confusion()).

$dir

Metrics applied to the conditional orientation confusion matrix (see confusion()).

$other

Metrics applied directly to the adjacency matrices without computing confusion matrices.

The adjacency confusion matrix and the conditional orientation confusion matrix only work for caugi::caugi objects restricted to these edge types: -->, <-->, --- and no edge.

Usage

evaluate(truth, est, metrics = "all")

Value

A data.frame with one column for each computed metric. Adjacency metrics are prefixed with "adj_", orientation metrics are prefixed with "dir_", and other metrics carry no prefix.
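
For instance, the naming convention above implies that requesting adjacency precision, orientation recall, and SHD yields correspondingly prefixed columns. A sketch (the object names truth_cg and est_cg are hypothetical placeholders for two caugi::caugi objects):

```r
# Sketch based on the prefix rules described above, not verified output
res <- evaluate(
  truth_cg, est_cg,
  metrics = list(adj = "precision", dir = "recall", other = "shd")
)
# Per the naming rule, res should have columns
# "adj_precision", "dir_recall", and "shd".
names(res)
```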

Arguments

truth

True (ground-truth) caugi::caugi object.

est

Estimated caugi::caugi object.

metrics

List of metrics; see Details. If metrics = "all", all available metrics are computed.

See Also

Other metrics: confusion(), f1_score(), false_omission_rate(), fdr(), g1_score(), npv(), precision(), recall(), reexports, specificity()

Examples

cg1 <- caugi::caugi(A %-->% B + C)
cg2 <- caugi::caugi(B %-->% A + C)
evaluate(cg1, cg2)
evaluate(
  cg1,
  cg2,
  metrics = list(
    adj = c("precision", "recall"),
    dir = c("f1_score"),
    other = c("shd")
  )
)
