
CliquePercolation (version 0.4.0)

cpPermuteEntropy: Confidence Intervals Of Entropy Based On Random Permutations Of Network

Description

Function for determining confidence intervals of entropy values calculated for community partitions from clique percolation, based on randomly permuted networks of the original network.

Usage

cpPermuteEntropy(
  W,
  cpThreshold.object,
  n = 100,
  interval = 0.95,
  CFinder = FALSE,
  ncores,
  seed = NULL
)

Value

A list object with the following elements:

Confidence.Interval

a data frame with the lower and upper bound of the confidence interval for each k

Extracted.Rows

rows extracted from cpThreshold.object whose entropy values are larger than the upper bound of the specified confidence interval for the respective k

Settings

user-specified settings
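
For orientation, a brief sketch of accessing these elements, assuming a returned object named results (as in the Examples below):

results$Confidence.Interval   # lower and upper bound per k
results$Extracted.Rows        # retained rows of the cpThreshold object
results$Settings              # user-specified settings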

Arguments

W

A qgraph object or a symmetric matrix; see also qgraph

cpThreshold.object

A cpThreshold object; see also cpThreshold

n

number of permutations (default is 100)

interval

requested confidence interval (larger than zero and smaller than 1; default is 0.95)

CFinder

logical indicating whether clique percolation for weighted networks should be performed as in CFinder; see also cpAlgorithm

ncores

Numeric. Number of cores to use in computing results. Defaults to parallel::detectCores() / 2, that is, half of the available cores. Set to 1 to disable parallel computing

seed

Numeric. Set seed for reproducible results. Defaults to NULL

Author

Jens Lange, lange.jens@outlook.com

Details

The function generates n random permutations of the network specified in W. For each randomly permuted network, it runs cpThreshold (see cpThreshold for more information) with k and I values extracted from the cpThreshold object specified in cpThreshold.object. Across permutations, the confidence intervals of the entropy values are determined for each k separately.

The confidence interval of the entropy values is determined separately for each k. This is because larger values of k necessarily produce fewer communities on average, which decreases entropy. Comparing confidence intervals of smaller k to those of larger k would therefore be disadvantageous for larger k.

In the output, one can check the confidence intervals of each k. Moreover, a data frame is produced that takes the cpThreshold object specified in cpThreshold.object and removes all rows whose entropy values do not exceed the upper bound of the confidence interval of the respective k.
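
To make this logic concrete, the following is a minimal sketch of the idea behind the procedure, not the package's internal implementation. It assumes a symmetric adjacency matrix W (such as the unweighted matrix built in the Examples below), shuffles the upper triangle to obtain permuted networks, reruns cpThreshold on each, and takes per-k quantiles of the permuted entropy values. The permutation scheme, the number of permutations, and the column names k and Entropy.Threshold are assumptions of this sketch.

library(CliquePercolation)
library(qgraph)

set.seed(1)
ks <- 3:4
perm_entropy <- replicate(20, {
  A <- as.matrix(W)
  up <- upper.tri(A)
  A[up] <- sample(A[up])                     # shuffle edges in the upper triangle
  A[lower.tri(A)] <- t(A)[lower.tri(A)]      # restore symmetry
  g <- qgraph(A, DoNotPlot = TRUE)
  res <- cpThreshold(g, method = "unweighted", k.range = ks,
                     threshold = "entropy")
  tapply(res$Entropy.Threshold, res$k, max)  # assumed column names
})
# upper bounds of a two-sided 95% interval, separately for each k
apply(perm_entropy, 1, quantile, probs = 0.975)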

Examples

## Example with fictitious data

# create qgraph object
W <- matrix(c(0,1,1,1,0,0,0,0,
              0,0,1,1,0,0,0,0,
              0,0,0,0,0,0,0,0,
              0,0,0,0,1,1,1,0,
              0,0,0,0,0,1,1,0,
              0,0,0,0,0,0,1,0,
              0,0,0,0,0,0,0,1,
              0,0,0,0,0,0,0,0), nrow = 8, ncol = 8, byrow = TRUE)
W <- Matrix::forceSymmetric(W)
W <- qgraph::qgraph(W)

# create cpThreshold object
cpThreshold.object <- cpThreshold(W = W, method = "unweighted", k.range = c(3,4),
                                  threshold = "entropy")

# run cpPermuteEntropy with 100 permutations and 95% confidence interval
# \donttest{
results <- cpPermuteEntropy(W = W, cpThreshold.object = cpThreshold.object,
                            n = 100, interval = 0.95, ncores = 1, seed = 4186)

# check results
results
# }

## Example with Obama data set (see ?Obama)

# get data
data(Obama)

# estimate network
net <- qgraph::EBICglasso(qgraph::cor_auto(Obama), n = nrow(Obama))

# create cpThreshold object
# \donttest{
threshold <- cpThreshold(net, method = "weighted",
                         k.range = 3:4,
                         I.range = seq(0.1, 0.5, 0.01),
                         threshold = "entropy")
# }
                          
# run cpPermuteEntropy with 50 permutations and 99% confidence interval
# \donttest{
permute <- cpPermuteEntropy(net, cpThreshold.object = threshold,
                            interval = 0.99, n = 50, ncores = 1, seed = 4186)

# check results
permute
# }
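
# A possible next step (a sketch, not part of this help page): take the first
# retained row and run clique percolation with its k and I values. The column
# names "k" and "Intensity" are assumptions here; check
# names(permute$Extracted.Rows) before relying on them.
# \donttest{
best <- permute$Extracted.Rows[1, ]
cp <- cpAlgorithm(net, k = best$k, method = "weighted", I = best$Intensity)
# }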
