abess (version 0.4.11)

slide: Sparsity Learning for Ising moDel rEconstruction (SLIDE)

Description

Sparsity Learning for Ising moDel rEconstruction (SLIDE)

Usage

slide(
  x,
  weight = NULL,
  c.max = 8,
  max.support.size = NULL,
  tune.type = "cv",
  foldid = NULL,
  support.size = NULL,
  ic.scale = 1,
  graph.threshold = 0,
  newton = "approx"
)

Value

A sparse estimate of the interaction matrix of the Ising model.

Arguments

x

Input matrix, of dimension \(n \times p\); each row is an observation vector and each column is a predictor/feature/variable. Can be in sparse matrix format (inherit from class "dgCMatrix" in package Matrix).

weight

Observation weights. When weight = NULL, we set weight = 1 for each observation as default.

c.max

An integer specifying the splicing size. Default: c.max = 8.

max.support.size

The maximum node degree in the estimated graph. If prior information is available, we recommend setting this value accordingly. Otherwise, it is internally set to \(n / (\log p \log \log n)\) by default.
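As an illustrative sketch (not the package's internal code), the default cap described above can be computed directly from the sample size and dimension:

```r
# Sketch: the default maximum support size described above,
# n / (log(p) * log(log(n))) -- illustrative only.
n <- 1000
p <- 16
default_cap <- floor(n / (log(p) * log(log(n))))
default_cap
```

For n = 1000 and p = 16 this gives a cap of 186, so without prior information each node may have at most that many neighbors in the search.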

tune.type

The type of criterion for choosing the support size. Available options are "gic", "ebic", "bic", "aic" and "cv". Default is "gic".

foldid

An optional integer vector of values between 1 and nfolds identifying which fold each observation belongs to. The default, foldid = NULL, generates a random fold assignment.
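A minimal sketch of building such a vector by hand (the choice of nfolds = 5 and the seed are illustrative assumptions, not package defaults):

```r
# Sketch: constructing a foldid vector for 5-fold cross-validation.
# Each observation is assigned an integer in 1..nfolds.
set.seed(1)        # illustrative seed
n <- 1000
nfolds <- 5
foldid <- sample(rep(seq_len(nfolds), length.out = n))
table(foldid)      # fold sizes are balanced
```

Passing a hand-built foldid is useful when folds must respect some structure in the data (e.g. grouped observations).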

support.size

An integer vector representing the alternative support sizes. Only used for tune.path = "sequence". Default is 0:min(n, round(n / (log(log(n)) * log(p)))).

ic.scale

A non-negative value used for multiplying the penalty term in information criterion. Default: ic.scale = 1.

graph.threshold

A numeric value specifying the post-thresholding level for graph estimation. If prior knowledge about the minimum signal strength is available, this can be set to approximately half of that value. The default is 0.0, which means no thresholding is applied.
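The post-thresholding step can be sketched as follows; this is an illustrative reimplementation with a toy matrix, not the package's internal code:

```r
# Sketch of post-thresholding: zero out interaction estimates whose
# absolute value falls below graph.threshold (illustrative only).
theta_hat <- matrix(c(0, 0.05, 0.05, 0.3), nrow = 2)  # toy estimate
graph.threshold <- 0.1
theta_thr <- theta_hat * (abs(theta_hat) >= graph.threshold)
theta_thr  # entries below the threshold are set exactly to zero
```

With a minimum true signal strength around 0.3, a threshold of roughly half that value (as recommended above) removes the spurious 0.05 entries while keeping the true edge.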

newton

A character string specifying the Newton-type method used for fitting; it should be either newton = "exact" or newton = "approx". With "exact", the exact Hessian is used, while "approx" uses only the diagonal entries of the Hessian and can be faster.

Examples

library(abess)
p <- 16
n <- 1e3
# Simulate data from a binary Markov network (Ising model)
train <- generate.bmn.data(n, p, type = 3, graph.seed = 1, seed = 1, beta = 0.4)
res <- slide(train[["data"]], train[["weight"]], tune.type = "gic",
             max.support.size = rep(4, p), support.size = rep(4, p))
# Check that the estimated support matches the true graph
all((res[[1]] != 0) == (train[["theta"]] != 0))