spam (version 2.1-1)

chol: Cholesky Factorization for Sparse Matrices

Description

chol performs a Cholesky decomposition of a symmetric positive definite sparse matrix x of class spam.

Usage

chol(x, ...)

# S4 method for spam
chol(x, pivot = "MMD", method = "NgPeyton", memory = list(),
     eps = getOption("spam.eps"), ...)

update.spam.chol.NgPeyton(object, x, ...)

# S4 method for spam.chol.NgPeyton
update(object, x, ...)

Arguments

x

symmetric positive definite matrix of class spam.

pivot

should the matrix be permuted, and if so, with which algorithm; see ‘Details’ below.

method

Currently, only NgPeyton is implemented.

memory

Parameters specific to the method, see ‘Details’ below.

eps

threshold to test symmetry. Defaults to getOption("spam.eps").

...

further arguments passed to or from other methods.

object

an object from a previous call to chol.

Value

The function returns the Cholesky factor in an object of class spam.chol.method, where method is the factorization method used (i.e., spam.chol.NgPeyton for the default). Recall that this object is the Cholesky factor of a reordered matrix x; see also ordering.

Details

chol performs a Cholesky decomposition of a symmetric positive definite sparse matrix x of class spam. Currently, only the block sparse Cholesky algorithm of Ng and Peyton (1993) is implemented (method="NgPeyton").

To pivot/permute the matrix, you can choose between the multiple minimum degree (pivot="MMD") and the reverse Cuthill-McKee (pivot="RCM") orderings from George and Liu (1981). It is also possible to supply a specific permutation, in which case pivot is a vector. For compatibility reasons, pivot can also be a logical: FALSE performs no permutation and TRUE is equivalent to "MMD".
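
For illustration, a minimal sketch of the pivoting choices (the small matrix A below is only a toy example):

library("spam")
A <- as.spam(.25^abs(outer(1:10, 1:10, "-")), eps=1e-4)
ch1 <- chol(A)                  # default: multiple minimum degree ("MMD")
ch2 <- chol(A, pivot="RCM")     # reverse Cuthill-McKee
ch3 <- chol(A, pivot=FALSE)     # no permutation
ch4 <- chol(A, pivot=10:1)      # user-supplied permutation vector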

Often the sparsity structure is fixed and does not change, but the entries do. In those cases, the Cholesky factor can be updated with update.spam.chol.NgPeyton by supplying a Cholesky factor and the updated matrix. Notice that the call is effectively object <- update(object, x); the update feature without assignment has been disabled.
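
A short sketch of the update mechanism (A2 below is simply a rescaled copy of A, so the sparsity structure is unchanged):

library("spam")
A <- as.spam(.25^abs(outer(1:10, 1:10, "-")), eps=1e-4)
ch <- chol(A)
A2 <- 2 * A                     # same sparsity structure, new entries
ch <- update(ch, A2)            # reuses the symbolic factorization of ch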

The option cholupdatesingular determines how update handles singular matrices: it signals an error ("error"), issues a warning ("warning"), or returns the value NULL ("null").
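
For example (assuming the option is set through R's options mechanism with the "spam." prefix, as for spam.eps above):

options(spam.cholupdatesingular="null")     # update() returns NULL for a singular matrix
# alternatives: "warning" (issue a warning) or "error" (stop with an error)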

The Cholesky decomposition requires parameters linked to memory allocation. If the default values are too small, the Fortran routine returns an error to R, which allocates more space and calls the Fortran routine again. The user can also pass better estimates of the allocation sizes to chol with the argument memory=list(nnzR=..., nnzcolindices=...). The minimal sizes for a fixed sparsity structure can be obtained from a summary call; see ‘Examples’.

The output of chol can be used with forwardsolve and backsolve to solve a system of linear equations.
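
A minimal sketch of such a solve; it assumes that the permutation is handled internally when the factor object is passed to the triangular solvers, which the comparison with solve makes easy to verify:

library("spam")
A <- as.spam(.25^abs(outer(1:10, 1:10, "-")), eps=1e-4)
b <- rnorm(10)
ch <- chol(A)
x <- backsolve(ch, forwardsolve(ch, b))     # two triangular solves
range(x - solve(A, b))                      # should be numerically zero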

Notice that the Cholesky factorization of the package SparseM is also based on the algorithm of Ng and Peyton (1993), whereas the Cholesky routines of the package Matrix are based on CHOLMOD by Timothy A. Davis (C code).

References

Ng, E. G. and Peyton, B. W. (1993) Block sparse Cholesky algorithms on advanced uniprocessor computers, SIAM J. Sci. Comput., 14, 1034--1056.

Gilbert, J. R., Ng, E. G. and Peyton, B. W. (1994) An efficient algorithm to compute row and column counts for sparse Cholesky factorization, SIAM J. Matrix Anal. Appl., 15, 1075--1091.

George, A. and Liu, J. (1981) Computer Solution of Large Sparse Positive Definite Systems, Prentice Hall.

See Also

det.spam, solve.spam, forwardsolve.spam, backsolve.spam and ordering.

Examples

# generate multivariate normals:
set.seed(13)
n <- 25    # dimension
N <- 1000  # sample size
Sigma <- .25^abs(outer(1:n,1:n,"-"))
Sigma <- as.spam( Sigma, eps=1e-4)

cholS <- chol( Sigma)    
# cholS is the upper triangular Cholesky factor of the permuted matrix Sigma
iord <- ordering(cholS, inv=TRUE)

R <- as.spam(cholS)
mvsample <- ( array(rnorm(N*n),c(N,n)) %*% R)[,iord]
# It is often better to order the sample than the matrix
# R itself. 

# 'mvsample' is of class 'spam'. We need to transform it to a
# regular matrix, as there is no method 'var' for 'spam' (should there?).
norm( var( as.matrix( mvsample)) - Sigma, type='m')
norm( (t(R) %*% R)[iord, iord] - Sigma, type='m')  # back-permuted factor product reconstructs Sigma


# To speed up factorizations, memory allocations can be optimized:
opt <- summary(cholS)
# here, some elements of Sigma may be changed...
cholS <- chol( Sigma, memory=list(nnzR=opt$nnzR,nnzcolindices=opt$nnzc))

