
Estimate the reciprocal of the condition number of a matrix.
This is a generic function with several methods, as seen by showMethods(rcond).
rcond(x, norm, ...)

# S4 method for sparseMatrix,character
rcond(x, norm, useInv = FALSE, ...)
An estimate of the reciprocal condition number of x.
x: an R object that inherits from the Matrix class.
norm: character string indicating the type of norm to be used in the estimate. The default is "O" for the 1-norm ("O" is equivalent to "1"). For sparse matrices, when useInv = TRUE, norm can be any of the kinds allowed for norm(); otherwise, the other possible value is "I" for the infinity norm; see also norm().
logical (or "Matrix"
containing
solve(x)
). If not false, compute the reciprocal
condition number as solve(x)
.
This may be an efficient alternative (only) in situations where
solve(x)
is fast (or known), e.g., for (very) sparse or
triangular matrices.
Note that the result may differ depending on useInv
,
as per default, when it is false, an approximation is
computed.
...: further arguments passed to or from other methods.
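The following is a small sketch of how norm and useInv interact for a sparse matrix; it assumes, per the useInv description above, that the argument may also be the precomputed inverse itself rather than TRUE (library(Matrix) is needed, as in the examples further below):

library(Matrix)
## a small sparse, square matrix whose inverse is cheap to compute
m <- Matrix(c(3, 0, 1, 2), 2, 2, sparse = TRUE)
M <- bdiag(m, 2 * m, kronecker(m, m))   # block diagonal, still sparse
rcond(M)                             # default: 1-norm ("O") estimate
rcond(M, norm = "I", useInv = TRUE)  # infinity norm, via solve(M)
rcond(M, norm = "F", useInv = TRUE)  # Frobenius norm; allowed when useInv = TRUE
## reusing an inverse that is already available (assumed to be accepted,
## per the argument description above):
Minv <- solve(M)
rcond(M, useInv = Minv)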
The condition number of a regular (square) matrix is the product of the norm of the matrix and the norm of its inverse (or pseudo-inverse).
More generally, the condition number is defined (also for non-square matrices x) as the ratio of the largest to the smallest non-zero singular value of x. Whenever x is not a square matrix, in our method definitions, this is typically computed via rcond(qr.R(qr(X)), ...), where X is x or t(x).
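As a rough illustration of the non-square case (a sketch with an arbitrary full-column-rank matrix X, not taken from the reference):

set.seed(1)
X <- cbind(1, 1:10, rnorm(10))   # 10 x 3, full column rank
rcond(X)                         # non-square: handled via the QR route
rcond(qr.R(qr(X)))               # essentially the same computation, spelled out
## for comparison, the 2-norm condition number from the singular values:
sv <- svd(X)$d
max(sv) / min(sv)                # kappa_2(X)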
The condition number takes on values between 1 and infinity, inclusive, and can be viewed as a factor by which errors in solving linear systems with this matrix as coefficient matrix could be magnified.
rcond() computes the reciprocal condition number.
Condition numbers are usually estimated, since exact computation is costly in terms of floating-point operations. An (over) estimate of reciprocal condition number is given, since by doing so overflow is avoided. Matrices are well-conditioned if the reciprocal condition number is near 1 and ill-conditioned if it is near zero.
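A quick illustration of the two extremes (a sketch; Hilbert() comes from this package, and the exact rcond values are not asserted here):

library(Matrix)
rcond(diag(5))      # identity matrix: rcond is exactly 1 (well-conditioned)
rcond(Hilbert(5))   # Hilbert matrices are notoriously ill-conditioned
rcond(Hilbert(12))  # ... and their rcond approaches 0 rapidly with size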
Golub, G., and Van Loan, C. F. (1989). Matrix Computations, 2nd edition, Johns Hopkins, Baltimore.
library(Matrix)
x <- Matrix(rnorm(9), 3, 3)
rcond(x)
## typically "the same" (with more computational effort):
1 / (norm(x) * norm(solve(x)))
rcond(Hilbert(9)) # should be about 9.1e-13
## For non-square matrices:
rcond(x1 <- cbind(1,1:10))# 0.05278
rcond(x2 <- cbind(x1, 2:11))# practically 0, since x2 does not have full rank
## sparse
(S1 <- Matrix(rbind(0:1,0, diag(3:-2))))
rcond(S1)
m1 <- as(S1, "denseMatrix")
all.equal(rcond(S1), rcond(m1))
## wide and sparse
rcond(Matrix(cbind(0, diag(2:-1))))
## Large sparse example ----------
m <- Matrix(c(3,0:2), 2,2)
M <- bdiag(kronecker(Diagonal(2), m), kronecker(m,m))
36*(iM <- solve(M)) # still sparse
MM <- kronecker(Diagonal(10), kronecker(Diagonal(5),kronecker(m,M)))
dim(M3 <- kronecker(bdiag(M,M),MM)) # 12'800 ^ 2
if(interactive()) ## takes about 2 seconds if you have >= 8 GB RAM
  system.time(r <- rcond(M3))
## whereas this is *fast* even though it computes solve(M3)
system.time(r. <- rcond(M3, useInv=TRUE))
if(interactive()) ## the values are not the same
  c(r, r.) # 0.05555 0.013888
## for all 4 norms available for sparseMatrix :
cbind(rr <- sapply(c("1","I","F","M"),
                   function(N) rcond(M3, norm=N, useInv=TRUE)))
stopifnot(all.equal(r., 1/72, tolerance=1e-12))