dinvgauss(x, mean=1, shape=NULL, dispersion=1, log=FALSE)
pinvgauss(q, mean=1, shape=NULL, dispersion=1, lower.tail=TRUE, log.p=FALSE)
qinvgauss(p, mean=1, shape=NULL, dispersion=1, lower.tail=TRUE, log.p=FALSE, maxit=200L, tol=1e-14, trace=FALSE)
rinvgauss(n, mean=1, shape=NULL, dispersion=1)
x, q: vector of quantiles.
p: vector of probabilities.
n: sample size. If length(n) is larger than 1, then length(n) random values are returned.
mean: vector of (positive) means.
shape: vector of (positive) shape parameters.
dispersion: vector of (positive) dispersion parameters. Ignored if shape is not NULL, in which case dispersion=1/shape.
lower.tail: logical; if TRUE, probabilities are P(X <= q), otherwise P(X > q).
log: logical; if TRUE, the log-density is returned.
log.p: logical; if TRUE, probabilities are on the log-scale.
maxit: maximum number of Newton iterations.
tol: small positive numeric value giving the convergence tolerance for the quantile.
trace: logical; if TRUE then the working estimate for q from each iteration will be output.

Density (dinvgauss), probability (pinvgauss), quantile (qinvgauss) or random sample (rinvgauss) for the inverse Gaussian distribution with mean mean and dispersion dispersion.
Output is a vector of length equal to the maximum length of any of the arguments x
, q
, mean
, shape
or dispersion
.
If the first argument is the longest, then all the attributes of the input argument are preserved on output, for example, a matrix x
will give a matrix on output.
Elements of input vectors that are missing will cause the corresponding elements of the result to be missing, as will non-positive values for mean
or dispersion
.
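For example, assuming the statmod package is installed, both the attribute preservation and the NA propagation described above can be seen directly:

```r
library(statmod)

# Matrix input in the first argument gives matrix output with the same dim
x <- matrix(c(0.5, 1, 2, 4), nrow = 2)
d <- dinvgauss(x, mean = 1, dispersion = 1)
dim(d)  # same 2 x 2 shape as x

# Missing values in the input propagate to the result
dinvgauss(c(1, NA), mean = 1)  # second element is NA
```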
The variance of the distribution is dispersion*mean^3.
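As a point of reference, the inverse Gaussian density has a simple closed form. The base-R sketch below (not statmod's implementation, which works on the log-scale) evaluates it directly:

```r
# Inverse Gaussian density: with shape = 1/dispersion,
#   f(x) = sqrt(shape/(2*pi*x^3)) * exp(-shape*(x - mean)^2 / (2*mean^2*x))
dig <- function(x, mean = 1, dispersion = 1) {
  shape <- 1 / dispersion
  sqrt(shape / (2 * pi * x^3)) *
    exp(-shape * (x - mean)^2 / (2 * mean^2 * x))
}
dig(1)  # 1/sqrt(2*pi) = 0.3989423 at x = mean = 1, dispersion = 1
```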
The distribution has applications in reliability and survival analysis, and is one of the response distributions used in generalized linear models.

The shape and dispersion parameters are alternative parametrizations for the variability, with dispersion=1/shape.
.
Only one of these two arguments needs to be specified.
If both are set, then shape
takes precedence.
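A quick check of this equivalence (assuming statmod is installed): shape=2 and dispersion=0.5 describe the same distribution, and shape wins if both are supplied:

```r
library(statmod)
dinvgauss(2, mean = 1, shape = 2)
dinvgauss(2, mean = 1, dispersion = 0.5)           # same value as above
dinvgauss(2, mean = 1, shape = 2, dispersion = 9)  # shape takes precedence
```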
These functions implement the algorithms described by Giner and Smyth (2016).
pinvgauss uses a result from Chhikara and Folks (1974), with enhancements for right tails and log-probabilities.
rinvgauss uses an algorithm proposed by Michael et al (1976).
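The Michael et al (1976) method transforms a chi-squared deviate through the smaller root of a quadratic in the inverse Gaussian density, then chooses between the two roots with the appropriate probability. A scalar base-R sketch of the idea (not statmod's vectorized code):

```r
rig1 <- function(mu, lambda) {
  y  <- rnorm(1)^2                                 # chi-squared, 1 df
  x1 <- mu + mu^2 * y / (2 * lambda) -             # smaller root
        mu / (2 * lambda) * sqrt(4 * mu * lambda * y + mu^2 * y^2)
  # accept the smaller root with probability mu/(mu + x1), else use mu^2/x1
  if (runif(1) <= mu / (mu + x1)) x1 else mu^2 / x1
}
set.seed(1)
mean(replicate(1e4, rig1(mu = 1, lambda = 2)))  # close to the mean, 1
```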
qinvgauss uses the monotonically convergent Newton iteration developed by Giner and Smyth (2016).
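The idea behind the quantile iteration can be sketched in base R using the Chhikara-Folks expression for the CDF. Note this naive version lacks the careful starting values and monotone-convergence guarantees of the published algorithm, and an unguarded Newton step can diverge for tail probabilities:

```r
pig <- function(q, mu, lambda)                    # Chhikara-Folks CDF
  pnorm(sqrt(lambda / q) * (q / mu - 1)) +
    exp(2 * lambda / mu) * pnorm(-sqrt(lambda / q) * (q / mu + 1))
dig <- function(x, mu, lambda)                    # closed-form density
  sqrt(lambda / (2 * pi * x^3)) * exp(-lambda * (x - mu)^2 / (2 * mu^2 * x))
qig <- function(p, mu, lambda, tol = 1e-12, maxit = 200) {
  x <- mu                                         # crude starting value
  for (i in seq_len(maxit)) {
    step <- (pig(x, mu, lambda) - p) / dig(x, mu, lambda)  # Newton step
    x <- x - step
    if (abs(step) < tol * x) break
  }
  x
}
qig(0.5, mu = 1, lambda = 1)  # the median, about 0.676
```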
All internal computations are undertaken on the log-scale as far as possible.
The pinvgauss and qinvgauss functions make use of Taylor series expansions to achieve full floating point accuracy for small tail probabilities.
Chhikara, R. S., and Folks, J. L., (1974). Estimation of the inverse Gaussian distribution function. Journal of the American Statistical Association 69, 250-254.
Giner, G., and Smyth, G. K. (2016). statmod: Probability Calculations for the Inverse Gaussian Distribution. http://www.statsci.org/smyth/pubs/qinvgaussPreprint.pdf
Michael, J. R., Schucany, W. R., and Haas, R. W. (1976). Generating random variates using transformations with multiple roots. The American Statistician, 30, 88--90.
Tweedie, M. C. (1957). Statistical Properties of Inverse Gaussian Distributions I. Annals of Mathematical Statistics 28, 362-377.
dinvGauss, pinvGauss, qinvGauss and rinvGauss in the SuppDists package.
q <- rinvgauss(10, mean=1, disp=0.5) # generate vector of 10 random numbers
p <- pinvgauss(q, mean=1, disp=0.5) # p should be uniformly distributed
# Quantile for small right tail probability:
qinvgauss(1e-20, mean=1.5, disp=0.7, lower.tail=FALSE)
# Same quantile, but represented in terms of left tail probability on log-scale
qinvgauss(-1e-20, mean=1.5, disp=0.7, lower.tail=TRUE, log.p=TRUE)