Fits ordinal cumulative probability models for continuous or ordinal
	response variables, efficiently allowing for a large number of
	intercepts by capitalizing on the information matrix being sparse.
	Five different distribution functions are implemented, with the
	default being the logistic (i.e., the proportional odds
	model).  The ordinal cumulative probability models are stated in terms
	of exceedance probabilities (\(Prob[Y \ge y | X]\)) so that, as with
	OLS, larger predicted values are associated with larger Y.  This is
	important to note for the asymmetric distributions given by the
	log-log and complementary log-log families, for which negating the
	linear predictor does not result in \(Prob[Y < y | X]\).  The
	family argument is defined in orm.fit.  The model
	assumes that the inverse of the assumed cumulative distribution
	function, when applied to one minus the true cumulative distribution function
	and plotted on the \(y\)-axis (with the original \(y\) on the
	\(x\)-axis), yields parallel curves (though not necessarily linear).
	This can be checked by plotting the inverse cumulative probability
	function of one minus the empirical distribution function, stratified
	by X, and assessing parallelism.  Note that parametric
	regression models make the much stronger assumption of linearity of
	such inverse functions.
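As a rough illustration of this check (not part of the package), consider the logistic family, for which the inverse cumulative probability function is qlogis.  The sketch below uses a hypothetical data frame d with a continuous response y and a two-level grouping factor g; the plotted curves should be roughly parallel if the ordinal model's assumption holds for g.

# Minimal sketch of the parallelism check; d, y, g are hypothetical
set.seed(1)
d <- data.frame(g = factor(rep(c('a', 'b'), each = 200)),
                y = c(rnorm(200, 0), rnorm(200, 1)))
plot(NULL, xlim = range(d$y), ylim = c(-6, 6), xlab = 'y',
     ylab = 'logit of empirical exceedance probability')
for(i in seq_along(levels(d$g))) {
  yi <- sort(d$y[d$g == levels(d$g)[i]])
  n  <- length(yi)
  pr <- 1 - (seq_len(n) - 0.5) / n   # empirical Prob[Y >= y], continuity-corrected
  lines(yi, qlogis(pr), col = i)     # inverse CDF; curves should be roughly parallel
}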
For the print method, the format of output is controlled by the
	user previously running options(prType="lang"), where
	lang is "plain" (the default), "latex", or
	"html".  When using html with Quarto or RMarkdown,
  results='asis' need not be written in the chunk header.
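For example (with a hypothetical fit f and data frame d), one of the three output formats can be selected before printing:

options(prType = 'html')          # or 'latex' or 'plain'
f <- orm(y ~ x, data = d)         # hypothetical model
print(f, digits = 3, coefs = 10)  # renders as HTML in Quarto/RMarkdown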
Quantile.orm creates an R function that computes an estimate of
	a given quantile for a given value of the linear predictor (which was
	assumed to use the first intercept).  It uses a linear
	interpolation method by default, but you can override that to use a
	discrete method by specifying method="discrete" when calling
	the function generated by Quantile.
  Optionally, a normal-approximation confidence interval for the
	quantiles is computed using the delta method if conf.int > 0 is
	specified to the function generated by calling Quantile and
	X is also specified.  In that case, a "lims" attribute is
	included in the result computed by the derived quantile function.
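A minimal sketch of typical use, assuming a hypothetical data frame d with response y and predictor x; the argument names passed to the generated function below (other than conf.int and X, which are described above) are assumptions rather than a definitive specification:

f  <- orm(y ~ x, data = d, x = TRUE, y = TRUE)
qu <- Quantile(f)            # function that estimates quantiles of Y
qu(0.5, lp = predict(f))     # estimated medians at each observation's linear predictor
# Supplying conf.int > 0 together with X to qu() adds delta-method
# confidence limits as a "lims" attribute, as described above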
orm(formula, data=environment(formula),
    subset, na.action=na.delete, method="orm.fit",
    family=c("logistic", "probit", "loglog", "cloglog", "cauchit"),
    model=FALSE, x=FALSE, y=FALSE, lpe=FALSE,
    linear.predictors=TRUE, se.fit=FALSE,
    penalty=0, penalty.matrix,
    var.penalty=c('simple','sandwich'), scale=FALSE,
    maxit=30, weights, normwt=FALSE, ...)

# S3 method for orm
print(x, digits=4, r2=c(0,2,4), coefs=TRUE, pg=FALSE,
    intercepts=x$non.slopes < 10, title, ...)
# S3 method for orm
Quantile(object, codes=FALSE, ...)
The returned fit object of orm contains the following components
in addition to the ones mentioned under the optional arguments.
calling expression
table of frequencies for Y in order of increasing Y
vector with the following elements: number of observations used in the
fit, effective sample size ESS, number of unique Y values, median Y from among the
observations used in the fit, maximum absolute value of first
derivative of log likelihood, model likelihood ratio
\(\chi^2\), d.f.,  \(P\)-value, score \(\chi^2\)
statistic (if no initial values given), \(P\)-value, Spearman's
\(\rho\) rank correlation between the linear predictor and Y,
the Nagelkerke \(R^2\) index, \(R^2\) indexes computed by
R2Measures, the \(g\)-index, \(gr\) (the
\(g\)-index on the odds ratio scale), and \(pdm\) (the mean absolute
difference between 0.5 and the predicted probability that \(Y\geq\)
the marginal median).
In the case of penalized estimation, the "Model L.R." is computed
without the penalty factor, and "d.f." is the effective d.f. from
Gray's (1992) Equation 2.9.  The \(P\)-value uses this corrected model
L.R. \(\chi^2\) and corrected d.f.
The score chi-square statistic uses first derivatives which contain
penalty components.
set to TRUE if convergence failed (and maxit > 1) or if a
singular information matrix is encountered
estimated parameters
estimated variance-covariance matrix (inverse of information matrix)
for the middle intercept and regression coefficients.  See
lrm for details if penalization is used.
see lrm
the character string for family.
a vector of expressions containing functions for the cumulative probability, inverse cumulative probability, derivative, second derivative, and derivative as a function of only x
a list of functions for the choice of family, with
	elements cumprob (the cumulative probability distribution
	function), inverse (inverse of cumprob), deriv
	(first derivative of cumprob), and deriv2 (second
	derivative of cumprob)
-2 log likelihoods (counting penalty components).  When an offset variable is present, three deviances are computed: for intercept(s) only, for intercepts+offset, and for intercepts+offset+predictors.  When there is no offset variable, the vector contains deviances for the intercept(s)-only model and the model with intercept(s) and predictors.
number of intercepts in model
the index of the middle (median) intercept used in
	computing the linear predictor and var
see lrm
the penalty matrix actually used in the estimation
a sparse matrix representation of type
	matrix.csr from the SparseM package.  This allows the
	full information matrix with all intercepts to be stored efficiently,
	and matrix operations using the Cholesky decomposition to be fast.
	vcov.orm uses this information to compute the covariance
	matrix for intercepts other than the middle one.
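For instance, with a hypothetical fit f, the components described above are extracted in the usual way:

f <- orm(y ~ x, data = d)   # hypothetical fit
f$stats                     # n, ESS, model LR chi-square, R2 measures, g, pdm, ...
f$deviance                  # -2 log likelihoods
f$non.slopes                # number of intercepts
coef(f)                     # intercepts followed by regression coefficients
vcov(f)                     # covariance matrix for the middle intercept and slopes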
a formula object. An offset term can be included. The offset causes
fitting of a model such as \(logit(Y=1) = X\beta + W\), where \(W\) is the
offset variable having no estimated coefficient.
The response variable can be any data type; orm converts it
in alphabetic or numeric order to a factor variable and
recodes it 1,2,... internally.
data frame to use. Default is the current frame.
logical expression or vector of subscripts defining a subset of observations to analyze
function to handle NAs in the data. Default is na.delete, which
deletes any observation having response or predictor missing, while
preserving the attributes of the predictors and maintaining frequencies
of deletions due to each variable in the model.
This is usually specified using options(na.action="na.delete").
name of fitting function. Only allowable choice at present is orm.fit.
character value specifying the distribution family, which is one of the following:
"logistic" (the default) for the logistic distribution, "probit" for the
	Gaussian, "loglog" for the Gumbel maximum (\(exp(-exp(-x))\);
	extreme value type I), "cloglog" for the Gumbel minimum
	(\(1-exp(-exp(x))\)), and "cauchit" for the Cauchy distribution.
	These are the cumulative distribution functions assumed for \(Prob[Y \ge y | X]\).
causes the model frame to be returned in the fit object
causes the expanded design matrix (with missings excluded)
to be returned under the name x.  For print, an object
created by orm.
causes the response variable (with missings excluded) to be returned
under the name y.
set lpe=TRUE to store the vector of likelihood probability elements
in the fit object in a list element named lpe.
This will enable the ordESS function to summarize the
effective sample sizes of any censored observations.
causes the predicted X beta (with missings excluded) to be returned
under the name linear.predictors.  The first intercept is used.
causes the standard errors of the fitted values (on the linear predictor
scale) to be returned under the name se.fit.  The middle
intercept is used.
see lrm
see lrm
see lrm
set to TRUE to subtract column means and divide by
	column standard deviations of the design matrix
  before fitting, and to back-solve for the un-normalized covariance
  matrix and regression coefficients.  This can sometimes make the model
	converge for very large
  sample sizes where, for example, spline or polynomial component
  variables create scaling problems leading to loss of precision when
  accumulating sums of squares and crossproducts.
maximum number of iterations to allow in orm.fit
a vector (same length as y) of possibly fractional case weights
set to TRUE to scale weights so they sum to the length of
 y; useful for sample surveys as opposed to the default of
 frequency weighting
arguments that are passed to orm.fit, or from
    print, to prModFit.  Ignored for
		Quantile.  One of the most important arguments is family.
number of significant digits to use
vector of integers specifying which R^2 measures to print,
		with 0 for Nagelkerke R^2 and 1:4 corresponding to the 4 measures
		computed by R2Measures.  The default is to print the
		Nagelkerke index (labeled R2) and the second and fourth R2Measures,
		which are the measures adjusted for the number of predictors, first
		for the raw sample size and then for the effective sample size, which
		here is from the formula for the approximate variance of a log odds
    ratio in a proportional odds model.
set to TRUE to print g-indexes
specify coefs=FALSE to suppress printing the table
  of model coefficients, standard errors, etc.  Specify coefs=n
  to print only the first n regression coefficients in the
  model.
By default, intercepts are only printed if there are
    fewer than 10 of them.  Otherwise this is controlled by specifying
    intercepts=FALSE or TRUE.
a character string title to be passed to prModFit.
  Default is constructed from the name of the distribution family.
an object created by orm
if TRUE, uses the integer codes \(1,2,\ldots,k\)
	for the \(k\)-level response in computing the predicted quantile
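The following sketch (hypothetical data frame d containing response y, predictors x1 and x2, and case weights w) illustrates several of the arguments described in this section:

f <- orm(y ~ rcs(x1, 4) + x2, data = d, family = 'loglog',
         x = TRUE, y = TRUE, weights = w, normwt = TRUE, scale = TRUE)
print(f, r2 = 0:4, coefs = 5, intercepts = TRUE)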
Frank Harrell
Department of Biostatistics, Vanderbilt University
fh@fharrell.com
For the Quantile function:
Qi Liu and Shengxin Tu
Department of Biostatistics, Vanderbilt University
Sall J: A monotone regression smoother based on ordinal cumulative logistic regression, 1991.
Le Cessie S, Van Houwelingen JC: Ridge estimators in logistic regression. Applied Statistics 41:191--201, 1992.
Verweij PJM, Van Houwelingen JC: Penalized likelihood in Cox regression. Stat in Med 13:2427--2436, 1994.
Gray RJ: Flexible methods for analyzing survival data using splines, with applications to breast cancer prognosis. JASA 87:942--951, 1992.
Shao J: Linear model selection by cross-validation. JASA 88:486--494, 1993.
Verweij PJM, Van Houwelingen JC: Crossvalidation in survival analysis. Stat in Med 12:2305--2314, 1993.
Harrell FE: Model uncertainty, penalization, and parsimony. Available from https://hbiostat.org/talks/iscb98.pdf.
orm.fit, predict.orm, solve,
ordESS,
rms.trans, rms, polr,
latex.orm,  vcov.orm,
num.intercepts,
residuals.orm, na.delete,
na.detail.response,
pentrace, rmsMisc, vif,
predab.resample,
validate.orm, calibrate,
Mean.orm, gIndex, prModFit