qrrvglm.control(Rank = 1,
                Bestof = if (length(Cinit)) 1 else 10,
                checkwz = TRUE,
                Cinit = NULL,
                Crow1positive = TRUE,
                epsilon = 1.0e-06,
                EqualTolerances = NULL,
                eq.tolerances = TRUE,
                Etamat.colmax = 10,
                FastAlgorithm = TRUE,
                GradientFunction = TRUE,
                Hstep = 0.001,
                isd.latvar = rep(c(2, 1, rep(0.5, length = Rank)), length = Rank),
                iKvector = 0.1,
                iShape = 0.1,
                ITolerances = NULL,
                I.tolerances = FALSE,
                maxitl = 40,
                imethod = 1,
                Maxit.optim = 250,
                MUXfactor = rep(7, length = Rank),
                noRRR = ~ 1, Norrr = NA,
                optim.maxit = 20,
                Parscale = if (I.tolerances) 0.001 else 1.0,
                sd.Cinit = 0.02,
                SmallNo = 5.0e-13, 
                trace = TRUE,
                Use.Init.Poisson.QO = TRUE, 
                wzepsilon = .Machine$double.eps^0.75, ...)

Bestof: Integer. The best of Bestof models fitted is returned.
    This argument helps guard against local solutions by (hopefully)
    finding the global solution from many fits. The argument has value
    1 if an initial value for $C$ is inputted using Cinit.

checkwz: Logical. If TRUE, the diagonal elements of the working
    weight matrices are checked to see whether they are sufficiently
    positive, i.e., greater than wzepsilon. If not, any values less
    than wzepsilon are replaced by that value.

Cinit: Optional initial value for the matrix $C$. By default,
    .Init.Poisson.QO() is called to obtain initial values.

Crow1positive: Logical vector of length Rank (recycled if necessary):
    are the elements of the first row of $C$ positive? For example,
    if Rank is 4, then specifying Crow1positive = c(FALSE, TRUE)
    will force $C[1,1]$ and $C[1,3]$ to be negative, and $C[1,2]$
    and $C[1,4]$ to be positive.

eq.tolerances: Logical indicating whether each (quadratic) predictor
    has equal tolerances. Having eq.tolerances = TRUE
    can help avoid numerical problems, especially with binary data.
    Note that the estimated (common) tolerance matrix may or may not
    be positive-definite.

EqualTolerances: Defunct argument; use eq.tolerances instead.

Etamat.colmax: Positive integer, no smaller than Rank. Controls the amount
    of memory used by .Init.Poisson.QO(). It is the maximum
    number of columns allowed for the pseudo-response and its weights.
    In general, the larger the value, the better the initial values.

FastAlgorithm: Logical. Whether the fast algorithm is used; some
    details of it are given in Appendix A of Yee (2006). Setting
    FastAlgorithm = FALSE will give an error.

GradientFunction: Logical. Whether optim's argument gr
    is used or not, i.e., to compute gradient values. Used only if
    FastAlgorithm is TRUE. The default value is usually faster.

Hstep: Positive value. The step size used in the finite-difference
    approximation of the derivatives passed to optim.

isd.latvar: Initial standard deviations of the latent variables (site
    scores), recycled to length Rank. Used only if
    I.tolerances = TRUE. Used by
    .Init.Poisson.QO() to obtain initial values for $C$.

I.tolerances: Logical. If TRUE then the (common) tolerance matrix is the
    $R$ by $R$ identity matrix by definition. Note that having
    I.tolerances = TRUE implies eq.tolerances = TRUE, but
    not vice versa. Internally, the quadratic terms are then handled
    as offsets.

ITolerances: Defunct argument; use I.tolerances instead.

iKvector, iShape: Initial values for the $k$ and shape parameters of
    the negbinomial and gamma2 families, respectively.

Maxit.optim: Positive integer. The number of iterations given to
    optim at each of the optim.maxit iterations.

MUXfactor: Multiplication factor, recycled to length Rank. Used only
    if I.tolerances = TRUE. Offsets are $-0.5$
    multiplied by the sum of the squares of all $R$ latent variable
    values; if these become too large, numerical problems can occur.

noRRR: Formula giving the terms that are not included in the
    reduced-rank regression (i.e., that do not form the latent
    variables). The default, ~ 1, specifies an intercept only.

Norrr: Defunct argument; use noRRR instead.
    Use of Norrr will become an error soon.

optim.maxit: Positive integer. The number of times optim is invoked.

Parscale: Numeric. Fed into optim(..., control = list(parscale = Parscale));
    the elements of $C$ become $C$ / Parscale. Can be important when
    I.tolerances = TRUE; see the warning below.

sd.Cinit: Standard deviation of the random initial values for the
    elements of $C$. Used only if Use.Init.Poisson.QO = FALSE
    and $C$ is not inputted using Cinit.

trace: Logical indicating whether output is produced at each
    iteration. The default is TRUE because the
    calculations are numerically intensive, meaning it may take
    a long time, so that the user might think the computer has
    locked up.

SmallNo: Positive numeric, between .Machine$double.eps and 0.0001.
    Used to avoid under- or over-flow in the IRLS algorithm.
    Used only if FastAlgorithm is TRUE.

Use.Init.Poisson.QO: Logical. If TRUE then the function .Init.Poisson.QO() is
    used to obtain initial values for the canonical coefficients $C$.
    If FALSE then random numbers are used instead.

wzepsilon: Small positive number used to test whether the diagonals
    of the working weight matrices are sufficiently positive.

The default value of Bestof is a bare minimum for many datasets,
  therefore it will be necessary to increase its value to increase the
  chances of obtaining the global solution.   Having I.tolerances = TRUE means all the tolerance matrices
   are the order-$R$ identity matrix, i.e., it forces
   bell-shaped curves/surfaces on all species. This results in a
   more difficult optimization problem (especially for 2-parameter
   models such as the negative binomial and gamma) because of overflow
   errors and it appears there are more local solutions. To help avoid
   the overflow errors, scaling $C$ by the factor Parscale
   can help enormously. Even better, scaling $C$ by specifying
   isd.latvar is more understandable to humans. If failure to
   converge occurs, try adjusting Parscale, or better, setting
   eq.tolerances = TRUE (and hope that the estimated tolerance
   matrix is positive-definite). To fit an equal-tolerances model, it
   is firstly best to try setting I.tolerances = TRUE and varying
   isd.latvar and/or MUXfactor if it fails to converge.
   If it still fails to converge after many attempts, try setting
   eq.tolerances = TRUE, however this will usually be a lot slower
   because it requires a lot more memory.
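As a minimal sketch of this strategy (reusing the hspider data from the
Examples below; the isd.latvar and MUXfactor trial values here are
arbitrary illustrations, not recommendations):

library(VGAM)
hspider[, 1:6] <- scale(hspider[, 1:6])  # Centring/scaling matters if I.tolerances = TRUE

# First attempt: I.tolerances = TRUE with trial scaling values.
fit <- tryCatch(
  cqo(cbind(Pardlugu, Pardmont, Pardnigr, Pardpull) ~
        WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
      quasipoissonff, data = hspider,
      I.tolerances = TRUE, isd.latvar = 1, MUXfactor = 4),
  error = function(e) NULL)

# Fallback: the slower, more memory-hungry equal-tolerances fit.
# Non-convergence may surface as a warning rather than an error,
# so the trace output should be inspected as well.
if (is.null(fit))
  fit <- cqo(cbind(Pardlugu, Pardmont, Pardnigr, Pardpull) ~
               WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
             quasipoissonff, data = hspider, eq.tolerances = TRUE)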
For a model with rank $R > 1$, the latent variables are always uncorrelated, i.e., the variance-covariance matrix of the site scores is a diagonal matrix.
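This can be checked empirically; a minimal sketch for a rank-2 fit
(assuming the scaled hspider data used in the Examples below):

p2 <- cqo(cbind(Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~
            WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
          quasipoissonff, data = hspider, Rank = 2, eq.tolerances = TRUE)
round(cov(latvar(p2)), 4)  # Off-diagonal entries should be approximately zero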
   If setting eq.tolerances = TRUE is used and the common
   estimated tolerance matrix is positive-definite then that model is
   effectively the same as the I.tolerances = TRUE model (the two are
   transformations of each other). In general, I.tolerances = TRUE
   is numerically more unstable and presents a more difficult problem
   to optimize; the arguments isd.latvar and/or MUXfactor often
   must be assigned some good value(s) (possibly found by trial and error)
   in order for convergence to occur. Setting I.tolerances = TRUE
   forces a bell-shaped curve or surface onto all the species data,
   therefore this option should be used with deliberation. If unsuitable,
   the resulting fit may be very misleading. Usually it is a good idea
   for the user to set eq.tolerances = FALSE to see which species
   appear to have a bell-shaped curve or surface. Improvements to the
   fit can often be achieved using transformations, e.g., nitrogen
   concentration to log nitrogen concentration.
   Fitting a CAO model (see cao) first is a good idea for
   pre-examining the data and checking whether it is appropriate to fit
   a CQO model.
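A sketch of such a preliminary CAO fit (the df1.nl and Bestof values
are illustrative choices only):

library(VGAM)
ca1 <- cao(cbind(Pardlugu, Pardmont, Pardnigr, Pardpull) ~
             WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
           poissonff, data = hspider, Rank = 1, df1.nl = 2, Bestof = 3)
par(mfrow = c(2, 2))
plot(ca1, lcol = "blue")  # Roughly bell-shaped curves support fitting a CQO model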
Yee, T. W. (2006) Constrained additive ordination. Ecology, 87, 203--213.
cqo,
  rcqo,
  Coef.qrrvglm,
  Coef.qrrvglm-class,
  optim,
  binomialff,
  poissonff,
  negbinomial,
  gamma2,
  gaussianff.

# Poisson CQO with equal tolerances
set.seed(111)  # This leads to the global solution
hspider[,1:6] <- scale(hspider[,1:6])  # Good idea when I.tolerances = TRUE
p1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi,
               Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~
          WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
          quasipoissonff, data = hspider, eq.tolerances = TRUE)
sort(p1@misc$deviance.Bestof)  # A history of all the iterations
(isd.latvar <- apply(latvar(p1), 2, sd))  # Should be approx isd.latvar
 
# Refit the model with better initial values
set.seed(111)  # This leads to the global solution
p1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, 
               Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~
          WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
          I.tolerances = TRUE, quasipoissonff, data = hspider,
          isd.latvar = isd.latvar)  # Note the use of isd.latvar here
sort(p1@misc$deviance.Bestof)  # A history of all the iterations
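# The control arguments can be varied in the same way; the following is an
# illustrative sketch with arbitrary values (Bestof raised to guard against
# local solutions; Crow1positive = FALSE reflects the first ordination axis).
set.seed(123)
p2 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Auloalbi, Trocterr, Zoraspin) ~
            WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
          quasipoissonff, data = hspider, eq.tolerances = TRUE,
          Bestof = 20, Crow1positive = FALSE, trace = FALSE)
Coef(p2)@C[1, 1]  # Negative because Crow1positive = FALSE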