Initial values are required for optimization or sampling algorithms. A
  user may specify initial values, or use the GIV function to
  generate them randomly. Initial values specified by the user may fail to
  produce a finite posterior in complicated models, in which case the
  GIV function can help.
GIV has several uses. First, the
  IterativeQuadrature, LaplaceApproximation,
  LaplacesDemon, and VariationalBayes
  functions use GIV internally if unacceptable initial values are
  discovered. Second, the user may use GIV when developing their
  model specification function, Model, to check for potential
  problems. Third, the user may prefer to randomly generate acceptable
  initial values. Lastly, GIV is recommended for obtaining
  dispersed starting locations when running multiple or parallel chains
  with the LaplacesDemon.hpc function (such as for later use with
  the Gelman.Diagnostic function). In that case, GIV should
  be run once for each parallel chain, and the results should be stored
  by row in a matrix of initial values. For more information, see the
  LaplacesDemon.hpc documentation on initial values.
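Such a matrix of dispersed starting values can be sketched as follows, with a hypothetical stand-in generator pgf() in place of GIV (with LaplacesDemon loaded, GIV(Model, MyData, PGF=TRUE) would supply each row):

```r
# Hypothetical stand-in for GIV: draws c(beta[1:5], sigma) from a
# restricted range, much as a PGF might (base-R rnorm uses sd, not variance)
pgf <- function() c(rnorm(5, 0, 10), abs(rcauchy(1, 0, 5)))

set.seed(1)
Chains <- 4
Initial.Values <- matrix(NA, Chains, 6)
for (i in 1:Chains) Initial.Values[i, ] <- pgf()  # one row per chain
```

Each row of Initial.Values then seeds one parallel chain.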
It is strongly recommended that the user specify a
  Parameter-Generating Function (PGF) and include this function in the
  list of data. Although the PGF may be specified according to the prior
  distributions (in which case it could be considered a Prior-Generating
  Function), it is often specified with a more restricted range. For
  example, if a
  user has a model with the following prior distributions
$$\beta_j \sim \mathcal{N}(0, 1000), j=1,\dots,5$$
  $$\sigma \sim \mathcal{HC}(25)$$
then the PGF, given the prior distributions, is
PGF <- function(Data) return(c(rnormv(5, 0, 1000), rhalfcauchy(1, 25)))
However, the user may not want to begin with initial values that could
  be so far from zero (as determined by the variance of 1000), and may
  instead prefer
PGF <- function(Data) return(c(rnormv(5, 0, 10), rhalfcauchy(1, 5)))
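The difference in dispersion is easy to see with base-R equivalents of these generators (rnormv is parameterized by the variance, so it corresponds to rnorm with sd = sqrt(variance); rhalfcauchy corresponds to the absolute value of a Cauchy draw):

```r
set.seed(1)
# Draws under the prior-based PGF versus the restricted PGF
wide   <- rnorm(1e4, 0, sqrt(1000))  # like rnormv(n, 0, 1000)
narrow <- rnorm(1e4, 0, sqrt(10))    # like rnormv(n, 0, 10)
sd(wide) / sd(narrow)                # about 10, since sqrt(1000/10) = 10
```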
When PGF=FALSE, GIV attempts to constrain initial values
  to the interval \([-100,100]\). This is done to prevent numeric
  overflow with parameters that are exponentiated within the model
  specification function. First, GIV passes the upper and lower
  bounds of this interval to the model, and any changed parameters are
  noted.
Ideally, the posterior is finite at this point. If it is not, then
  the remainder of the process is random and without the previous
  bounds. This can be particularly problematic when, say, the initial
  values are the elements of a matrix that must be positive-definite,
  especially a large matrix. If a random solution is not found, then
  GIV fails.
  
If the posterior is finite and PGF=FALSE, then initial values
  are generated randomly with a normal proposal, centered in the
  returned range of each parameter with a small variance. Each time
  GIV fails to find acceptable initial values, it iterates again,
  up to its maximum number of iterations, n, and the proposal
  variance increases with each iteration.
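The growing-variance search can be sketched conceptually as follows (this illustrates the idea, not GIV's exact internals; acceptability is reduced here to a simple finiteness-and-range check rather than an evaluation of Model):

```r
# Propose initial values near `center` with a proposal variance that
# grows on each failed iteration, up to n attempts
propose <- function(center, n = 100) {
  for (iter in 1:n) {
    x <- rnorm(length(center), center, sd = sqrt(iter))  # variance = iter
    if (all(is.finite(x)) && all(abs(x) <= 100)) return(x)  # "acceptable"
  }
  stop("search failed after n iterations")
}

set.seed(1)
x <- propose(rep(0, 3))  # three finite values near zero
```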
  
Initial values are considered acceptable only when the first two
  returned components of Model (which are LP and
  Dev) are finite, and when the initial values are not changed by
  constraints, as returned in the fifth component of the list,
  parm.
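As a sketch, the acceptability test amounts to the following, where Mo is the list returned by Model (assuming, as described above, that its first, second, and fifth components are LP, Dev, and parm):

```r
is.acceptable <- function(Mo, iv) {
  is.finite(Mo[[1]]) &&                           # LP is finite
    is.finite(Mo[[2]]) &&                         # Dev is finite
    identical(as.vector(Mo[[5]]), as.vector(iv))  # parm unchanged by constraints
}

# Toy returned list: finite LP and Dev, parm unchanged from the initial values
Mo <- list(LP = -3.2, Dev = 6.4, Monitor = 0, yhat = 0, parm = c(0.1, 0.2))
is.acceptable(Mo, c(0.1, 0.2))  # TRUE
```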
If GIV fails to return acceptable initial values, then it is
  best to study the model specification function. When the model is
  complicated, here is a suggestion. Remove the log-likelihood,
  LL, from the equation that calculates the logarithm of the
  unnormalized joint posterior density, LP. For example, convert
  LP <- LL + beta.prior to LP <- beta.prior. Now, maximize
  LP, which is merely the set of prior densities, with any
  optimization algorithm. Then replace LL, and run the model with
  initial values that are in regions of high prior density (preferably
  with PGF=TRUE). If this fails, then the model specification
  should be studied closely, because a non-finite posterior should
  never be associated with regions of high prior density.
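This prior-only maximization can be sketched with optim and a toy stand-in for the prior densities (in practice, the user's own priors and edited Model function take the place of prior.LP here):

```r
# Toy prior-only LP: sum of log prior densities, standing in for
# LP <- beta.prior after LL has been removed from the model
prior.LP <- function(parm) sum(dnorm(parm, 0, 1, log = TRUE))

# Maximize LP (optim minimizes, so negate) from a poor starting point
fit <- optim(rep(5, 3), function(p) -prior.LP(p))
fit$par  # close to the prior mode at zero
```

The resulting fit$par is then a candidate set of initial values in a region of high prior density.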