This function performs an elementary sensitivity analysis of two models with respect to marginal posterior distributions and posterior inferences.

`SensitivityAnalysis(Fit1, Fit2, Pred1, Pred2)`

Fit1

This argument accepts an object of class `demonoid`, `iterquad`, `laplace`, `pmc`, or `vb`.

Fit2

This argument accepts an object of class `demonoid`, `iterquad`, `laplace`, `pmc`, or `vb`.

Pred1

This argument accepts an object of class `demonoid.ppc`, `iterquad.ppc`, `laplace.ppc`, `pmc.ppc`, or `vb.ppc`.

Pred2

This argument accepts an object of class `demonoid.ppc`, `iterquad.ppc`, `laplace.ppc`, `pmc.ppc`, or `vb.ppc`.

This function returns a list with the following components:

This is a \(J \times 2\) matrix of \(J\) marginal posterior distributions. Column names are "p(Fit1 > Fit2)" and "var(Fit1) / var(Fit2)".

This is a \(N \times 2\) matrix of \(N\) posterior predictive distributions. Column names are "p(Pred1 > Pred2)" and "var(Pred1) / var(Pred2)".

Sensitivity analysis is concerned with the influence of changes in the inputs of a model on its output. The most common application of sensitivity analysis is comparing differences resulting from different prior distributions, though results from different likelihoods may be compared as well. Here, the outputs of interest are the marginal posterior distributions and posterior inferences.

There are many more methods of conducting a sensitivity analysis than exist in the `SensitivityAnalysis` function. For more information, see Oakley and O'Hagan (2004). The `SIR` function is useful for approximating changes in the posterior due to small changes in prior distributions.

The `SensitivityAnalysis` function compares marginal posterior distributions and posterior predictive distributions. Specifically, it calculates the probability that each distribution in `Fit1` and `Pred1` is greater than the associated distribution in `Fit2` and `Pred2`, and returns a variance ratio for each pair of distributions. If the probability is \(0.5\) that a distribution is greater than another, or if the variance ratio is \(1\), then no difference is found due to the inputs.
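The comparison statistics above are simple to sketch. Assuming `theta1` and `theta2` are matrices of posterior samples (iterations by parameters) from two fitted models, a minimal illustration of the per-parameter probability and variance ratio might look as follows. This is a hypothetical sketch, not the package's internal code, and it pairs draws by iteration as one simple estimator of the probability:

```r
# Hypothetical posterior sample matrices (iterations x parameters)
set.seed(1)
theta1 <- cbind(beta = rnorm(1000, 0, 1), sigma = rexp(1000, 1))
theta2 <- cbind(beta = rnorm(1000, 0.2, 1), sigma = rexp(1000, 2))

# Sketch of the comparison: only parameters present in both models
compare.marginals <- function(x1, x2) {
  common <- intersect(colnames(x1), colnames(x2))
  out <- t(sapply(common, function(p) {
    c(mean(x1[, p] > x2[, p]),      # estimate of p(Fit1 > Fit2)
      var(x1[, p]) / var(x2[, p]))  # var(Fit1) / var(Fit2)
  }))
  colnames(out) <- c("p(Fit1 > Fit2)", "var(Fit1) / var(Fit2)")
  out
}

compare.marginals(theta1, theta2)
# Probabilities near 0.5 and variance ratios near 1 suggest no
# difference due to the inputs.
```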

Additional comparisons and methods are currently outside the scope of the `SensitivityAnalysis` function. The `BayesFactor` function may also be considered, as well as comparing posterior predictive checks resulting from `summary.demonoid.ppc`, `summary.iterquad.ppc`, `summary.laplace.ppc`, `summary.pmc.ppc`, or `summary.vb.ppc`.

Regarding marginal posterior distributions, the `SensitivityAnalysis` function compares only distributions with identical parameter names. For example, suppose a statistician conducts a sensitivity analysis to study differences resulting from two prior distributions: a normal distribution and a Student t distribution. These distributions have two and three parameters, respectively. The statistician has named the parameters `beta` and `sigma` for the normal distribution, while for the Student t distribution the parameters are named `beta`, `sigma`, and `nu`. In this case, the `SensitivityAnalysis` function compares the marginal posterior distributions for `beta` and `sigma`, while `nu` is ignored because it is not in both models. If the statistician does not want certain parameters compared, then differing parameter names should be assigned.
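The name-matching behavior described above can be sketched with base R's `intersect` on hypothetical sample matrices (illustrative only; not the package's internal code):

```r
# Hypothetical posterior samples from a normal and a Student t model
samples.normal <- cbind(beta = rnorm(500), sigma = rexp(500))
samples.t      <- cbind(beta = rnorm(500), sigma = rexp(500),
                        nu = rexp(500))

# Only parameters present in both models are compared; nu is ignored
common <- intersect(colnames(samples.normal), colnames(samples.t))
common
# "beta" "sigma"
```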

Robust Bayesian analysis is a very similar topic, and often called simply Bayesian sensitivity analysis. In robust Bayesian analysis, the robustness of answers from a Bayesian analysis to uncertainty about the precise details of the analysis is studied. An answer is considered robust if it does not depend sensitively on the assumptions and inputs on which it is based. Robust Bayes methods acknowledge that it is sometimes very difficult to come up with precise distributions to be used as priors. Likewise the appropriate likelihood function that should be used for a particular problem may also be in doubt. In a robust Bayesian analysis, a standard Bayesian analysis is applied to all possible combinations of prior distributions and likelihood functions selected from classes of priors and likelihoods considered empirically plausible by the statistician.

Berger, J.O. (1984). "The Robust Bayesian Viewpoint (with discussion)". In J.B. Kadane, editor, *Robustness of Bayesian Analyses*, p. 63--144. North-Holland, Amsterdam.

Berger, J.O. (1985). *Statistical Decision Theory and Bayesian Analysis*. Springer-Verlag, New York.

Berger, J.O. (1994). "An Overview of Robust Bayesian Analysis (with discussion)". *Test*, 3, p. 5--124.

Oakley, J. and O'Hagan, A. (2004). "Probabilistic Sensitivity Analysis
of Complex Models: a Bayesian Approach". *Journal of the Royal
Statistical Society, Series B*, 66, p. 751--769.

Weiss, R. (1995). "An Approach to Bayesian Sensitivity Analysis".
*Journal of the Royal Statistical Society, Series B*, 58,
p. 739--750.

`BayesFactor`,
`IterativeQuadrature`,
`LaplaceApproximation`,
`LaplacesDemon`,
`PMC`,
`predict.demonoid`,
`predict.iterquad`,
`predict.laplace`,
`predict.pmc`,
`SIR`,
`summary.demonoid.ppc`,
`summary.iterquad.ppc`,
`summary.laplace.ppc`,
`summary.pmc.ppc`, and
`VariationalBayes`.

```
# NOT RUN {
sa <- SensitivityAnalysis(Fit1, Fit2, Pred1, Pred2)
sa
# }
```
