Robust Bayesian analysis

In statistics, robust Bayesian analysis, also called Bayesian sensitivity analysis, is a type of sensitivity analysis applied to the outcomes of Bayesian inference or Bayesian optimal decisions.

Sensitivity analysis

Robust Bayesian analysis, also called Bayesian sensitivity analysis, investigates the robustness of answers from a Bayesian analysis to uncertainty about the precise details of the analysis.[1][2][3][4][5][6] An answer is robust if it does not depend sensitively on the assumptions and calculation inputs on which it is based. Robust Bayes methods acknowledge that it is sometimes very difficult to come up with precise distributions to be used as priors.[4] Likewise, the appropriate likelihood function to use for a particular problem may also be in doubt.[7] In a robust Bayes approach, a standard Bayesian analysis is applied to all possible combinations of prior distributions and likelihood functions selected from classes of priors and likelihoods considered empirically plausible by the analyst. In this approach, a class of priors and a class of likelihoods together imply a class of posteriors by pairwise combination through Bayes' rule. Robust Bayes uses a similar strategy to combine a class of probability models with a class of utility functions to infer a class of decisions, any of which might be the answer given the uncertainty about the best probability model and utility function. In both cases, the result is said to be robust if it is approximately the same for each such pair. If the answers differ substantially, then their range is taken as an expression of how much (or how little) can be confidently inferred from the analysis.
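The basic strategy can be illustrated with a small numerical sketch. The example below assumes a Beta-Binomial conjugate setting with hypothetical data and a hypothetical rectangle of Beta prior parameters; these choices are illustrative assumptions, not drawn from the sources cited here. Each prior in the class is updated by Bayes' rule, and the spread of the resulting posterior means indicates how strongly the conclusion depends on the choice of prior.

```python
# Minimal sketch of a robust Bayes sensitivity check (illustrative assumptions):
# a Beta-Binomial model with a *class* of Beta(a, b) priors rather than a single
# prior. Each prior is updated by Bayes' rule, and the spread of the resulting
# posterior means shows how robust the conclusion is to the choice of prior.

import numpy as np

# Hypothetical data: 7 successes in 20 Bernoulli trials.
successes, trials = 7, 20

# Assumed prior class: all Beta(a, b) priors with a and b in [0.5, 5].
a_grid = np.linspace(0.5, 5.0, 25)
b_grid = np.linspace(0.5, 5.0, 25)

posterior_means = []
for a in a_grid:
    for b in b_grid:
        # Conjugate update: Beta(a, b) prior -> Beta(a + successes, b + failures).
        post_a = a + successes
        post_b = b + (trials - successes)
        posterior_means.append(post_a / (post_a + post_b))

lo, hi = min(posterior_means), max(posterior_means)
print(f"Posterior mean ranges over [{lo:.3f}, {hi:.3f}] across the prior class.")
# A narrow range suggests the inference is robust to the prior specification;
# a wide range quantifies how much the conclusion hinges on that choice.
```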

Although robust Bayes methods are clearly inconsistent with the Bayesian idea that uncertainty should be measured by a single additive probability measure and that personal attitudes and values should always be measured by a precise utility function, they are often accepted as a matter of convenience (e.g., because the cost or schedule does not allow the more painstaking effort needed to get a precise measure and function).[8] Some analysts also suggest that robust methods extend the traditional Bayesian approach by recognizing incertitude as a different kind of uncertainty.[6][8] Analysts in the latter category suggest that the set of distributions in the prior class is not a class of reasonable priors, but rather a reasonable class of priors. The idea is that no single distribution is reasonable as a model of ignorance, but, considered as a whole, the class is a reasonable model of ignorance.

Robust Bayes methods are related to important and seminal ideas in other areas of statistics such as robust statistics and resistance estimators.[9][10] The arguments in favor of a robust approach are often applicable to Bayesian analyses. For example, some criticize methods that must assume the analyst is "omniscient" about certain facts such as model structure, distribution shapes and parameters. Because such facts are themselves potentially in doubt, an approach that does not depend too sensitively on the analyst getting the details exactly right is preferred.

There are several ways to design and conduct a robust Bayes analysis, including the use of (i) parametric conjugate families of distributions, (ii) parametric but non-conjugate families, (iii) density-ratio classes (bounded density distributions),[11][12] (iv) ε-contamination,[13] mixture, and quantile classes, and (v) bounds on cumulative distributions.[14][15] Although calculating the solutions to robust Bayesian problems can, in some cases, be computationally intensive, there are several special cases in which the requisite calculations are, or can be made, straightforward.
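As an illustration of the ε-contamination class, the following sketch bounds the posterior mean over the class {(1 − ε)π0 + εq}, where π0 is a fixed base prior and q ranges over all distributions. The Gaussian base prior, Gaussian likelihood, observed value, and contamination fraction ε = 0.1 are all assumptions made for illustration, not from the cited works. For the posterior mean with q unrestricted, the extremes over the class are attained at point-mass contaminations, so a grid search over point masses suffices.

```python
# Illustrative sketch of an epsilon-contamination prior class (assumed setup):
# the class is {(1 - eps) * pi0 + eps * q}, with pi0 a fixed base prior and q
# ranging over all distributions. The posterior mean is a ratio of linear
# functionals of the prior, so its extremes over the class occur at point-mass
# contaminations, which can be scanned on a grid.

import numpy as np
from scipy.stats import norm

eps = 0.1                          # assumed contamination fraction
theta = np.linspace(-5, 5, 2001)   # parameter grid
dtheta = theta[1] - theta[0]

# Assumed base prior and likelihood: theta ~ N(0, 1), one observation x ~ N(theta, 1).
x = 1.5
pi0 = norm.pdf(theta, loc=0, scale=1)
lik = norm.pdf(x, loc=theta, scale=1)

# Ingredients of the base (uncontaminated) posterior.
m0 = np.sum(lik * pi0) * dtheta             # marginal likelihood under pi0
num0 = np.sum(theta * lik * pi0) * dtheta   # unnormalized posterior-mean numerator

# Posterior mean when the contamination is a point mass at t0:
#   [(1 - eps) * num0 + eps * t0 * L(t0)] / [(1 - eps) * m0 + eps * L(t0)]
post_means = ((1 - eps) * num0 + eps * theta * lik) / ((1 - eps) * m0 + eps * lik)

print(f"Base posterior mean: {num0 / m0:.3f}")
print(f"Posterior mean bounds over the class: [{post_means.min():.3f}, {post_means.max():.3f}]")
```

The width of the reported interval plays the same role as in the conjugate example above: it measures how much the posterior conclusion could shift under any contamination of the base prior up to the assumed fraction ε.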

References

  1. ^ Berger, J.O. (1984). The robust Bayesian viewpoint (with discussion). In J. B. Kadane, editor, Robustness of Bayesian Analyses, pages 63–144. North-Holland, Amsterdam.
  2. ^ Berger, J.O. (1985). Statistical Decision Theory and Bayesian Analysis. Springer-Verlag, New York.
  3. ^ Wasserman, L. A. (1992). Recent methodological advances in robust Bayesian inference (with discussion). In J. M. Bernardo, J. O. Berger, A. P. Dawid, and A. F. M. Smith, editors, Bayesian Statistics, volume 4, pages 483–502. Oxford University Press, Oxford.
  4. ^ a b Berger, J.O. (1994). "An overview of robust Bayesian analysis" (with discussion). Test 3: 5–124.
  5. ^ Insua, D.R. and F. Ruggeri (eds.) (2000). Robust Bayesian Analysis. Lecture Notes in Statistics, volume 152. Springer-Verlag, New York.
  6. ^ a b Pericchi, L.R. (2000). Sets of prior probabilities and Bayesian robustness.
  7. ^ Pericchi, L.R., and M. E. Pérez (1994). "Posterior robustness with more than one sampling model". Journal of Statistical Planning and Inference 40: 279–294.
  8. ^ a b Walley, P. (1991). Statistical Reasoning with Imprecise Probabilities. Chapman and Hall, London.
  9. ^ Huber, P.J. (1981). Robust Statistics. Wiley, New York.
  10. ^ Huber, P. J. (1972). Robust statistics: a review. Annals of Mathematical Statistics 43: 1041–1067.
  11. ^ DeRobertis, L., and J.A. Hartigan (1981). Bayesian inference using intervals of measures. The Annals of Statistics 9: 235–244.
  12. ^ Walley, P. (1997). A bounded derivative model for prior ignorance about a real-valued parameter. Scandinavian Journal of Statistics 24: 463–483.
  13. ^ Moreno, E., and L.R. Pericchi (1993). Bayesian robustness for hierarchical ε-contamination models. Journal of Statistical Planning and Inference 37: 159–168.
  14. ^ Basu, S. (1994). Variations of posterior expectations for symmetric unimodal priors in a distribution band. Sankhyā: The Indian Journal of Statistics, Series A 56: 320–334.
  15. ^ Basu, S., and A. DasGupta (1995). "Robust Bayesian analysis with distribution bands". Statistics and Decisions 13: 333–349.
