Bayesian learning with multiple priors and non-vanishing ambiguity

The existing models of Bayesian learning with multiple priors by Marinacci (2002) and by Epstein and Schneider (2007) formalize the intuitive notion that ambiguity should vanish through statistical learning in a one-urn environment. Moreover, the multiple-priors decision maker of these models will eventually learn the “truth”. To accommodate non-vanishing violations of Savage’s (1954) sure-thing principle, as reported in Nicholls et al. (2015), we construct and analyze a model of Bayesian learning with multiple priors in which ambiguity does not necessarily vanish. Our decision maker forms posteriors only from priors that pass a plausibility test in the light of the observed data, in the form of a γ-maximum expected loglikelihood prior-selection rule. The “stubbornness” parameter γ ≥ 1 determines the magnitude by which the expected loglikelihood with respect to a plausible prior may fall short of the maximal expected loglikelihood. The greater the value of γ, the more priors pass the plausibility test, so that less ambiguity vanishes in the limit of our learning model.
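
As a concrete illustration of the mechanism, the following Python sketch shows how a γ-maximum expected loglikelihood prior-selection rule could operate in a finite one-urn setting: each candidate prior is scored by its expected loglikelihood of the observed data, only priors scoring close enough to the best one (here, within a factor γ of the maximum) survive, and posteriors are formed from the survivors alone. The threshold form, function names, and numbers are illustrative assumptions, not the paper's exact construction.

import numpy as np

def select_plausible_priors(priors, loglik, gamma):
    """Keep priors whose expected loglikelihood is close enough to the maximum.

    priors : (K, S) array, row k is candidate prior k over S parameter values
    loglik : (S,) array, loglik[s] = loglikelihood of the observed data under
             parameter value s (non-positive for proper likelihoods)
    gamma  : 'stubbornness' parameter, gamma >= 1; a larger gamma lets more
             priors pass, so less ambiguity vanishes
    """
    expected_ll = priors @ loglik              # E_mu[log L(data)] for each prior
    best = expected_ll.max()
    # Hypothetical plausibility test: a prior passes if its expected
    # loglikelihood is no worse than gamma times the best one (loglikelihoods
    # are <= 0, so gamma * best is a lower bound that loosens as gamma grows).
    return priors[expected_ll >= gamma * best]

def posteriors_from_plausible_priors(priors, loglik, gamma):
    """Bayes-update every prior that survives the plausibility test."""
    survivors = select_plausible_priors(priors, loglik, gamma)
    unnormalized = survivors * np.exp(loglik)  # prior * likelihood, row-wise
    return unnormalized / unnormalized.sum(axis=1, keepdims=True)

# Toy example: two parameter values (urn compositions), three candidate priors,
# and one observation whose likelihood is 0.7 under the first urn and 0.2
# under the second. With gamma = 2.5, two of the three priors pass the test,
# so the set of posteriors (and hence the ambiguity) does not collapse.
priors = np.array([[0.5, 0.5], [0.9, 0.1], [0.1, 0.9]])
loglik = np.log(np.array([0.7, 0.2]))
print(posteriors_from_plausible_priors(priors, loglik, gamma=2.5))

In this toy run the extreme prior that puts most weight on the second urn is screened out, while the other two priors are both retained and Bayes-updated, which is the sense in which a larger γ leaves more ambiguity in place.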
Related Journal: Economic Theory
26 October 2016