In standard models of Bayesian learning, agents reduce their uncertainty about an event's true probability because their consistent estimator concentrates almost surely around this probability's true value as the number of observations becomes large. This paper takes the empirically observed violations of Savage's (1954) sure-thing principle seriously and asks whether Bayesian learners with ambiguity attitudes will reduce their ambiguity when sample information becomes large. To address this question, I develop closed-form models of Bayesian learning in which beliefs are described as Choquet estimators with respect to neo-additive capacities (Chateauneuf, Eichberger, and Grant 2007). Under the optimistic, the pessimistic, and the full Bayesian update rule, a Bayesian learner's ambiguity increases rather than decreases, so that these agents express ambiguity attitudes regardless of whether they have access to large sample information. While consistent Bayesian learning occurs under the Sarin-Wakker update rule, this result comes with the descriptive drawback that it does not apply to agents who still express ambiguity attitudes after one round of updating.
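To make the mechanism concrete, the following is a minimal illustrative sketch, not the paper's model: it evaluates a Choquet expectation with respect to a neo-additive capacity, for which the Choquet integral reduces to a weighted average of the best case, the worst case, and the additive expectation, and it iterates an update of the ambiguity degree delta of the form delta' = delta / (delta + (1 - delta) * p_event), which is one common statement of the full (generalized) Bayesian rule for neo-additive capacities. The payoff vector, probabilities, and parameter values below are arbitrary assumptions chosen for illustration.

```python
def choquet_neo_additive(x, p, delta, lam):
    """Choquet expectation of payoffs x under additive probabilities p,
    ambiguity degree delta in [0, 1], and optimism parameter lam in [0, 1].
    For a neo-additive capacity this reduces to:
      delta * (lam * max(x) + (1 - lam) * min(x)) + (1 - delta) * E_p[x]."""
    expected_value = sum(xi * pi for xi, pi in zip(x, p))
    return delta * (lam * max(x) + (1 - lam) * min(x)) + (1 - delta) * expected_value

def full_bayesian_update_delta(delta, p_event):
    """Ambiguity degree after conditioning on an event of (additive) prior
    probability p_event, under the full Bayesian update rule."""
    return delta / (delta + (1 - delta) * p_event)

# Hypothetical numbers: start with mild ambiguity and condition repeatedly
# on events of probability 0.5. Because p_event < 1, delta rises toward 1,
# illustrating why ambiguity need not vanish under large samples.
delta = 0.1
for _ in range(20):
    delta = full_bayesian_update_delta(delta, p_event=0.5)
print(delta)  # close to 1 after 20 rounds
```

Note how the updating operates on the odds delta / (1 - delta), which are divided by p_event at each step; any sequence of non-degenerate observed events therefore pushes delta upward, which is the sense in which ambiguity increases rather than washes out.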