We provide a novel rationale for why extreme forecasts are often more persuasive than moderate ones. We show that when people look for the most likely explanation of the views or opinions they observe, they will adopt an explanation according to which moderate views are just statistical derivatives of extreme views. Therefore, when using the most likely explanation to interpret a plurality of opinions, they will use only the information conveyed in the extreme views, completely ignoring more moderate ones. We characterize this maximum likelihood (ML) approach to information aggregation in a dynamic model and show that it leads to a simple and dynamically consistent way of aggregating opinions. We highlight some behavioral implications of this approach, such as directional updating and stagnation of beliefs. Finally, we analyze the convergence properties of the ML approach using extreme value theorems. We show that, in contrast to Bayesian updating, under the ML approach prior beliefs may continue to matter even when individuals are exposed indefinitely to rich information structures.
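The contrast between full Bayesian aggregation and an extreme-view-driven rule can be sketched in a hypothetical toy example. Everything below is an illustrative assumption, not the paper's model: opinions are reported posteriors about a binary state derived from a common prior via independent signals, the Bayesian benchmark pools all of them, and the ML-style rule is caricatured as simply adopting the single most extreme reported opinion.

```python
import math

def bayes_aggregate(prior, opinions):
    # Bayesian benchmark: pool all opinions by summing their
    # log-odds contributions relative to the common prior
    # (valid under the assumption of independent signals).
    prior_lo = math.log(prior / (1 - prior))
    lo = prior_lo + sum(
        math.log(p / (1 - p)) - prior_lo for p in opinions
    )
    return 1 / (1 + math.exp(-lo))

def ml_aggregate(prior, opinions):
    # Caricature of the ML approach: adopt only the opinion
    # farthest from the prior, ignoring all moderate views
    # (as if they were mere dilutions of the extreme one).
    return max(opinions, key=lambda p: abs(p - prior))

# Two moderate opinions and one extreme one, flat prior:
opinions = [0.55, 0.60, 0.90]
print(bayes_aggregate(0.5, opinions))  # uses all three opinions
print(ml_aggregate(0.5, opinions))     # keeps only 0.90
```

In this toy setup the Bayesian pools every opinion, so even moderate views shift the posterior, whereas the extreme-view rule discards them entirely, which is the qualitative behavior the abstract describes.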
PEW: Gilat Levy
Mon, Mar 25, 2019, 4:30 pm