Abstract
We develop a unified framework to study policy measures aimed at reducing the circulation of false news on social media, such as raising awareness of the dangers of misinformation, offering fact-checking, requiring confirmation clicks, and asking users to reflect on the content they are about to share. We evaluate the impact of these interventions on the sharing of false and true political news on Twitter using a randomized experiment during the 2022 U.S. midterm elections. We find that priming users about the circulation of fake news is the most effective policy for shifting the balance of shared news toward true content: it increases the sharing of true news and decreases the sharing of false news. We build a model of sharing decisions in which a sharer derives utility from persuading the audience, from signaling their partisan affiliation, and from maintaining their reputation as a credible source. In equilibrium, sharing depends on the perceived veracity and partisanship of content, the salience of reputational concerns, and sharing frictions. Structural estimation sheds light on the mechanisms: the impact of all considered policies comes mostly from their effects on the salience of reputation and on sharing frictions, while their effect on the perceived veracity of content plays a negligible role, even in the case of fact-checking. The priming intervention performs best because it raises the salience of reputation more than the other policies while adding minimal friction.