Crowdsourcing contextual information (Community Notes)

Reduce the spread of misinformation

Our confidence rating

Likely


What It Is

Attaching contextual information ('Notes') to misleading content. In the case of Community Notes, notes are crowdsourced and shown alongside a tweet when the Note has been rated "helpful" by users with diverse views.
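
The "diverse views" requirement is what distinguishes this approach from a simple majority vote. As a rough illustration only (X/Twitter's actual open-source scorer uses matrix factorization over the full rating history, not per-cluster averages), the sketch below surfaces a note only when raters in every viewpoint cluster find it helpful on average; the two-cluster setup and the 0.4 cutoff here are illustrative assumptions.

```python
# Toy sketch of a bridging-style rating rule (illustrative only; the real
# Community Notes scorer uses matrix factorization, not per-cluster averages).

def note_is_helpful(ratings, threshold=0.4):
    """ratings: list of (viewpoint_cluster, rated_helpful) pairs, where
    viewpoint_cluster is e.g. "A" or "B" and rated_helpful is a bool.
    The note is shown only if raters in *every* cluster, on average,
    rated it helpful above the threshold."""
    by_cluster = {}
    for cluster, helpful in ratings:
        by_cluster.setdefault(cluster, []).append(helpful)
    # Require agreement across clusters, not just an overall majority.
    return all(
        sum(votes) / len(votes) > threshold
        for votes in by_cluster.values()
    )

# A note liked by only one side of the divide stays hidden...
print(note_is_helpful([("A", True), ("A", True), ("B", False), ("B", False)]))  # False
# ...while cross-cluster agreement surfaces it.
print(note_is_helpful([("A", True), ("A", True), ("B", True), ("B", False)]))   # True
```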

Civic Signal Being Amplified

Understand: Show reliable information

When To Use It

Interactive

What Is Its Intended Impact

By attaching relevant information to misleading posts, Notes may make users less likely to see those posts as credible and less willing to reshare them.

Evidence That It Works

Three studies investigate whether Community Notes ("Notes") are effective in reducing retweets of false or misleading posts: one field experiment conducted by Twitter, and two studies that use public data shared by Twitter to analyze the impact of Notes with quasi-experimental methods. Overall, the studies suggest that attaching a Note may significantly reduce the likelihood that a post is reshared (by 25-50%). However, because there is a substantial delay between when a tweet is posted and when it receives a Note (~15 hours on average, by which point roughly 80% of retweets have already been made), the overall impact of Community Notes on reducing misinformation is small.
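
A back-of-envelope calculation shows why the delay matters so much. If a visible Note halves subsequent resharing but appears only after about 80% of retweets have already happened, the reduction in total retweets is roughly 0.5 × 0.2 = 10%; the snippet below just makes that arithmetic explicit.

```python
# Illustrative arithmetic: why a late-arriving Note has a small overall effect.
reduction_after_note = 0.50    # ~50% drop in resharing once a Note is visible
share_of_retweets_left = 0.20  # Notes appear, on average, after ~80% of retweets

overall_reduction = reduction_after_note * share_of_retweets_left
print(f"Overall reduction in total retweets: {overall_reduction:.0%}")  # ~10%
```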

Twitter's study (Wojcik et al., 2022) reports a 25-34% decrease in the number of retweets when a Note is attached to a misleading tweet. Because this study is a field experiment, presumably randomly assigning some tweets to show a Note while leaving other, similar tweets without one, we would normally give its findings substantial weight. In this case, however, the study's write-up (a single paragraph) does not provide enough detail (e.g., design, data, statistical method) for us to assess the strength of its findings.

The two other studies, however, are fully reported. Both make use of a public database released by Twitter but apply different quasi-experimental methods to detect the effect of a tweet receiving a Note.

Chuai et al. (2023) take advantage of the fact that Community Notes was rolled out in three stages: a pilot stage, in which Notes were hidden from all but a small subset of users; an initial rollout, in which all US users could see tweets with Notes; and a final rollout to users globally. In one analysis, the authors looked at two sets of false or misleading tweets before and after the US rollout: tweets whose Notes had been marked "helpful" (as rated by a sufficient number of ideologically diverse users) and tweets whose Notes had not reached the "helpful" threshold. Critically, Notes are only shown to users once they reach that threshold. This allowed the authors to see what happened when "helpful" Notes became visible to all US users, compared to Notes that remained hidden because they had not reached the bar. In this first analysis, the authors observed a positive effect: after the rollout, there was a drop in the number of shares of tweets with helpful Notes, but not among tweets with still-hidden Notes. Despite this promising finding, the authors did not see a similar effect for the global rollout, nor did they find effects in additional analyses, for example, comparing tweets with Notes just below and just above the "helpful" threshold. The authors conclude with skepticism that Community Notes have any impact.
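
In design terms, this first analysis resembles a difference-in-differences: compare the change in engagement, from before to after the rollout, for tweets whose Notes became visible against tweets whose Notes stayed hidden. A minimal sketch of that estimator, with hypothetical data and column names (the paper's actual models include additional controls), might look like:

```python
import pandas as pd

# Hypothetical tidy dataset: one row per tweet, with retweet counts measured
# before and after the US rollout, and whether its Note crossed the "helpful" bar.
tweets = pd.DataFrame({
    "note_helpful":    [True, True, False, False],
    "retweets_before": [120, 90, 110, 95],
    "retweets_after":  [60, 50, 105, 92],
})

tweets["change"] = tweets["retweets_after"] - tweets["retweets_before"]

# Difference-in-differences: the drop among helpful-Note tweets (whose Notes
# became visible at rollout) minus the drop among still-hidden-Note tweets.
did = (tweets.loc[tweets.note_helpful, "change"].mean()
       - tweets.loc[~tweets.note_helpful, "change"].mean())
print(f"Diff-in-diff estimate: {did:.1f} retweets")
```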

Renault et al. (2024) use the same dataset but set up a different quasi-experiment. They exploit the fact that misleading tweets whose Notes are eventually rated helpful go through a period in which those Notes have not yet reached the "helpful" threshold (a score of 0.4) and so are still hidden. They examine what happens when individual Notes become visible (crossing just above the threshold, to 0.40-0.43) to see if there is a drop-off in retweets. Importantly, because retweets naturally decline over time, they match those tweets with similar tweets whose Notes remain just below the threshold (0.37-0.40), and thus invisible, as a comparison "control" group. After a Note becomes visible, they observe a 50% decrease in retweets relative to matched "control" tweets, a finding they replicate in a second analysis of the data.
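
Conceptually, this is a matched comparison around the visibility cutoff: treated tweets are those whose Note score just crossed 0.40, and controls are matched tweets whose Notes stalled just below it. A minimal sketch, with hypothetical field names and made-up numbers chosen to echo the paper's 50% figure:

```python
import pandas as pd

# Hypothetical per-tweet data: the Note's helpfulness score and the number of
# retweets accrued after the moment the Note would become visible.
notes = pd.DataFrame({
    "score":          [0.41, 0.42, 0.43, 0.38, 0.39, 0.395],
    "retweets_after": [10, 8, 12, 22, 18, 20],
})

# Treated: Notes just above the 0.4 visibility threshold (shown to users).
treated = notes[(notes.score >= 0.40) & (notes.score <= 0.43)]
# Control: matched Notes just below the threshold (still hidden).
control = notes[(notes.score >= 0.37) & (notes.score < 0.40)]

# Because tweets on either side of the cutoff should be otherwise similar,
# the gap in post-threshold retweets estimates the effect of Note visibility.
effect = 1 - treated.retweets_after.mean() / control.retweets_after.mean()
print(f"Estimated reduction in retweets: {effect:.0%}")  # 50% in this toy data
```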

While Chuai et al. (2023) and Renault et al. (2024) reach different conclusions, their findings are not as far apart as they may seem. Although Chuai et al. take considerable steps to control for differences between their "treatment" and "control" tweets (aside from the key difference of having a visible Note), their design necessarily leaves room for other confounding factors (e.g., what kinds of tweets were receiving Notes before and after the rollouts). Renault et al. point out another key difference between the two approaches, one that ultimately highlights a limitation of Notes: their seemingly large observed effect occurs only after a Note becomes public, which is, on average, after 80% of retweets have already been made. In other words, any effect of a Note will usually arrive at the tail end of a tweet's engagement.

In sum, although the evidence on the effectiveness of Community Notes is mixed, it is likely that Notes reduce the resharing of false and misleading posts; their ability to have a meaningful effect, however, is limited by the substantial lag between when a false tweet is posted and when it receives a Note.

Why It Matters

Misinformation spread on social media poses potential threats to individual well-being and can lead to toxic division and mistrust. Other interventions that aim to reduce the spread of misinformation have had limited effect, in part because of mistrust of platforms and professional fact-checkers. Community Notes is a promising intervention because, by crowdsourcing checks on misinformation across ideologically diverse users, it has the potential to produce context that people find more trustworthy.

Special Considerations

Examples

This intervention entry currently lacks visual examples (screenshots, etc.).

Citations

Birdwatch: Crowd wisdom and bridging algorithms can inform understanding and reduce the spread of misinformation

Stefan Wojcik, Sophie Hilgard, Nick Judd, Delia Mocanu, Stephen Ragain, M. B. Hunzaker, Keith Coleman, and Jay Baxter
arXiv
October 27, 2022
arXiv:2210.15723

The Roll-Out of Community Notes Did Not Reduce Engagement With Misinformation on Twitter

Yuwei Chuai, Haoye Tian, Nicolas Pröllochs, and Gabriele Lenzini
arXiv
July 16, 2023

Collaboratively adding context to social media posts reduces the sharing of false news

Thomas Renault, David Restrepo Amariles, and Aurore Troussel
arXiv
March 3, 2024


