March 10, 2026

Do Social Media Feeds Fuel Political Division?

A new study suggests that algorithms can influence attitudes toward political outgroups.

It’s no secret that social media feeds are designed to maximize engagement, often with harmful consequences for individual users: from distraction and addictive use to anxiety and depression. Political scientists and advocates also worry that algorithms optimized to capture attention may deepen societal divisions by amplifying posts that demonize political opponents and provoke moral outrage.

Until now, evidence that algorithmic social media feeds fuel division has relied largely on observational research, theory, and survey experiments. The only field study conducted so far found that switching to a reverse-chronological feed (a non-algorithmic timeline that displays posts in the order they were published, as social media platforms originally did) did not improve users’ feelings toward counter-partisans compared with Facebook’s engagement algorithm. (A recent study suggests a reason why.)

But a new study suggests that while defaulting to a “neutral” reverse-chronological feed does not meaningfully reduce counter-partisan animosity compared with engagement algorithms, platforms can tune their algorithmic feeds to dial division up or down, and that these algorithmic changes can influence attitudes toward political outgroups.

Testing Divisive Content Exposure

Piccardi et al. recruited more than 1,000 X users who installed a browser extension that allowed researchers to modify the posts appearing in their feeds. Participants were asked to spend 30 minutes on X each day for ten days. All identified as either Republican or Democrat and were politically engaged, with at least 5 percent of their feeds typically consisting of political content.

To identify divisive content, the researchers used an LLM-based classifier that scored posts along eight dimensions of what they call “Antidemocratic Attitudes and Partisan Animosity” (AAPA). This framework captures different forms of hostile or anti-democratic political expression, allowing the system to detect and quantify divisive posts in participants’ feeds.
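To make the classification step concrete, here is a minimal sketch of how per-dimension labels might be aggregated into a single divisiveness score. The dimension names are generic placeholders, and the simple averaging rule is an illustrative assumption, not the authors’ exact formula:

```python
def aapa_score(dimension_labels: dict[str, bool]) -> float:
    """Return the fraction of AAPA dimensions an LLM flagged for a post.

    Placeholder aggregation: the study's actual scoring may weight or
    combine its eight dimensions differently.
    """
    if not dimension_labels:
        return 0.0
    return sum(dimension_labels.values()) / len(dimension_labels)

# Generic keys stand in for the paper's eight AAPA dimensions.
labels = {
    f"dimension_{i}": flag
    for i, flag in enumerate(
        [True, False, False, True, False, False, False, False]
    )
}
print(aapa_score(labels))  # 0.25 -- two of eight dimensions flagged
```

A post flagged on more dimensions scores higher, giving the reranking step a continuous signal to act on rather than a binary label.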

After the first three days, participants were randomly assigned to one of three groups for the remaining seven days of the study. The control group continued to see their feeds as usual under X’s default engagement-based algorithm. A second group had divisive political content downranked, cutting their exposure by 10 percent, while a third group had divisive content upranked, with an additional 3 percent inserted into their feeds.
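The three conditions can be sketched as a simple reordering of an engagement-ranked feed. This is an illustrative approximation only: the `Post` structure and reordering logic below are assumptions, and the real intervention adjusted exposure by fixed percentages rather than fully resorting the feed:

```python
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    engagement_rank: int  # position the default algorithm would assign
    divisive: bool        # flagged by the AAPA classifier


def rerank(feed: list[Post], condition: str) -> list[Post]:
    """Sketch of the study's three arms.

    'down' pushes flagged posts toward the bottom of the feed,
    'up' pulls them toward the top, and any other condition keeps
    the default engagement ordering (the control arm).
    """
    if condition == "down":
        # False sorts before True, so non-divisive posts come first.
        return sorted(feed, key=lambda p: (p.divisive, p.engagement_rank))
    if condition == "up":
        return sorted(feed, key=lambda p: (not p.divisive, p.engagement_rank))
    return sorted(feed, key=lambda p: p.engagement_rank)
```

Within each group, ties are broken by the original engagement rank, so the intervention changes only how visible divisive content is, not the relative order of everything else.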

Before, during, and after the study, participants reported their attitudes toward members of the opposing political party using a “feeling thermometer,” a standard measure in political science that asks respondents to rate their feelings toward a group on a warm-to-cold scale.

What the Experiment Found

The results were clear: participants who saw more divisive content felt “colder” toward members of the opposing party. Encouragingly for those advocating for pro-social design, participants who saw less divisive content reported warmer feelings toward counter-partisans.

What makes this study notable is that it provides field evidence that social media feeds can shape partisan attitudes, for better or worse. Like any experiment, however, it has limitations. For example, participants were prompted to spend at least 30 minutes per day on X, which makes the environment somewhat less natural than everyday use. Still, the fact that relatively small changes to users’ feeds produced statistically significant shifts in the attitudes of politically engaged partisans suggests that the design of social media feeds can have real political consequences. It also strengthens the case that efforts to design feeds that foster healthier political engagement are worth pursuing.

Implications for Feed Design

We see Piccardi et al.’s study as an important first step. As with any single study reporting statistically significant results, the findings need to be replicated before firm conclusions can be drawn. For that reason, we label the intervention as “convincing” rather than “validated” in our library.

There is also more to explore about how feeds might be redesigned to foster healthier forms of political disagreement. As mentioned earlier, in this study, Piccardi et al. identified divisive content using an LLM-based classifier for “Antidemocratic Attitudes and Partisan Animosity” (AAPA) and then downranked those posts. Other researchers are experimenting with different approaches. Jonathan Stray, for example, recently spoke with us during a Pro-Social event about his work testing alternative feed-ranking algorithms through browser extensions. Rather than simply downranking toxic content, some of these approaches aim to uprank political posts that model more constructive forms of disagreement. The results of that work are not yet public, but early evidence from a related study suggests the approach may hold promise.

About the Prosocial Design Network

The Prosocial Design Network researches and promotes prosocial design: evidence-based design practices that bring out the best in human nature online. Learn more at prosocialdesign.org.
