March 6, 2026

Pro-Social recap: Building Better Feeds

Jonathan Stray on how our feeds can support "better conflict."

Social media feeds are designed to engage, often with harmful effects. As much as they can entertain, they can also distract us, distort narratives, and in some cases actively deepen division. The open question is whether those same systems can be reoriented toward something more constructive: understanding, healthy disagreement, and the kind of pluralism a functioning democracy depends on.

Jonathan Stray has been thinking for a while about how our feeds can support "better conflict" and social cohesion. In one current project, Jonathan leads a team testing several "Prosocial Ranking" algorithms to see which, if any, can reduce dangerous levels of division. Another project is working to make those prosocial algorithms public. Through his work, he is exploring whether feeds can be designed not just to capture attention, but to support social cohesion without flattening disagreement.

In this Pro-Social conversation, Jonathan joined us to walk through how prosocial ranking might work in practice, what early experiments show, and what it would take to bring these ideas into real-world systems.

From engagement to “bridging”

Most feeds optimize for what performs best overall, says Jonathan, which often amplifies content that resonates strongly within like-minded groups. Prosocial ranking asks a different question: what content resonates across groups who usually disagree?

This is not about removing conflict or promoting bland neutrality. It is about identifying content that different sides find fair, informative, or worth engaging with, even if they interpret it differently. In that sense, the goal is not less disagreement, but better disagreement.

He described two ways platforms might do this, both being tested in the Prosocial Ranking Challenge. One approach focuses on “diverse approval,” where content is prioritized if it receives positive engagement from people across ideological divides. Another looks at “surprising validators,” surfacing ideas that challenge users but come from sources they trust, or highlighting unexpected agreement from opposing groups. The common thread is that what matters is how the content lands with multiple groups.
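To make the "diverse approval" idea concrete, here is a minimal sketch of how such a ranker could score posts: each post is scored by its *lowest* per-group approval rate, so content must land well with every group to rank highly. This is an illustrative toy, not the actual Prosocial Ranking Challenge algorithms; the group labels, data shape, and scoring rule are all assumptions for the example.

```python
from collections import defaultdict

def diverse_approval_scores(reactions):
    """Score posts by their minimum approval rate across groups.

    reactions: list of (post_id, group, approved) tuples, where `approved`
    is True for a positive reaction. Returns {post_id: score}.
    """
    # post_id -> group -> [approvals, total reactions]
    counts = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for post_id, group, approved in reactions:
        counts[post_id][group][0] += int(approved)
        counts[post_id][group][1] += 1

    scores = {}
    for post_id, groups in counts.items():
        rates = [approvals / total for approvals, total in groups.values()]
        # Taking the minimum rewards cross-group resonance: a post loved by
        # one side but rejected by the other scores low.
        scores[post_id] = min(rates)
    return scores

# Toy data: "p1" gets some approval from both groups; "p2" is one-sided.
reactions = [
    ("p1", "left", True), ("p1", "left", True),
    ("p1", "right", True), ("p1", "right", False),
    ("p2", "left", True), ("p2", "left", True),
    ("p2", "right", False), ("p2", "right", False),
]
scores = diverse_approval_scores(reactions)
# p1 scores 0.5 (right group approved half the time); p2 scores 0.0,
# so p1 ranks above p2 despite equal overall engagement.
```

An engagement-optimized feed would treat these two posts as equivalent (four positive reactions each); the minimum-across-groups rule is one simple way to encode the difference the paragraph above describes.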

What the experiments show

To test these ideas, Jonathan and collaborators ran the Prosocial Ranking Challenge, inviting teams to build working ranking algorithms and testing them in live environments using browser extensions that modified users’ feeds.

The results were meaningful. Some approaches led to small, statistically significant reductions in polarization. That may not sound dramatic, but polarization is a system-level outcome that is difficult to shift, so even small effects are notable.

Importantly, these approaches did not consistently reduce engagement. In some cases, they increased time spent on platforms, suggesting that designing for prosocial outcomes does not necessarily come at the expense of user attention.

From research to real-world systems

A key focus of the conversation was how to move from experiments to deployment. One path is through existing platforms, whether via internal changes, public pressure, or regulation. Another is through newer, open ecosystems like the AT Protocol, where ranking systems can be more modular and user-controlled.

Jonathan discussed work on open, AI-driven infrastructure that would allow people to shape their feeds directly, including prompting for more constructive or bridging content. This points toward a different model of social media, where users are not locked into a single algorithm but can choose systems aligned with their values.

Watch the full conversation with Jonathan Stray

About the Prosocial Design Network

The Prosocial Design Network researches and promotes prosocial design: evidence-based design practices that bring out the best in human nature online. Learn more at prosocialdesign.org.

Lend your support

A donation for as little as $1 helps keep our research free to the public.

Be A Donor