We believe that digital products can be designed to help us better understand one another. That’s why we are building an international network of behavioral science and design experts to articulate a better, more prosocial future online, and to address the Web’s most glaring drawbacks, from misunderstandings to incitements to hatred.
We are the Prosocial Design Network, and our mission is to promote prosocial design: evidence-based design practices that bring out the best in human nature online.
With a compendium of prosocial design interventions and emerging behavioral research, the Prosocial Design Network is committed to helping make the Web a healthier, more respectful place to visit.
Many believed the World Wide Web would lead to worldwide peace: everyone would talk and understand each other, or so we thought. But it hasn’t turned out that way. Even one of the Internet’s inventors says its design is creating outrage and polarization, and the Web as we know it now, thirty years later, makes us more addicted, tribal, and afraid. Yet hope endures that some of the potential the Internet once had for understanding and meaningful connection can be restored.
We know from long-standing research that human decision making is very sensitive to the environment in which it happens. We also know that, as a result, even the subtlest design changes can influence human behavior. Therefore, it’s not a leap to believe—as the Prosocial Design Network does—that, through better design, we could change the Internet to be a better place. We can even imagine a future in which online interaction between strangers might evoke the same genuine empathy as if they met face-to-face.
We’ve found that many designers express interest in combating hate, harassment, and disinformation online, but find themselves without clear guidance on how to do so. To date, most existing interventions rely on common sense or intuition about what will work, and not empirical science.
Meanwhile, the COVID-19 outbreak has made communicating over the Internet the default for the near future. Without countervailing design interventions, our commonly used social interfaces risk aiding and abetting the spread of hateful rhetoric.
If you’re already familiar with terms like “nudges” or “choice architecture”, then you may have a sense of how such a transformation of our screens might work. If you’re not, consider an example from behavioral economists Richard Thaler and Cass Sunstein’s book Nudge: the school cafeteria.
Let’s say you’re in charge of the order in which food is placed in a high school cafeteria line. If you track the results, you’ll notice that the order changes how much of each food students put on their plates.
So, what do you do? You have choices based on the outcomes you want. If you want the students to eat healthier, you can arrange things so they take more healthy foods. If you want students to take foods that have less of an impact on the environment, you can arrange the foods accordingly. If you’re being paid by an eccentric chicken farmer to get students to eat more chicken, you can make that happen. What you cannot do is offer a neutral choice: the food must be presented in some order.
In a school cafeteria, the options are limited, as are the decisions its designer needs to make. But for digital products the design options are vast and they could look different for each individual user. This exponentially expands the number of choices that designers have to consider. Given that people’s decisions are sensitive to details, and no design is neutral—just as in the cafeteria—this not only opens opportunities for companies to offer new experiences, but also creates a growing responsibility for the designers of online platforms to avoid doing harm.
This is where the Prosocial Design Network comes in: We aggregate existing knowledge so that designers and teams are equipped to make these crucial choice architecture decisions, and can sort out what works from what doesn’t.
Prior to the 2020 elections, Twitter rolled out a feature to discourage users from sharing news articles without reading them, a habit that was helping false information proliferate.
In Twitter’s design intervention, users were prompted to read an article before retweeting it, thereby slowing down the interaction. Twitter reports that this design change increased the rate at which users click on, and ostensibly read, links before retweeting them by 40%.
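The logic behind such a friction intervention is simple. The sketch below is a hypothetical illustration, not Twitter’s actual implementation: it assumes we can track which links a user has opened, and it nudges rather than blocks — the user can still share after seeing the prompt.

```typescript
// A minimal sketch of a read-before-share friction check.
// All names here (onShareAttempt, openedLinks) are illustrative assumptions.

type ShareDecision = "share" | "prompt_to_read";

// openedLinks: URLs this user has clicked during the session (hypothetical state).
function onShareAttempt(url: string, openedLinks: Set<string>): ShareDecision {
  // Friction step: if the link was never opened, surface a gentle
  // "Want to read the article first?" prompt instead of sharing immediately.
  return openedLinks.has(url) ? "share" : "prompt_to_read";
}

// Example: the user tries to reshare an article they never opened.
const opened = new Set<string>(["https://example.com/read-article"]);
console.log(onShareAttempt("https://example.com/unread-article", opened)); // → "prompt_to_read"
console.log(onShareAttempt("https://example.com/read-article", opened));   // → "share"
```

The design choice worth noting is that the prompt adds a pause without removing the option to share — a nudge, in Thaler and Sunstein’s sense, rather than a restriction.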
Adding friction could be a promising class of interventions to preempt the spread of hoaxes and disinformation, which not only foment invective among users but can undermine public health and national security efforts, as we’ve seen with both COVID-19 denialism and militant conspiracy theories.
New research suggests that changes in choice architecture could go even further, promoting competences with long-lasting, educative effects. In a large Facebook study, showing users ten simple tips for checking the accuracy of news stories improved their discernment between true and false headlines by up to 27%, and the effect persisted, thereby fostering media literacy in the long run.
These represent the kinds of changes that we’re hoping to champion: not only ways to counter the disinformation that may prime users to hate one another, but also ways to promote a fellow feeling among them.
We partner with researchers, companies, and organizations on initiatives that spread awareness about the need for, and the application of, prosocial design interventions.
Our Prosocial Design Library houses the latest findings on prosocial design, offering a platform to advocate for, and verify, the research behind these interventions.