October 5, 2023

Two simple things academics can do to translate their research into practice

This summer, PDN learned two things academics can do to improve their industry impact.

Julia Kamin, Ph.D., et al.

How can we take the wealth of academic research on prosocial digital interventions and help more of its insights, lessons, and evidence-based solutions be adopted by tech practitioners?

This question drives our work at Prosocial Design Network (PDN) and, we know, matters to the many researchers who want to see their work have a meaningful impact on society.

While there’s no simple answer to that burning question, over the summer, as part of a series of discovery conversations we conducted with practitioners, PDN learned two fairly simple steps academics can take to see more of their research translated into practice.

In those conversations, without any prompting, we kept hearing two things: before they integrate a new design feature (aka digital intervention), prosocial technologists want to know a) the intervention’s effect size and b) its impact on engagement.

Less p-value and more D-score, please

As social scientists, we're conditioned to be laser-focused on inference and statistical significance. Pretty much all the technologists we spoke to understand why we care about p-values, but that's not where their heads are at: they want to know, at the end of the day, how much an intervention will, say, decrease toxic language or the spread of misinformation on their platform. We can understand where they're coming from: tech companies can't throw every prosocial digital intervention at their users; they need to be economical in selecting the ones that will have maximum impact.

We also appreciate that reporting effect sizes has its own set of wrinkles. Depending on the research setting, effect sizes can be misleading, either over- or under-estimating true impact. Still, there are two places where we think researchers can start.

For one, researchers can do their best to describe findings in lay terms, such as presenting percentage (and percentage point) changes in outcomes. Second, researchers can more consistently report D-scores (i.e., Cohen's d: changes measured in terms of standard deviations). To be fair, D-scores are not something many practitioners are likely to grok today, but they are a way to begin communicating relative effect sizes across contexts; absent a clearer alternative, if all papers reported D-scores we could do a better job of comparing the effectiveness of different interventions.
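To make the distinction concrete, here is a minimal sketch, with entirely hypothetical numbers, of how a single finding can be reported both in lay terms (percentage and percentage point changes) and as a D-score (Cohen's d), assuming a simple two-group experiment:

```python
# Minimal sketch: reporting one effect in lay terms and as a D-score (Cohen's d).
# All means, SDs, and sample sizes below are hypothetical.
import math

# Hypothetical outcome: share of comments flagged as toxic per user session.
control_mean, control_sd, n_control = 0.12, 0.08, 500
treated_mean, treated_sd, n_treated = 0.09, 0.07, 500

# Lay terms: percentage-point change and percentage change.
pp_change = (treated_mean - control_mean) * 100                    # -3 percentage points
pct_change = (treated_mean - control_mean) / control_mean * 100    # -25%

# Cohen's d: difference in means divided by the pooled standard deviation.
pooled_sd = math.sqrt(((n_control - 1) * control_sd**2 +
                       (n_treated - 1) * treated_sd**2) /
                      (n_control + n_treated - 2))
cohens_d = (treated_mean - control_mean) / pooled_sd

print(f"{pp_change:+.1f} percentage points ({pct_change:+.0f}%), d = {cohens_d:.2f}")
```

In this made-up example, a 3 percentage point drop (a 25% reduction) works out to a d of roughly 0.4 in magnitude, a figure a practitioner can weigh directly against other candidate interventions.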

No engagement, no go

While we all want platforms to just do the right thing and adopt prosocial design (already), the reality is that platforms will resist changes that decrease engagement. Researchers can help grease the gears of adoption by, as much as possible, including measures of engagement in studies that test digital interventions.

If a study shows a rise in prosocial outcomes and no drop - or even an increase - in engagement, that intervention becomes all the more palatable to platforms. If, conversely, there’s evidence that an intervention will cause engagement to drop, then that may be a signal to us researchers that we can be more creative in thinking about non-zero-sum prosocial interventions.

As with D-scores, we see some studies already including engagement as an outcome variable. To ease adoption, we encourage all studies to be candid about effects on engagement.
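As an illustration (again with hypothetical numbers), a write-up might report effect sizes for the prosocial outcome and an engagement metric side by side, so practitioners can see at a glance that an intervention reduced toxicity without denting engagement:

```python
# Minimal sketch: reporting an intervention's effect on both a prosocial
# outcome and an engagement metric. All numbers are hypothetical.
import math

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference (Cohen's d) using a pooled SD."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) /
                          (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Hypothetical study results (treatment vs. control, n = 500 per arm).
outcomes = {
    # metric: (treated mean, treated sd, control mean, control sd)
    "toxic comments per session": (0.9, 0.6, 1.2, 0.7),
    "minutes on platform per day": (31.5, 12.0, 31.0, 12.5),
}

for metric, (mean_t, sd_t, mean_c, sd_c) in outcomes.items():
    d = cohens_d(mean_t, sd_t, 500, mean_c, sd_c, 500)
    print(f"{metric}: d = {d:+.2f}")
```

In this sketch the toxicity measure drops by a meaningful amount while engagement is essentially unchanged, which is exactly the kind of evidence that makes an intervention easy to say yes to.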

Aren’t we doing their homework for them?

We can imagine, and have heard, researchers grouse that platforms have a responsibility to do good by society and that it shouldn't be on researchers to convince them to do what's right. To that understandable position, we have two thoughts.

First, don't take on these suggestions for the sake of big tech platforms; think of them instead as helping out a) employees at big tech firms who are advocating for prosocial design from within and b) small to mid-size platforms that don't have large research budgets. Even after big tech gutted its trust & safety teams, legions of prosocial technologists are still fighting from within for positive change; the more we can shore up their case with evidence that a design change is both effective and won't hurt engagement, the easier we make it for them to convince their teams to adopt prosocial design. Meanwhile, small and mid-size platforms, many of which are trying to create healthier versions of the social media titans, are more open to adopting prosocial digital interventions, but they may not have the resources to test internally which design changes are most effective; they will also, reasonably, not want to hurt engagement.

The second thought is one we behavioral scientists all know well: we can wait for humans to adopt prosocial behavior on their own, but it also helps to "nudge" them along or "boost" change by removing friction and creating the conditions for adoption. In this case, the humans are product managers and other decision makers at tech firms; by helping them see the upsides of adopting prosocial digital interventions and assuring them there are no downsides, we make it easier for them to do the right thing.

Julia Kamin (she/her) is a researcher based in New York City. She currently works with Civic Health Project, developing a measurement tool for organizations to gauge their impact on reducing polarization.

About the Prosocial Design Network

The Prosocial Design Network researches and promotes prosocial design: evidence-based design practices that bring out the best in human nature online. Learn more at prosocialdesign.org.
