January 13, 2026

We Find It Convincing that Digital Self-Control Apps Work. Here’s What That Means.

We’ve just updated our confidence rating for Digital Self-Control Apps.

What We Updated and Why

We’ve just updated our confidence rating for Digital Self-Control Apps (DSCAs) from “tentative” to “convincing.” DSCAs are apps designed to help people rein in unwanted social media use by giving them practical tools and strategies to take back a bit of control. Neither the tools nor the research on them is new, but two new studies* address key limitations of earlier work. In this post, we’re going to geek out a little on why the evidence has strengthened, what tipped the balance, and why we’re now more confident that these tools can make a difference.

What the Evidence Already Showed

Before this update, multiple studies, including a meta-analysis, suggested that DSCAs reduce time spent on social media. Taken together, that body of research pointed in a promising direction. But it also came with enough methodological caveats that we rated DSCAs “tentative,” meaning the evidence was encouraging but more testing was needed before we could be confident.

What were our methodological caveats? Most prior studies shared at least one important limitation:

  1. Self-selection: Participants typically opted in, choosing to use a DSCA on their own. That made it hard to know whether observed effects reflected the app itself or participants’ prior motivation.
  2. Substitution: Many studies measured time spent on specific platforms, without capturing whether participants compensated by spending more time elsewhere.
  3. Short time horizons: Most studies tracked behavior for around three weeks, leaving open the possibility that effects were driven by novelty rather than by lasting change.
  4. Within-subjects designs: Participants switched between conditions, making it harder to separate the effects of the app from participants’ awareness of the study. We’re going to spend a little more time explaining this one, because it’s an important concept for interpreting this research.

In a between-subjects design, each participant is assigned to one condition and stays in that condition for the entire study. For example, one group might use a digital self-control app, while another group does not. In a within-subjects design, the same participants experience both conditions at different points in time. For example, a participant might use a digital self-control app for one period, then stop using it for another period, and researchers compare their behavior across those periods.

Within-subjects designs can be useful because they allow researchers to compare participants to themselves rather than to others, which can increase statistical power by generating more observations per participant. But they also introduce complications. When participants switch back and forth between conditions, they may become more aware of what the study is testing. That awareness can influence behavior: participants may consciously or unconsciously change how they act because they think they know what the researchers expect to see (researchers call these “demand effects”). While demand effects can also occur in between-subjects designs, they are typically more pronounced in within-subjects studies because participants are repeatedly exposed to the experimental conditions. This makes it harder to tell whether observed changes are caused by the intervention itself or by participants responding to the study context. For that reason, results from within-subjects studies can be harder to interpret, especially for behaviors like social media use that are sensitive to attention and self-monitoring.
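If it helps to see the trade-off concretely, here is a minimal sketch in Python of how the two designs estimate the same effect. All numbers are hypothetical and purely for illustration: the 50-minute “app effect,” the baseline usage levels, and the noise levels are made up, not figures from the studies.

```python
import random

random.seed(1)

APP_EFFECT = -50  # hypothetical "true" effect of the app, in minutes per day

def baseline_minutes():
    # Each person has their own typical daily usage level (made-up numbers)
    return random.gauss(150, 40)

def observed_minutes(person_baseline, uses_app):
    # A measurement is that person's typical level, plus the app effect
    # if they are using the app, plus day-to-day noise
    effect = APP_EFFECT if uses_app else 0
    return person_baseline + effect + random.gauss(0, 10)

# Between-subjects: each participant is measured in one condition only,
# so person-to-person differences in baseline usage add noise to the estimate
control = [observed_minutes(baseline_minutes(), False) for _ in range(50)]
treated = [observed_minutes(baseline_minutes(), True) for _ in range(50)]
print(sum(treated) / 50 - sum(control) / 50)  # noisy estimate of the -50 effect

# Within-subjects: each participant is measured in both conditions,
# so their personal baseline cancels out and the estimate is more precise
diffs = []
for _ in range(50):
    b = baseline_minutes()
    diffs.append(observed_minutes(b, True) - observed_minutes(b, False))
print(sum(diffs) / 50)  # tighter estimate of the same -50 effect
```

The within-subjects estimate comes out less noisy because each person serves as their own control, which is exactly the statistical appeal described above; the cost is that the same switching between conditions is what invites demand effects.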

What the New Studies Did Differently

Two recent studies directly addressed these concerns. A field experiment in Denmark recruited teenagers and paid them to participate, reducing the role of self-selection: many participants plausibly used the app because they were compensated, not because they were already motivated to change their habits. Participants also linked the app to all of their social media accounts, making substitution effects easier to rule out. Importantly, the study used a between-subjects design, with participants assigned to a single condition for the duration of the study.

That experiment was still relatively short-term (the intervention period was only four weeks), but a separate longitudinal study, drawing on more than 300 days of data from voluntary users of the same app, showed that reductions in social media use persisted over time and even grew. Taken together, these studies address the main sources of uncertainty that shaped our earlier rating.

Why This Was Enough to Update Our Rating

Together, these studies show that DSCAs reduce social media use even when participants are not intrinsically motivated, that reductions are not easily explained by substitution, and that effects are not limited to the first few weeks of use.

Under our evidence rating framework, a design pattern earns a “convincing” rating when there is strong evidence from at least one well-designed study, typically a field experiment, demonstrating that the pattern is effective. The new studies meet that bar. They use stronger research designs, rule out several alternative explanations that limited earlier work, and show sustained effects over time. On that basis, we now consider the evidence that DSCAs are effective to be convincing.

Not All DSCA Strategies Are Equally Effective

The new evidence also confirms something earlier research had suggested: not all DSCA strategies work equally well. The Danish study tested several strategies side by side. One prompted participants to periodically reflect on why they wanted to use social media. Another introduced friction by requiring participants to complete a brief pause (a short breathing exercise) before accessing social media. A third required participants to plan in advance how much time they intended to spend on social media and to actively re-plan once that time was up.

Interventions that introduced friction or required participants to plan their time led to a dramatic drop of roughly 33% in teens’ social media use. By contrast, the periodic reflection prompt showed no noticeable effect. This matters both for people deciding which tools to use and for assessing platform features that claim to help people manage their social media use.

What Would It Take to Go From “Convincing” to “Validated”?

Under our rating system, moving from a “convincing” to a “validated” classification would require strong evidence from at least two independent research teams. In practice, this means multiple high-quality studies with strong designs, sound statistical models, and high ecological validity, ideally field experiments conducted on real platforms. While the existing evidence includes one strong field experiment and complementary longitudinal data, achieving a “validated” rating would require a second strong study, led by a different set of authors, that replicates these effects.

What This Means for Platforms

For platforms, this means that simply offering digital wellbeing features is not enough: the design of those tools matters. Many social media platforms have begun integrating such tools, particularly for teens, and some closely resemble strategies the DSCA research shows are effective. For example, Instagram’s time limit feature and a similar time management feature on Threads introduce small barriers before continued use, closely mapping onto the friction-based strategies that proved highly effective in the DSCA research. YouTube’s “take a break” reminder and Pinterest’s in-app prompts more closely resemble periodic reflection interventions, which showed no noticeable effect on usage. Instagram’s breathing pause falls somewhere in between: it resembles the breathing exercise tested in the Danish study, but it includes a dismiss button that the tested version did not, so it is harder to judge its effectiveness one way or the other.

*One of the authors of both studies discussed in this post, David Grüning, serves as Chair of PDN’s Science Board. The studies were conducted independently of PDN, and our evidence rating reflects our standard evaluation criteria.

About the Prosocial Design Network

The Prosocial Design Network researches and promotes prosocial design: evidence-based design practices that bring out the best in human nature online. Learn more at prosocialdesign.org.

Lend your support

A donation of as little as $1 helps keep our research free to the public.

Be A Donor