We spoke with Manoel Horta Ribeiro about his research testing Reddit's Post Guidance.
Manoel Horta Ribeiro, Assistant Professor at Princeton, joined us at a recent Pro-Social to talk about his collaborative research with Reddit testing Post Guidance, a tool that gives users in-time feedback and a chance to course correct if their post might break a subreddit rule.
Going into the conversation, we knew we would learn about a great example of prosocial design research in action, but we also got an expert mini-course on Reddit and the challenges of content moderation. For anyone who doesn't spend time on Reddit: the platform acts as a "digital kingdom", to use Manoel's words, with its own set of site-wide rules that govern thousands of "fiefdoms" - or subreddits - each steered by volunteer mods who also set their own community-specific rules.
Those mods face layers of trade-offs as they weigh their moderation policies, strategies and tools. On one hand, they aim to nurture users and encourage activity; on the other, maintaining quality can require treating users punitively by removing their posts or restricting engagement. They also depend heavily on automod (just what it sounds like: an automated moderation tool) to efficiently remove rule-breaking content, but automod can be a "blunt tool" that misses context and nuance a human mod would pick up on.
Manoel positions Post Guidance as a way to help mods sidestep many of those trade-offs. An overly simple way to think of Post Guidance is as in-time automod: rather than removing posts that break a subreddit rule (which mods get to specify) after the fact, Post Guidance pops up while users are writing a post and explains how they are breaking a rule. By giving that in-time feedback, Post Guidance hands users the reins to modify their post - in essence letting them take on the role of a moderator, with the benefit of understanding context. In that way, it is also less punitive: if users are able to bring their post into compliance, they avoid getting a slap on the wrist.
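To make the mechanism concrete, here is a minimal sketch of the general idea in Python, assuming mods express rules as pattern-plus-message pairs; the rule format, field names and messages below are our own illustration, not Reddit's actual implementation.

```python
import re

# A hypothetical, simplified sketch of the Post Guidance idea. Mods pair a
# pattern with a guidance message, and the composer checks the draft while
# the user is still writing it.
RULES = [
    {
        "field": "title",
        "pattern": re.compile(r"^[A-Z\s\d\W]+$"),  # title written in all caps
        "guidance": "Titles in all caps are removed here; please rewrite yours.",
    },
    {
        "field": "body",
        "pattern": re.compile(r"bit\.ly|tinyurl\.com"),  # shortened links
        "guidance": "This community doesn't allow link shorteners; use the full URL.",
    },
]

def check_draft(title: str, body: str) -> list[str]:
    """Return the guidance message for every rule the draft currently trips."""
    draft = {"title": title, "body": body}
    return [r["guidance"] for r in RULES if r["pattern"].search(draft[r["field"]])]

# Instead of the post going up and being removed after the fact, the user sees
# this feedback in the composer and can edit before submitting.
print(check_draft("CHECK OUT THIS DEAL", "Full details at bit.ly/xyz"))
# -> both guidance messages
```

The point of the design, as Manoel describes it, is that the feedback arrives while the user can still act on it, rather than as a removal notice after submission.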
What's more, as Manoel's research shows, Post Guidance lets mods get around the quantity vs. quality trade-off. In a field experiment run with 33 subreddits that trialed Post Guidance, Manoel and his co-authors found that users who were randomly assigned to have the tool pop up ("treated" users) had more successful posts (i.e., posts that were not subsequently removed) and more high-quality posts (i.e., posts that were upvoted by other users).
In the discussion and Q&A, Manoel dug a little deeper into some of the nuances of those positive outcomes. Paradoxically, treated users posted less, but because far fewer of their posts broke subreddit rules, their overall number of successful (i.e., non-removed) posts increased. Manoel suggested there might be a two-pronged effect: bad-faith users may be choosing not to post, or even sometimes finding ways to end-run automod, while good-faith users are adapting in positive ways, and their total number of successful posts makes up for the dip in initial posts.
As may be obvious, we're stanning over Post Guidance, but that's because it locks into so many of the principles central to prosocial design - including giving users agency, being more proactive than punitive, helping communities provide local context and, of course, being evidence-based.
Finally, since we had an audience with Manoel and knew he has had successful collaborations with tech companies, we asked him whether he had any tips for other researchers. We share his answers for anyone looking to work with platforms:
Watch the video below for the full conversation. Read Manoel's full study here.
The Prosocial Design Network researches and promotes prosocial design: evidence-based design practices that bring out the best in human nature online. Learn more at prosocialdesign.org.
A donation for as little as $1 helps keep our research free to the public.