If implemented on a platform, all posts or comments would, by default, not appear to anyone else for a set amount of time (say, one hour). A few minutes before that time is up, the user would be notified and given the opportunity to edit what they said, cancel the post, or delay it again. The initial delay could be overridden (with a warning confirmation), in which case the post would be labeled an "instant post." Sorting algorithms could then downgrade such posts if spur-of-the-moment posts are deemed more hostile or of poorer quality.
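The posting flow described above amounts to a small state machine. The sketch below is one hypothetical way to model it; all names, the one-hour default, and the notification window are illustrative assumptions, not a specification from any real platform:

```python
from dataclasses import dataclass
from enum import Enum, auto


class PostState(Enum):
    PENDING = auto()     # held back, not yet visible to others
    PUBLISHED = auto()   # visible to everyone
    CANCELED = auto()    # withdrawn before publication


DEFAULT_DELAY = 3600   # hold period in seconds (~1 hour, per the text)
NOTIFY_BEFORE = 300    # remind the author a few minutes before publication


@dataclass
class DelayedPost:
    body: str
    created_at: float                   # epoch seconds when the post was written
    delay: int = DEFAULT_DELAY
    state: PostState = PostState.PENDING
    instant: bool = False               # True if the author overrode the delay

    def publish_time(self) -> float:
        return self.created_at + self.delay

    def should_notify(self, now: float) -> bool:
        """Time to remind the author they can edit, cancel, or delay again."""
        return (self.state is PostState.PENDING
                and self.publish_time() - NOTIFY_BEFORE <= now < self.publish_time())

    def edit(self, new_body: str) -> None:
        if self.state is PostState.PENDING:
            self.body = new_body

    def cancel(self) -> None:
        if self.state is PostState.PENDING:
            self.state = PostState.CANCELED

    def delay_again(self, extra: int = DEFAULT_DELAY) -> None:
        if self.state is PostState.PENDING:
            self.delay += extra

    def publish_instantly(self) -> None:
        """Override the hold; label the post "instant" so ranking can downgrade it."""
        if self.state is PostState.PENDING:
            self.instant = True
            self.state = PostState.PUBLISHED

    def tick(self, now: float) -> None:
        """Publish automatically once the hold period elapses."""
        if self.state is PostState.PENDING and now >= self.publish_time():
            self.state = PostState.PUBLISHED
```

For example, a post written at time 0 would trigger an author reminder around second 3300, could be edited until second 3600, and would then publish automatically with `instant` still `False`.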
This incentivizes users to think more carefully about what they're saying to a global audience. Some of the least productive comments are immediate, "road rage"-style responses, and the anger behind them tends to fade over time. A system that lets people reconsider their words after they have calmed down could lead to healthier conversations. It could also reduce regret over participation: people may feel better about what they have said, given that it remains permanently visible to a global audience.
A grade of Inference is given to proposed interventions that lack research of their own but that could plausibly work based on analogous studies, expert opinion, or first principles.
While this is, technically, the lowest evidentiary grade we can assign an intervention while still including it in the library, that is not meant to discourage. On the contrary! This grade is very much an invitation to explore and experiment further.
Do you think this intervention has benefits or drawbacks we've overlooked, or inaccuracies in what we've said here?
We always welcome more evidence and rigorous research to back up, debunk, or augment what we know.
If you want to be a part of that effort, we'd love to have your help! Email us.
Potential benefits: more conscientious posts; reduced misconceptions of people unlike oneself; humanizes users in others' eyes.