[Mockup: a draft comment topped by a prompt reading "Heads up, topics like this can get heated."]

Warning that topic is sensitive

Reduce likelihood of sharing triggering content




What It Is

When specific language about predefined topics is detected in a draft comment or post, a visual element warns the author that conversations around these topics tend to be difficult and emotionally intense.

For example, on the social media platform NextDoor, every Fourth of July in the USA there are frequent, heated arguments about fireworks. Some people love them; some people hate them. So, within a three-day range around July 4th, whenever NextDoor detects the word "firework" in a draft post, a just-in-time alert appears reminding the author that conversations about fireworks are hard and that people have very strong feelings about them.
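The detection logic described above can be sketched minimally. Everything here is a hypothetical illustration, not NextDoor's actual implementation: the topic table, keyword set, and date window are assumptions, and a real platform would use more robust topic classifiers rather than bare keyword matching.

```python
from datetime import date

# Hypothetical topic table: each topic has trigger keywords and an
# active (month, day) window during which the warning is shown.
SENSITIVE_TOPICS = {
    "fireworks": {
        "keywords": {"firework", "fireworks"},
        "start": (7, 3),  # window opens July 3
        "end": (7, 5),    # window closes July 5
    },
}

def should_warn(draft: str, today: date) -> bool:
    """Return True if the draft mentions a sensitive topic inside its active window."""
    words = {w.strip(".,!?").lower() for w in draft.split()}
    for topic in SENSITIVE_TOPICS.values():
        start = date(today.year, *topic["start"])
        end = date(today.year, *topic["end"])
        if start <= today <= end and words & topic["keywords"]:
            return True
    return False
```

A composer UI would call `should_warn` on each draft before submission and, when it returns `True`, render the just-in-time prompt instead of posting immediately.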

Civic Signal Being Amplified

Ensure user safety

When To Use It


What Is Its Intended Impact

  • Reduces the likelihood that someone initiates a harder conversation than they intended by broaching a topic they didn't realize was emotionally charged for others.
  • Prompts people to recognize that others hold strongly divergent views on a given topic, without their being confronted directly by angry users.

Evidence That It Works

This intervention is in use by large platforms, most notably Twitter. While it's reasonable to assume platforms' internal testing showed some measure of success, without further evidence, we cannot yet give it a higher grade of confidence.

Why It Matters

Special Considerations

Use of this intervention can feel insensitive when it is not combined with enforcement of community norms. For example, if someone is targeted with racist abuse on your platform, the abusers face no obvious repercussions, and the target then posts about their experience, that post could well be flagged as "intense," which can feel unjust to the person on the receiving end of both the original abuse and this automated warning.

Similarly, this may annoy people who specifically and intentionally participate in conversations online around sensitive topics to raise awareness. It may be helpful to offer an option to hide this warning for a certain amount of time.

It's also worth being transparent about which keywords trigger this popup. If you are not, people will come up with their own theories about which topics are and aren't included and why, likely including claims that your platform is targeting them personally.

Trolls may specifically seek out these topics in order to be provocative and contrarian. Such behavior could be tracked (percent of posts by users that triggered this label and were posted anyway) and acted upon by other interventions.
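The metric suggested above, the fraction of a user's warned drafts that were posted anyway, can be sketched as a small aggregation. The `(user_id, was_warned, was_posted)` event schema is an assumption for illustration; a real pipeline would read these fields from analytics logs.

```python
from collections import defaultdict

def per_user_override_rate(events):
    """Compute, per user, the fraction of warned drafts that were posted anyway.

    `events` is a hypothetical iterable of (user_id, was_warned, was_posted)
    tuples, one per draft. Users who never triggered a warning are omitted.
    """
    warned = defaultdict(int)
    posted = defaultdict(int)
    for user_id, was_warned, was_posted in events:
        if was_warned:
            warned[user_id] += 1
            if was_posted:
                posted[user_id] += 1
    return {user: posted[user] / warned[user] for user in warned}
```

A high override rate alone does not prove trolling (see the note above about awareness-raising participants), so this metric is best used as one input to other interventions rather than as an automatic trigger.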


This intervention entry currently lacks photographic evidence (screencaps, &c.)


This intervention entry currently lacks academic citations.


Further reading
