
Twitter testing a new feature that prompts people to reconsider Tweet replies containing harmful language

Source: Twitter

Twitter began testing a new feature on Wednesday that triggers a pop-up window when a user is about to send a potentially offensive reply to a Tweet, giving them a chance to review the post before it goes out.

“A new feature that prompts people to reconsider Tweet replies containing harmful language is seeing promising results, with people changing or deleting their replies over 30% of the time when prompted for English users in the U.S. and around 47% of the time for Portuguese users in Brazil,” Twitter wrote.

“We’ve seen and heard that this type of behavior is one of the main reasons people leave Twitter,” said Paul Lee, adding that this type of content in large volumes can be especially threatening to historically excluded communities and groups that have been shown to experience disproportionate levels of abuse. “They either don’t use the platform or avoid using it in certain ways because they fear targeted instances of marginally-to-more-severely abusive behavior that result in direct harm.”


That’s why Twitter has been experimenting with nudges: a prompt that appears when you try to send a reply containing harmful or offensive language.

Here’s how it works: when a person composes a potentially offensive reply to a Tweet and then clicks the “reply” button to publish it, a pop-up appears asking whether they would like to review the post before Tweeting it. The prompt offers three options: “Edit,” “Delete,” or “Tweet” as originally written. If the person clicks “Tweet,” the reply is posted exactly as they typed it. Clicking “Edit” lets them revise the text before sending, and “Delete” discards the draft entirely.
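To make that flow concrete, here is a minimal sketch in TypeScript of the decision logic described above. Everything in it is illustrative: the function names (isPotentiallyOffensive, showReviewPrompt, postReply) and the placeholder word list are assumptions for the sketch, not Twitter’s actual implementation or API.

```typescript
// Hypothetical sketch of the reply-prompt flow; names and logic are illustrative only.

type PromptChoice = "Tweet" | "Edit" | "Delete";

// Stand-in for whatever heuristic or model flags potentially harmful replies.
function isPotentiallyOffensive(text: string): boolean {
  const flaggedTerms = ["insult", "slur"]; // placeholder word list
  return flaggedTerms.some((term) => text.toLowerCase().includes(term));
}

// Stand-in for the pop-up offering "Edit," "Delete," or "Tweet."
async function showReviewPrompt(text: string): Promise<PromptChoice> {
  console.log(`Want to review this reply before Tweeting it?\n> ${text}`);
  return "Edit"; // a real UI would return whichever option the user tapped
}

async function postReply(text: string): Promise<void> {
  console.log(`Reply posted: ${text}`);
}

// The flow from the article: compose -> click "reply" -> prompt only if flagged.
async function handleReplyClick(draft: string): Promise<void> {
  if (!isPotentiallyOffensive(draft)) {
    await postReply(draft); // inoffensive replies go out with no prompt
    return;
  }
  const choice = await showReviewPrompt(draft);
  if (choice === "Tweet") {
    await postReply(draft); // posted exactly as originally written
  } else if (choice === "Edit") {
    console.log("Return the user to the composer to revise the reply.");
  } else {
    console.log("Draft discarded."); // "Delete" cancels the reply entirely
  }
}

handleReplyClick("this is an insult");
```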
