(Reuters) – Twitter will test sending users a prompt when they reply to a tweet using “offensive or hurtful language,” in an effort to clean up conversations on the social media platform, the company said in a tweet on Tuesday.
Twitter has long been under pressure to clean up hateful and abusive content on its platform, which is policed by users flagging rule-breaking tweets and by technology.
Twitter’s policies do not allow users to target individuals with slurs, racist or sexist tropes, or degrading content.
Asked whether the experiment might instead give users a playbook for finding loopholes in Twitter's rules on offensive language, Sunita Saligram, Twitter's global head of site policy for trust and safety, said it was targeted at the majority of rule-breakers, who are not repeat offenders.
Twitter said the experiment, the first of its kind for the company, will start on Tuesday and last at least a few weeks.
