When users hit “send” on their reply, they will be told if the words in their tweet are similar to those in posts that have been reported, and asked whether they would like to revise it before posting.
Twitter has long been under pressure to clean up hateful and abusive content on its platform, which is policed both by users flagging rule-breaking tweets and by automated technology.
“We’re trying to encourage people to rethink their behavior and rethink their language before posting because they often are in the heat of the moment and they might say something they regret,” Sunita Saligram, Twitter’s global head of site policy for trust and safety, said in an interview with Reuters.
Twitter’s policies do not allow users to target individuals with slurs, racist or sexist tropes, or degrading content.
The company took action against almost 396,000 accounts under its abuse policies and more than 584,000 accounts under its hateful conduct policies between January and June of last year, according to its transparency report.
Asked whether the experiment might instead give users a playbook for finding loopholes in Twitter’s rules on offensive language, Saligram said it was aimed at the majority of rule-breakers who are not repeat offenders.
Twitter said the experiment, the first of its kind for the company, will start on Tuesday and last at least a few weeks. It will run globally but only for English-language tweets.