Twitter said it began enforcing new rules Monday aimed at filtering out “hateful” and “abusive” content on the social network, including messages that promote or glorify violence.
The platform has long faced criticism over how it handles hate groups and hateful content, criticism that led it to remove verification badges from prominent U.S. white nationalists last month.
“Specific threats of violence or wishing for serious physical harm, death, or disease to an individual or group of people is in violation of our policies,” the new rules state.
Also banned will be any content that “glorifies violence or the perpetrators of a violent act” as well as “hateful imagery” including logos or symbols associated with “hostility and malice” toward specific groups.
https://twitter.com/TwitterSafety/status/942756383593660416
Twitter also said it would suspend “accounts that affiliate with organizations that use or promote violence against civilians to further their causes.”
But Twitter said it would not cut off accounts belonging to military or government entities, and would consider exceptions “for groups that are currently engaging in (or have engaged in) peaceful resolution.”
The policies drew criticism last month when the company took no action over one of President Donald J. Trump‘s tweets that appeared to threaten violence against North Korea.
Twitter responded with a pledge to review its policy while noting that “newsworthiness” and public interest must be considered in deciding whether to take down a tweet.
The new policy marks the latest effort by social networks to remove content which promotes illegal or abusive activity while remaining open to dissent and controversial topics.
One account no longer visible on Twitter was that of Britain First leader Jayda Fransen, whose anti-Islam messages were retweeted by Mr. Trump; the account of another leader of the group, Paul Golding, was also unavailable.
Twitter declined to comment on individual accounts and had no immediate information on the number of users affected by the new enforcement, a spokeswoman said.