Facebook announced Tuesday that it would soon start removing content that violates the site’s terms of service.
The company has previously come under fire for not acting against content that some users found deeply upsetting; now it says that will change.
“Facebook’s policy is not to remove content that is personally identifiable, such as photos or videos,” a Facebook spokesperson told Ars.
“It is our goal to create a safe space where our community can feel safe and secure online.
We are working on this new policy to make that happen.”
Content that violates the new rule will no longer appear on Facebook, including posts made before the policy change.
Facebook has a long history of allowing its users to post controversial content on the site.
It has long struggled with the issue, which has often resulted in posts being removed from the site or from users’ news feeds.
Facebook had previously said it had no plans to update the policy, but now says it is “actively working to ensure this new approach works for everyone.”
The new policy is a departure from the company’s previous approach, under which violating posts remained on the platform until Facebook itself reviewed and removed them.
Facebook’s announcement comes in the wake of a number of controversies surrounding the social network, including the firing of its chief operating officer last month as the company struggles to make money on advertising. That firing has been the subject of widespread discussion, as has the company more generally.
CEO Mark Zuckerberg has said publicly that Facebook will hire more than 100 additional people to help, but it’s unclear whether that hiring will be enough to stem the tide of bad publicity.