Illustration of the U.S. social network Instagram’s logo on a tablet screen.
Meta apologized on Thursday and said it had fixed an “error” that caused some Instagram users to see a flood of violent and graphic content recommended on their personal “Reels” pages.
“We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended. We apologize for the mistake,” a Meta spokesperson said in a statement shared with CNBC.
The statement comes after a number of Instagram users took to various social media platforms to voice concerns about a recent influx of violent and “not safe for work” content recommendations.
Some users claimed they saw such content even with Instagram’s “Sensitive Content Control” set to its highest moderation level.
According to Meta policy, the company works to protect users from disturbing imagery and removes content that is particularly violent or graphic.
Prohibited content may include “videos depicting dismemberment, visible innards or charred bodies,” as well as “sadistic remarks towards imagery depicting the suffering of humans and animals.”
However, Meta says it does allow some graphic content if it helps users to condemn and raise awareness about important issues such as human rights abuses, armed conflicts or acts of terrorism. Such content may come with limitations, such as warning labels.
On Wednesday night in the U.S., CNBC was able to view several posts on Instagram Reels that appeared to show dead bodies, graphic injuries and violent assaults. The posts were labeled “Sensitive Content.”
According to Meta’s website, it uses internal technology and a team of more than 15,000 reviewers to help detect disturbing imagery.
The technology, which includes artificial intelligence and machine learning tools, helps prioritize posts and remove “the vast majority of violating content” before users even report it, the website states.
Furthermore, Meta works to avoid recommending content on its platforms that may be “low-quality, objectionable, sensitive or inappropriate for younger viewers,” it adds.
Shifting policy
The error with Instagram Reels, however, comes after Meta announced plans to update its moderation policies in an effort to better promote free expression.
In a statement published on Jan. 7, the company said that it would change the way it enforces some of its content rules in order to reduce mistakes that had led to users being censored.
Meta said this included shifting its automated systems from scanning for “all policy violations” to a focus on “illegal and high-severity violations, like terrorism, child sexual exploitation, drugs, fraud and scams.” For less severe policy violations, the company added that it would rely on users to report issues before taking action.
Meanwhile, Meta said that its systems were demoting too much content based on predictions that it “might” violate standards and that it was in the process of “getting rid of most of these demotions.”
CEO Mark Zuckerberg also announced that the company would allow more political content and replace its third-party fact-checking program with a “Community Notes” model, similar to the system on Elon Musk’s platform X.
The moves have widely been seen as an effort by Zuckerberg to mend ties with U.S. President Donald Trump, who has criticized Meta’s moderation policies in the past.
According to a Meta spokesperson on X, the CEO visited the White House earlier this month “to discuss how Meta can help the administration defend and advance American tech leadership abroad.”
As part of a wave of tech layoffs in 2022 and 2023, Meta laid off 21,000 employees, nearly a quarter of its workforce, in cuts that affected much of its civic integrity and trust and safety teams.