Meta ditches independent fact-checkers in favor of user-led "community notes" system

Bangla Post Desk
Published: 08 January 2025, 01:48 pm
Meta has announced it will abandon its long-standing reliance on independent fact-checkers for Facebook and Instagram, opting instead for a user-driven "community notes" system. The change, unveiled by Meta CEO Mark Zuckerberg on Tuesday, marks a significant shift in the company’s approach to content moderation, as it moves toward a more hands-off model.

In a video accompanying a blog post, Zuckerberg explained that the decision to phase out third-party moderators was motivated by concerns over perceived political bias and a desire to return to Meta’s foundational principles of free expression. "We believe it’s time to get back to our roots," Zuckerberg said. "Third-party fact-checkers have become too politically biased, and we need to let people decide for themselves."

The move follows mounting criticism from political figures, particularly from U.S. President-elect Donald Trump and his Republican allies, who have accused Meta of censoring right-wing viewpoints under the guise of fact-checking. Trump praised Zuckerberg’s decision, calling it a step in the right direction. "Meta has come a long way," he remarked during a news conference. When asked if the decision was a response to past threats he had made to Zuckerberg, Trump replied, "Probably."

Meta's fact-checking program, in place since 2016, involves sending flagged posts to independent organizations for assessment, with posts deemed inaccurate often labeled or demoted in users' feeds. This policy will now be replaced by community notes, a mechanism reminiscent of the one introduced by social media platform X (formerly Twitter), which allows users to collaborate on adding context to controversial posts.

The shift is expected to roll out first in the U.S., with no immediate changes planned for the U.K. or the EU. Meta has assured users that there will be no change in how harmful content related to self-harm, suicide, or eating disorders is treated.

Joel Kaplan, Meta’s new global affairs head and a prominent Republican, defended the move, stating that the previous reliance on fact-checkers had often led to censorship rather than clarity. "Too much harmless content gets censored, and too many innocent people end up in 'Facebook jail'," he wrote in the blog post. However, Meta acknowledged the risks associated with the change, with Zuckerberg noting that it would likely result in fewer harmful posts being caught, but also fewer innocent accounts being mistakenly banned.

Campaigners against hate speech expressed concern over the policy change, warning that it could open the floodgates to disinformation. Ava Lee, from Global Witness, criticized the move as a political maneuver to align with the incoming Trump administration, which has long criticized Meta’s content moderation practices. "This decision has harmful implications for the platform’s responsibility in combating hate speech and disinformation," Lee stated.

Meta’s shift comes amid broader trends in the tech industry, where content moderation has become increasingly politicized. Critics of the new direction argue that Meta’s decision could set a dangerous precedent for other platforms. Kate Klonick, an expert on internet law, described the move as a "radical swing" in content governance, signaling a shift towards prioritizing free speech over safety mechanisms.

As Zuckerberg prepares to meet with Trump and other key figures before the president-elect's inauguration, Meta's policy changes appear to be part of a broader effort to rebuild relationships with the incoming administration. With Meta also planning to add Dana White, a close Trump ally, to its board of directors, the company is clearly signaling a shift in its political priorities.

The change has sparked debate about the future of online speech. Critics warn it could lead to a rise in hate speech and misinformation, while supporters argue it is a necessary step toward protecting free expression. As Meta rolls out the new approach, it remains to be seen how it will affect the platform's content moderation and its relationship with users worldwide.