Analysts and civil rights advocates have raised concerns about how Mark Zuckerberg’s plans to dismantle Meta’s fact-checking program could disrupt advertising economies, digital job markets and broader democratic discourse.
In recent months, Meta has sought to attract creators with bonus incentives and new AI tools, apparently positioning itself to capture an influx of creators amid TikTok’s pending ban. The full impact of Meta’s X-inspired “community notes” approach to moderation remains to be seen, but social media creators across Meta’s properties are likely to feel the change acutely.
For underrepresented creators, the change means greater risk within an already precarious creator economy.
Playing by the rules of the platform
Creators’ dependence on Meta’s platforms — Facebook, Instagram, Threads — means they’re vigilant about upholding the companies’ community standards, which outline violent, deceptive, sexually exploitative and otherwise prohibited content. But these rules are far from clear-cut, and their enforcement is uneven.
Here, Zuckerberg’s admission that Meta’s moderation systems sometimes get it “wrong” rings true. As noted in Meta’s announcement on Tuesday, “Too much harmless content gets censored, too many people find themselves mistakenly locked in ‘Facebook jail,’ and we’re often too slow to respond when they do.” Creators report many such punitive errors: an ankle wrongly flagged as “nudity,” or a joke about “kidnapping a book” interpreted as child endangerment.
Many creators I’ve interviewed over the years have experienced what they describe as unfair punishment. As one social media personality with 4 million followers told me after her Instagram account was banned for allegedly over-sexualized content: “The losses were devastating. Tearful. Emotional. Scary. Because you’re like, ‘I’ve become invisible.'” For those who have successfully monetized their accounts, the emotional toll is compounded by financial loss.
But if excessive censorship is one concern for creators, a far greater risk, especially for those from marginalized communities, is navigating an independent career on a no-holds-barred internet. During my research, I heard accounts of identity-based harassment, trolling, ridicule and threats. Worse yet, creators reported that the platforms’ systems put them on the “wrong side of the algorithm” — a phrase they use to describe exposure to antagonistic audiences.
Meta’s proposed solution to such harms is a heavy reliance on its community to report violations. Certainly, shared systems of voluntary governance have the potential to manage “social relations, conflicts, and civil liberties on the Internet,” as my colleague J. Nathan Matias wrote in a 2019 study of Reddit. But given how few cultural norms are shared across Meta’s varied communities of creators, this approach seems unlikely to work in this situation.
One possible result is that creators will self-censor to avoid harassment. Given how often marginalized creators already face sexist, racist and/or transphobic language, Meta’s loosening of restrictions around sensitive topics like gender and immigration is particularly alarming. Wired reported on recent changes to the Hateful Conduct policy, including the new statement: “We do allow allegations of mental illness or abnormality when based on gender or sexual orientation.”
There is also likely to be an increase in mass reporting, where users flood the reporting system with complaints in an effort to get a targeted creator’s content removed. This would open the floodgates for strategic attacks, much like the organized brigading campaigns that political groups have used to express discontent.
To be sure, the battle for visibility is already prevalent in the creator economy. As one cosplay creator shared with me on Instagram: “If someone posts a video and a bunch of community trolls don’t like it… they mass report that creator [and] their stuff gets removed when nothing they say goes against any of our community guidelines.” What Colten Meisner, assistant professor at North Carolina State University, describes as “weaponized platform governance” is particularly problematic for creators who advocate for marginalized communities — such as people of color, LGBTQ+ people and people with disabilities.
If more vulnerable creators are pushed to self-censor, their more privileged counterparts may be emboldened to create ever more sensational content. Thus, we can expect to see a rise in so-called rage baiting, content deliberately crafted to provoke anger in audiences. Rage bait trades on the “there’s no such thing as bad publicity” logic of the attention economy. And there is something of a science to the tactic, as studies of emotional contagion make clear.
Another Adpocalypse?
The question, of course, is how Meta’s lighter-touch moderation will affect advertising deals within the lucrative creator economy. As media scholar Siva Vaidhyanathan noted Wednesday in a critique of Zuckerberg, “No reputable company wants its product or service placed next to a horrific image of sexual exploitation, violence or bigotry.” Is another Adpocalypse, in which advertisers withdraw funding en masse, on the horizon?
As creators continue to expand their influence in news and politics — the 2024 presidential campaign was dubbed the “influencer election” — the consequences of these changes deserve careful reflection. Not only do they affect the livelihoods of a professional class of creators who are just beginning to build labor and legal structures, but they also shape the attention agendas of everyone who relies on them for advice, information and entertainment. What Meta — and X, for that matter — tout as radical free speech is likely to entrench existing inequalities within the creator economy.