Voulez-Vous Parler Social Networks’ Content Moderation Policies?
My father told me once, “If you see something wrong happening in the world, you can either do nothing, or you can do something.” And I already tried nothing.
— Steve Trevor, Wonder Woman
Social networks, tech vendors ingesting social data, and brands running social listening practices might all be wondering the same thing right now: How far should content moderation go, and is there value in capturing consumers’ thoughts as-is? Equally terrible conversation occurs on all social networks, mainstream and niche alike. At some point in its growth trajectory, every social network must choose a path: do nothing or do something to regulate content on its platform. That decision trickles down to social tech vendors serving a variety of intelligence use cases — and ultimately to brands using that data for business decisions. So which is the right approach?
Why social networks (especially mainstream ones) “do something”:
- Social networks make money from advertisers. Companies or brands that market and advertise on social media need to feel like it’s a brand-safe environment. This requires the social networks to have content policies and guidelines to be deemed “brand safe.” During the 2020 social justice movement, many prominent brands reduced or pulled social advertising on Facebook and Instagram. While Facebook, Inc.’s revenue wasn’t impacted, thanks to its diversification among large and small advertisers, the PR headache and societal optics spurred the company to adopt more aggressive content moderation.
- False narratives begin off social media but make their way onto social networks, which ultimately get blamed. Fringe media websites, sometimes linked to foreign intelligence services, use single grains of truth to build robust, sensational false narratives. Overtly state-controlled media outlets often report on the fringe story to give the narrative legitimacy and to reach a greater audience. The false narrative then gets amplified and reshared on social media, where the back-and-forth commentary becomes toxic and inflammatory.
- Apple and Google app stores dominate distribution and require content moderation. All apps, including social networks’ apps, must abide by app marketplace policies. For example, each store has anti-malware tools to help prevent users from downloading malicious apps. While the anti-malware policies are relatively objective, other parts of the app store policies are subjective. Apple’s policy on content and behavior cites a former US Supreme Court justice: “‘I’ll know it when I see it.’ And we think that you will also know it when you cross it.” Unfortunately, civility and decorum are, like art, in the eye of the beholder.
Why other social networks “do nothing”:
(Note: Despite its promise of free and open discourse, Parler states that it does perform lightweight human content moderation. It uses a community jury of five randomly selected peers to review individual posts. But this is a far cry from the mainstream social networks’ more rigorous moderation processes, and Parler doesn’t employ any machine moderation.)
- There is a line between free speech on social networks and inciting violence. Mainstream social networks are not legally obligated to protect the former but fear the latter, and thus are increasingly cautious in regulating both. Parler’s CEO views them as starkly different: “You can’t stop people and change their opinions by force by censoring them. They’ll just go somewhere else and do it. So as long as it is legal, it’s allowed.”
- “Deplatforming” or “cancel culture” is simply modern whack-a-mole. Often, the effort to suppress harmful content backfires and draws more attention to the incendiary rhetoric, à la the Streisand effect. Furthermore, the internet is too vast for even Amazon, Google, Apple, and Facebook to stamp out divisive content.
- Tech companies helped create the social media morass and won’t undo it. “‘Our algorithms exploit the human brain’s attraction to divisiveness,’ read a slide from a 2018 presentation at Facebook, Inc. headquarters. ‘If left unchecked,’ it warned, Facebook would feed users ‘more and more divisive content in an effort to gain user attention & increase time on the platform.’” Don’t expect the tech companies to take on this Sisyphean task alone.
Our Take: Do Something — But Do It Definitively And Collaboratively
Left unchecked, social media’s negative externalities will continue to grow, fueling more division. The consequences of not moderating content outweigh the pitfalls of moderating. Ultimately, people, brands, and society itself need a safe environment online and off. Those in the business of hosting a social network require a strong code of conduct because social conversation has downstream impact on all constituents using social data. But today’s social networks aren’t technically media entities held to media regulations and thus design their own guidelines. They attempt to be proactive but often reactively adapt on the fly. While agility is helpful in a fast-changing world, having fluid policies that impact billions of users is like constantly moving the goalposts. It leaves users and brands unsure of what is acceptable vs. unacceptable on any given day.
Social networks won’t solve this problem alone. Marketers and advertisers, from large Fortune 100s to small mom-and-pop shops, can and should flex more muscle: Stop spending on social advertising, demand definitive and transparent social network guidelines, and let strong company values dictate whether or not to participate in social media. The threat intelligence community can also lend expertise on how disinformation flows through the internet and how to run threat modeling exercises. Consumers have the option to quit social media, and governments hold the power to enact regulation. Together, these groups are better positioned to define where free speech ends and harmful speech begins on social media.