These are tough times for Free Speech purists–of whom I am one.
I have always been persuaded by the arguments that support freedom of expression. In a genuine marketplace of ideas, I believe–okay, I want to believe–that better ideas will drive out worse ones. More compelling is the argument that, while some ideas may be truly dangerous, giving government the right to decide which ideas get expressed and which ones don’t would be much more dangerous.
But Facebook and other social media sites are really testing my allegiance to unfettered, unregulated–and amplified–expression. Recently, The Guardian reported that more than 3 million followers and members support the crazy QAnon conspiracy on Facebook, and their numbers are growing.
For those unfamiliar with QAnon, it
is a movement of people who interpret as a kind of gospel the online messages of an anonymous figure – “Q” – who claims knowledge of a secret cabal of powerful pedophiles and sex traffickers. Within the constructed reality of QAnon, Donald Trump is secretly waging a patriotic crusade against these “deep state” child abusers, and a “Great Awakening” that will reveal the truth is on the horizon.
Brian Friedberg, a senior researcher at the Harvard Shorenstein Center, is quoted as saying that Facebook is a “unique platform for recruitment and amplification,” and that he doubts QAnon would have been able to happen without the “affordances of Facebook.”
Facebook isn’t just providing a platform to QAnon groups–its algorithms are actively recommending them to users who may not otherwise have been exposed to them. And it isn’t only QAnon. According to the Wall Street Journal, Facebook’s own internal research in 2016 found that “64% of all extremist group joins are due to our recommendation tools.”
If the problem were limited to QAnon and other conspiracy theories, it would be troubling enough, but it isn’t. A recent essay in Time Magazine by a Silicon Valley insider named Roger McNamee began with an ominous paragraph:
If the United States wants to protect democracy and public health, it must acknowledge that internet platforms are causing great harm and accept that executives like Mark Zuckerberg are not sincere in their promises to do better. The “solutions” Facebook and others have proposed will not work. They are meant to distract us.
McNamee points to a predictable cycle: platforms are pressured to “do something” about harassment, disinformation or conspiracy theories. They respond by promising to improve their content moderation. But–as the essay points out–none have succeeded at limiting the harm from third-party content, and so the cycle repeats. (As he notes, banning Alex Jones removed his conspiracy theories from the major sites, but did nothing to stop the flood of similar content from other people.)
The article identifies three reasons content moderation cannot work: scale, latency, and intent. Scale refers to the sheer volume of content: hundreds of millions of messages are posted each day. Latency is the time it takes even automated moderation to identify and remove a harmful message. The most important obstacle, however, is intent–a/k/a the platform’s business model.
The content we want internet platforms to remove is the content most likely to keep people engaged and online–and that makes it exceptionally valuable to the platforms.
As a result, the rules for AI and human moderators are designed to approve as much content as possible. Alone among the three issues with moderation, intent can only be addressed with regulation.
McNamee argues we should not have to accept disinformation as the price of access, and he offers a remedy:
At present, platforms have no liability for the harms caused by their business model. Their algorithms will continue to amplify harmful content until there is an economic incentive to do otherwise. The solution is for Congress to change incentives by implementing an exception to the safe harbor of Section 230 of the Communications Decency Act for algorithm amplification of harmful content and guaranteeing a right to litigate against platforms for this harm. This solution does not impinge on First Amendment rights, as platforms are free to continue their existing business practices, except with liability for harms.
I’m not sure I share McNamee’s belief that his solution doesn’t implicate the First Amendment.
The (relative) newness of the Internet and social media creates uncertainty. What, exactly, are these platforms? How should they be classified? They aren’t traditional publishers–and third parties’ posts aren’t their “speech.”
As the 2020 campaigns heat up, more attention is being paid to how Facebook promotes propaganda. Its refusal to remove or label clear lies from the Trump campaign has prompted advertisers to temporarily boycott the platform. Facebook may react by tightening some moderation, but ultimately, McNamee is right: that won’t solve the problem.
One more conundrum of our Brave New World…