Falsely Shouting “Fire” In The Digital Theater

Tom Wheeler is one of the savviest observers of the digital world.

Now at the Brookings Institution, Wheeler headed the FCC during the Obama administration, and he recently authored an essay titled “The Consequences of Social Media’s Giant Experiment.” That essay–like many of his other publications–considered the enormous public impact of legally private enterprises.

The “experiment” Wheeler considers is the shutdown of Trump’s disinformation megaphones: most consequential, of course, were the Facebook and Twitter bans of Donald Trump’s accounts, but it was also important that Parler–a site for rightwing radicalization and conspiracy theories–was effectively shut down for a time by Amazon’s decision to cease hosting it and by Google’s and Apple’s decisions to remove it from their app stores. (I note that, since Wheeler’s essay, Parler has found a new hosting service–and it is Russian-owned.)

These actions are better late than never. But the proverbial horse has left the barn. These editorial and business judgments do, however, demonstrate that the companies have ample ability to act conscientiously to protect the responsible use of their platforms.

Wheeler addresses the conundrum created by Section 230 of the Communications Decency Act, which insulates social media companies from responsibility for making the sorts of editorial judgments that publishers of traditional media make every day. As he says, these 26 words are the heart of the issue: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

As he points out,

If you are insulated from the consequences of your actions and make a great deal of money by exploiting that insulation, then what is the incentive to act responsibly?…

The social media companies have put us in the middle of a huge and explosive lab experiment where we see the toxic combination of digital technology, unmoderated content, lies and hate. We now have the answer to what happens when these features and large profits are blended together in a connected world. The result not only has been unproductive for civil discourse, it also represents a danger to democratic systems and effective problem-solving.

Wheeler repeats what most observers of our digital world have recognized: these platforms have the technological capacity to exercise the same sort of responsible moderation that we expect of traditional media. What they lack is the will–because more responsible moderating algorithms would eat into their currently large–okay, obscene–profits.

The companies’ business model is built around holding users’ attention so that the platforms can display more paid messages. Delivering what users want to see–the more outrageous the better–holds that attention and rings the cash register.

Wheeler points out that we have mischaracterized these platforms–they are not, as they insist, tech enterprises. They are media, and should be required to conform to the rules and expectations that govern media sources. He has other suggestions for tweaking the rules that govern these platforms, and they are worth consideration.

That said, the rise of these digital giants creates a bigger question and implicates what is essentially a philosophical dilemma.

The U.S. Constitution was intended to limit the exercise of power; it was crafted at a time in human history when governments held a clear monopoly on that power. That is arguably no longer the case–and it isn’t simply social media giants. Today, multiple social and economic institutions have the power to pose credible threats both to individual liberty and to social cohesion. How we navigate the minefield created by that reality–how we restrain the power of theoretically “private” enterprises– will determine the life prospects of our children and grandchildren.

At the very least, we need rules that will limit the ability of miscreants to falsely shout fire in our digital environments.

Information Silos And The First Amendment

The First Amendment contemplates and protects a “marketplace of ideas.” We have no precedent for an information environment in which there is no marketplace–no “agora” where different ideas and perspectives contend with each other for acceptance.

What we have instead are information “silos.” A recent column in the New York Times quoted Yale professor Robert Post’s observation that people have always been crazy, but the internet has allowed them to find each other.

In those silos, they talk only to each other.

Social media has enabled the widespread and instantaneous transmission of lies in the service of political gain, and we are seeing the results. The question is: what should we do?

One set of scholars has concluded that the damage being done by misinformation and propaganda outweighs the damage of censorship. Rick Hasen, perhaps the preeminent scholar of election law, falls into that category:

Change is urgent to deal with election pathologies caused by the cheap speech era, but even legal changes as tame as updating disclosure laws to apply to online political ads could face new hostility from a Supreme Court taking a libertarian marketplace-of-ideas approach to the First Amendment. As I explain, we are experiencing a market failure when it comes to reliable information voters need to make informed choices and to have confidence in the integrity of our electoral system. But the Court may stand in the way of necessary reform.

I don’t know what Hasen considers “necessary reform,” but I’m skeptical.

I have always been a First Amendment purist, and I still agree with the balance struck by the Founders, who understood that–as pernicious and damaging as bad ideas can be–allowing government to determine which ideas get voiced is likely to be much more dangerous. (As a former ACLU colleague memorably put it, “Poison gas is a great weapon until the wind shifts.”)

That said, social media platforms aren’t government. Like brick-and-mortar private businesses, they can insist on certain behaviors by their customers. And like other private businesses, they can and should be regulated in the public interest. (At the very least, they should be required to apply their own rules consistently. People expressing concern/outrage over Twitter’s ban of Trump should be reminded that he would have encountered that ban much earlier had he been an ordinary user. Trump had flouted Twitter and Facebook rules for years.)

The Times column suggests we might learn from European approaches to issues of speech, including falsehoods and hate speech. Hate speech can be banned in the U.S. only if it is intended to incite imminent violence and is actually likely to do so. Europeans have decided that hate speech isn’t valuable public discourse–that racism isn’t an idea; it’s a form of discrimination.

The underlying philosophical difference here is about the right of the individual to self-expression. Americans value that classic liberal right very highly — so highly that we tolerate speech that might make others less equal. Europeans value the democratic collective and the capacity of all citizens to participate fully in it — so much that they are willing to limit individual rights.

The First Amendment was crafted for a political speech environment markedly different from today’s, as Tim Wu has argued. Government censorship was then the greatest threat to free speech. Today, those–including Trump–“who seek to control speech use new methods that rely on the weaponization of speech itself, such as the deployment of ‘troll armies,’ the fabrication of news, or ‘flooding’ tactics that humiliate, harass, discourage, and even destroy targeted speakers.”

Wu argues that Americans can no longer assume that the First Amendment is an adequate guarantee against malicious speech control and censorship. He points out that the marketplace of ideas has become corrupted by technologies “that facilitate the transmission of false information.”

American courts have long held that the best test of truth is the power of an idea to get itself accepted in the competition that characterizes a marketplace. They haven’t addressed what happens when there is no longer a functioning market–when citizens confine their communicative interactions to sites that depend for their profitability on confirming the biases of carefully targeted populations.

I certainly don’t think the answer is to dispense with–or water down–the First Amendment. But that Amendment was an effort to keep those with power from controlling information. In today’s information environment, platforms like Twitter and Facebook are as powerful and influential as government. Our challenge is to somehow rein in intentional propaganda and misinformation without throwing the baby out with the bathwater.

Any ideas how we do that?

Elementary Ethics

Yesterday, I posted about generalized social trust–its importance, and some of the reasons for its recent decline. Today, I want to focus on the role played by ethical behavior–in this case, the lack of ethical behavior–in the distressing and accelerating erosion of social trust.

One of the most obvious ethical principles is avoidance of conflicts of interest. I believe it was John Locke who noted that a person (okay, back then he said “a man”) could not be the judge in his own case, and that is really the heart of the rule against conflicts. Elected officials are not supposed to participate in decisions that will affect them personally and directly.

If a state official approves a purchase of land for a highway, and that highway will run through land owned by members of his family, that’s a conflict of interest. If a United States Senator relies upon information not yet shared with the public to sell stock holdings before the news gets out, that’s a blatant conflict. (And yes, Senator Perdue, we’re all looking at you.) When a President refuses to divest himself of business interests that will be directly affected by his decisions in office, that’s a huge departure from ethical behavior.

It is hardly a secret that the Trump Administration has been brazenly unethical. Last year, ProPublica noted that the administration itself had reported (quietly) numerous ethical breaches. The report observed that President Trump’s ethics pledge had been considerably weaker than previous pledges, but that the government ethics office found violations of even those watered-down rules, particularly at three federal agencies: the Environmental Protection Agency, the Department of the Interior and the National Labor Relations Board.

Just one example: At the NLRB, Republican board member William Emanuel improperly voted on a case despite the fact that his former law firm, Littler Mendelson, represented one of the parties. (The firm represents corporations in labor disputes, and he also voted to eliminate regulations protecting unions.) Conflicts at the EPA have been widely covered by the media; numerous EPA officials chosen by Trump have come from fossil fuel companies and/or the law firms that represent them, and those officials have rolled back nearly 100 environmental regulations.

Then there’s former Interior Secretary Ryan Zinke, who is being investigated by the Justice Department’s public integrity section over allegations that he lied to his agency’s inspector general’s office. There are also two separate probes by the Department’s inspector general into Zinke’s ties to real estate deals in Montana and a proposed casino project in Connecticut.

As for Trump, there is at least one lawsuit charging violations of the Emoluments Clause still working its way through the courts–although the current composition of the Supreme Court doesn’t bode well for the outcome. 

The White House has refused to impose any sanctions for officials found to have committed ethical violations. That–as observers have noted–has sent a message of tacit approval, not just to the officials violating ethical standards, but to citizens who are aware of the breaches.

It isn’t just government. Cable news companies and social media giants routinely behave in ways that violate both journalism ethics and strictures against conflicts of interest. Facebook employs a rightwing internet site, The Daily Caller, as a “fact checker” even though the site is supported financially by the GOP. A story originally published by Salon reports that “The Daily Caller has taken tens of thousands of dollars to help Republican campaigns raise money while performing political fact-check services for Facebook.”

The Caller, a right-wing publication co-founded by Fox News personality Tucker Carlson, has also since 2016 sent dozens of emails “paid for by Trump Make America Great Again Committee,” a joint fundraising vehicle shared by the Trump campaign and the Republican National Committee, according to Media Matters.

Media Matters also revealed that The Daily Caller has sent sponsored emails on behalf of a number of Republican candidates this year. Media Matters posted screenshots of the emails, from Sen. Lindsey Graham, R-S.C.; Rep. Jim Jordan, R-Ohio; the Senate Conservatives Fund; and the Bikers for the President PAC.

Asking the Daily Caller to fact-check political posts is like asking a wife-beater to evaluate spousal abuse cases.

When ethical principles are routinely flouted by a society’s most powerful institutions, is it any wonder that Americans don’t know who or what they can trust?

Increasing Intensity–For Profit

Remember when Donald Rumsfeld talked about “known unknowns”? It was a clunky phrase, but in a weird way, it describes much of today’s world.

Take social media, for example. What we know is that pretty much everyone is on one social media platform or another–or many. What we don’t know is how the various algorithms those sites employ are affecting our opinions, our relationships and our politics. (Just one of the many reasons to be nervous about the reach of wacko conspiracies like QAnon, not to mention the upcoming election…)

A recent essay in the “subscriber only” section of Talking Points Memo focused on those algorithms, and especially on the effect of those used by Facebook. The analysis suggested that the algorithms were designed to intensify users’ engagement and increase Facebook’s profits–designs that have contributed mightily to the current polarization of American voters.

The essay referenced recent peer-reviewed research confirming something we probably all could have guessed: the more time people spend on Facebook, the more polarized their beliefs become. What most of us wouldn’t have guessed is the finding that the effect is five times greater for conservatives than for liberals–an effect that was not found for other social media sites.

The study looked at the effect on conservatives of Facebook usage and Reddit usage. The gist is that when conservatives binge on Facebook the concentration of opinion-affirming content goes up (more consistently conservative content) but on Reddit it goes down significantly. This is basically a measure of an echo chamber. And remember too that these are both algorithmic, automated sites. Reddit isn’t curated by editors. It’s another social network in which user actions, both collectively and individually, determine what you see. If you’ve never visited Reddit let’s also just say it’s not all for the faint of heart. There’s stuff there every bit as crazy and offensive as anything you’ll find on Facebook.

The difference is in the algorithms and what the two sites privilege in content. Read the article for the details but the gist is that Reddit focuses more on interest areas and viewers’ subjective evaluations of quality and interesting-ness whereas Facebook focuses on intensity of response.

Why the difference? Reddit is primarily a “social” site; Facebook is an advertising site. Its interest in stoking intensity is in service of that advertising–how long you stay on the platform, how much time you spend there, and especially how intensely you engage all translate into increased profit.

Facebook argues that the platform is akin to the telephone; no one blames the telephone when people use it to spread extremist views. It argues that the site is simply facilitating communication. But–as the essay points out–that’s clearly not true. Facebook’s algorithms are designed to encourage and amplify certain emotions and responses–something your telephone doesn’t do. It’s a “polarization/extremism generating machine.”

The essay ends with an intriguing–and apt–analogy to the economic description of externalities:

Producing nuclear energy is insanely profitable if you sell the energy, take no safety precautions and dump the radioactive waste into the local river. In other words, if the profits remain private and the costs are socialized. What makes nuclear energy an iffy financial proposition is the massive financial costs associated with doing otherwise. Facebook is like a scofflaw nuclear power company that makes insane profits because it runs its reactor in the open and dumps the waste in the bog behind the local high school.

Facebook’s externality is political polarization.

The question–as always–is “what should we do about it?”

Facebook, Disinformation And The First Amendment

These are tough times for Free Speech purists–of whom I am one.

I have always been persuaded by the arguments that support freedom of expression. In a genuine marketplace of ideas, I believe–okay, I want to believe–that better ideas will drive out worse ones. More compelling is the argument that, while some ideas may be truly dangerous, giving government the right to decide which ideas get expressed and which ones don’t would be much more dangerous.

But Facebook and other social media sites are really testing my allegiance to unfettered, unregulated–and amplified–expression. Recently, The Guardian reported that more than 3 million followers and members support the crazy QAnon conspiracy on Facebook, and their numbers are growing.

For those unfamiliar with QAnon, it

is a movement of people who interpret as a kind of gospel the online messages of an anonymous figure – “Q” – who claims knowledge of a secret cabal of powerful pedophiles and sex traffickers. Within the constructed reality of QAnon, Donald Trump is secretly waging a patriotic crusade against these “deep state” child abusers, and a “Great Awakening” that will reveal the truth is on the horizon.

Brian Friedberg, a senior researcher at the Harvard Shorenstein Center, is quoted as saying that Facebook is a “unique platform for recruitment and amplification,” and that he doubts QAnon would have been able to happen without the “affordances of Facebook.”

Facebook isn’t just providing a platform to QAnon groups–its algorithms are actively recommending them to users who may not otherwise have been exposed to them. And it isn’t only QAnon. According to the Wall Street Journal, Facebook’s own internal research in 2016 found that “64% of all extremist group joins are due to our recommendation tools.”

If the problem were limited to QAnon and other conspiracy theories, it would be troubling enough, but it isn’t. A recent essay in Time Magazine by Silicon Valley insider Roger McNamee began with an ominous paragraph:

If the United States wants to protect democracy and public health, it must acknowledge that internet platforms are causing great harm and accept that executives like Mark Zuckerberg are not sincere in their promises to do better. The “solutions” Facebook and others have proposed will not work. They are meant to distract us.

McNamee points to a predictable cycle: platforms are pressured to “do something” about harassment, disinformation or conspiracy theories. They respond by promising to improve their content moderation. But–as the essay points out–none have been successful at limiting the harm from third-party content, and so the cycle repeats. (As he notes, banning Alex Jones removed his conspiracy theories from the major sites, but did nothing to stop the flood of similar content from other people.)

The article identifies three reasons content moderation cannot work: scale, latency, and intent. Scale refers to the sheer volume of content–hundreds of millions of messages posted each day. Latency is the time it takes for even automated moderation to identify and remove a harmful message. The most important obstacle, however, is intent–a/k/a the platform’s business model.

The content we want internet platforms to remove is the content most likely to keep people engaged and online–and that makes it exceptionally valuable to the platforms.

As a result, the rules for AI and human moderators are designed to approve as much content as possible. Alone among the three issues with moderation, intent can only be addressed with regulation.

McNamee argues we should not have to accept disinformation as the price of access, and he offers a remedy:

At present, platforms have no liability for the harms caused by their business model. Their algorithms will continue to amplify harmful content until there is an economic incentive to do otherwise. The solution is for Congress to change incentives by implementing an exception to the safe harbor of Section 230 of the Communications Decency Act for algorithm amplification of harmful content and guaranteeing a right to litigate against platforms for this harm. This solution does not impinge on first amendment rights, as platforms are free to continue their existing business practices, except with liability for harms.

I’m not sure I share McNamee’s belief that his solution doesn’t implicate the First Amendment.

The (relative) newness of the Internet and social media creates uncertainty. What, exactly, are these platforms? How should they be classified? They aren’t traditional publishers–and third parties’ posts aren’t their “speech.” 

As the 2020 campaigns heat up, more attention is being paid to how Facebook promotes propaganda. Its refusal to remove or label clear lies from the Trump campaign has prompted advertisers to temporarily boycott the platform. Facebook may react by tightening some moderation, but ultimately, McNamee is right: that won’t solve the problem.

One more conundrum of our Brave New World…

Happy 4th!
