Information Silos And The First Amendment

The First Amendment contemplates and protects a “marketplace of ideas.” We have no precedent for an information environment in which there is no marketplace–no “agora” where different ideas and perspectives contend with each other for acceptance.

What we have instead are information “silos.” A column in the New York Times recently quoted Robert Post, a Yale professor, for the observation that people have always been crazy, but the internet has allowed them to find each other.

In those silos, they talk only to each other.

Social media has enabled the widespread and instantaneous transmission of lies in the service of political gain, and we are seeing the results. The question is: what should we do?

One set of scholars has concluded that the damage being done by misinformation and propaganda outweighs the damage of censorship. Rick Hasen, perhaps the pre-eminent scholar of election law, falls into that category:

Change is urgent to deal with election pathologies caused by the cheap speech era, but even legal changes as tame as updating disclosure laws to apply to online political ads could face new hostility from a Supreme Court taking a libertarian marketplace-of-ideas approach to the First Amendment. As I explain, we are experiencing a market failure when it comes to reliable information voters need to make informed choices and to have confidence in the integrity of our electoral system. But the Court may stand in the way of necessary reform.

I don’t know what Hasen considers “necessary reform,” but I’m skeptical.

I have always been a First Amendment purist, and I still agree with the balance struck by the Founders, who understood that–as pernicious and damaging as bad ideas can be–allowing government to determine which ideas get voiced is likely to be much more dangerous. (As a former ACLU colleague memorably put it, “Poison gas is a great weapon until the wind shifts.”)

That said, social media platforms aren’t government. Like brick-and-mortar private businesses, they can insist on certain behaviors by their customers. And like other private businesses, they can and should be regulated in the public interest. (At the very least, they should be required to apply their own rules consistently. People expressing concern/outrage over Twitter’s ban of Trump should be reminded that he would have encountered that ban much earlier had he been an ordinary user. Trump had flouted Twitter and Facebook rules for years.)

The Times column suggests we might learn from European approaches to issues of speech, including falsehoods and hate speech. Hate speech can only be banned in the U.S. if it is intended to incite imminent violence and is actually likely to do so. Europeans have decided that hate speech isn’t valuable public discourse– that racism isn’t an idea; it’s a form of discrimination.

The underlying philosophical difference here is about the right of the individual to self-expression. Americans value that classic liberal right very highly — so highly that we tolerate speech that might make others less equal. Europeans value the democratic collective and the capacity of all citizens to participate fully in it — so much that they are willing to limit individual rights.

The First Amendment was crafted for a political speech environment markedly different from today’s, as Tim Wu has argued. Government censorship was then the greatest threat to free speech. Today, those, including Trump, “who seek to control speech use new methods that rely on the weaponization of speech itself, such as the deployment of ‘troll armies,’ the fabrication of news, or ‘flooding’ tactics” that humiliate, harass, discourage, and even destroy targeted speakers.

Wu argues that Americans can no longer assume that the First Amendment is an adequate guarantee against malicious speech control and censorship. He points out that the marketplace of ideas has become corrupted by technologies “that facilitate the transmission of false information.”

American courts have long held that the best test of truth is the power of an idea to get itself accepted in the competition that characterizes a marketplace. They haven’t addressed what happens when there is no longer a functioning market–when citizens confine their communicative interactions to sites that depend for their profitability on confirming the biases of carefully targeted populations.

I certainly don’t think the answer is to dispense with–or water down–the First Amendment. But that Amendment was an effort to keep those with power from controlling information. In today’s information environment, platforms like Twitter and Facebook are as powerful and influential as government. Our challenge is to somehow rein in intentional propaganda and misinformation without throwing the baby out with the bathwater.

Any ideas how we do that?


Elementary Ethics

Yesterday, I posted about generalized social trust–its importance, and some of the reasons for its recent decline. Today, I want to focus on the role played by ethical behavior–in this case, the lack of ethical behavior–in the distressing and accelerating erosion of social trust.

One of the most obvious ethical principles is avoidance of conflicts of interest. I believe it was John Locke who noted that a person (okay, back then he said “a man”) could not be the judge in his own case, and that is really the heart of the rule against conflicts. Elected officials are not supposed to participate in decisions that will affect them personally and directly.

If a state official approves a purchase of land for a highway, and that highway will run through land owned by members of his family, that’s a conflict of interest. If a United States Senator relies upon information not yet shared with the public to sell stock holdings before the news gets out, that’s a blatant conflict. (And yes, Senator Perdue, we’re all looking at you.) When a President refuses to divest himself of business interests that will be directly affected by his decisions in office, that’s a huge departure from ethical behavior.

It is hardly a secret that the Trump Administration has been brazenly unethical. Last year, ProPublica noted that the administration itself had (quietly) reported numerous ethical breaches. The report noted that President Trump’s ethics pledge had been considerably weaker than previous pledges, but that the government ethics office found violations of even those watered-down rules, particularly at three federal agencies: the Environmental Protection Agency, the Department of the Interior and the National Labor Relations Board.

Just one example: At the NLRB, Republican board member William Emanuel improperly voted on a case despite the fact that his former law firm, Littler Mendelson, represented one of the parties. (The firm represents corporations in labor disputes, and he also voted to eliminate regulations protecting unions.) Conflicts at the EPA have been widely covered by the media; numerous EPA officials chosen by Trump have come from fossil fuel companies and/or the law firms that represent them, and those officials have rolled back nearly 100 environmental regulations.

Then there’s former Interior Secretary Ryan Zinke, who is being investigated by the Justice Department’s public integrity section over allegations that he lied to his agency’s inspector general’s office. There are also two separate probes by the Department’s inspector general into Zinke’s ties to real estate deals in Montana and a proposed casino project in Connecticut.

As for Trump, there is at least one lawsuit charging violations of the Emoluments Clause still working its way through the courts–although the current composition of the Supreme Court doesn’t bode well for the outcome. 

The White House has refused to impose any sanctions for officials found to have committed ethical violations. That–as observers have noted–has sent a message of tacit approval, not just to the officials violating ethical standards, but to citizens who are aware of the breaches.

It isn’t just government. Cable news companies and social media giants routinely behave in ways that violate both journalistic ethics and strictures against conflicts of interest. Facebook employs a right-wing internet site, The Daily Caller, as a “fact checker” even though the site is supported financially by the GOP. A story originally published by Salon reports that “The Daily Caller has taken tens of thousands of dollars to help Republican campaigns raise money while performing political fact-check services for Facebook.”

The Caller, a right-wing publication co-founded by Fox News personality Tucker Carlson, has also since 2016 sent dozens of emails “paid for by Trump Make America Great Again Committee,” a joint fundraising vehicle shared by the Trump campaign and the Republican National Committee, according to Media Matters.

Media Matters also revealed that The Daily Caller has sent sponsored emails on behalf of a number of Republican candidates this year. Media Matters posted screenshots of the emails, from Sen. Lindsey Graham, R-S.C.; Rep. Jim Jordan, R-Ohio; the Senate Conservatives Fund; and the Bikers for the President PAC.

Asking the Daily Caller to fact-check political posts is like asking a wife-beater to evaluate spousal abuse cases.

When ethical principles are routinely flouted by a society’s most powerful institutions, is it any wonder that Americans don’t know who or what they can trust?


Increasing Intensity–For Profit

Remember when Donald Rumsfeld talked about “known unknowns”? It was a clunky phrase, but in a weird way, it describes much of today’s world.

Take social media, for example. What we know is that pretty much everyone is on one or another (or many) social media platforms. What we don’t know is how the various algorithms those sites employ are affecting our opinions, our relationships and our politics. (Just one of the many reasons to be nervous about the reach of wacko conspiracies like QAnon, not to mention the upcoming election…)

A recent essay in the “subscriber only” section of Talking Points Memo focused on those algorithms, and especially on the effect of those used by Facebook. The analysis suggested that the algorithms are designed to intensify user engagement and thereby increase Facebook’s profits, a design that has contributed mightily to the current polarization of American voters.

The essay referenced recent peer-reviewed research confirming something we probably all could have guessed: the more time people spend on Facebook, the more polarized their beliefs become. What most of us wouldn’t have guessed is the finding that the effect is five times greater for conservatives than for liberals–an effect that was not found for other social media sites.

The study looked at the effect on conservatives of Facebook usage and Reddit usage. The gist is that when conservatives binge on Facebook the concentration of opinion-affirming content goes up (more consistently conservative content) but on Reddit it goes down significantly. This is basically a measure of an echo chamber. And remember too that these are both algorithmic, automated sites. Reddit isn’t curated by editors. It’s another social network in which user actions, both collectively and individually, determine what you see. If you’ve never visited Reddit let’s also just say it’s not all for the faint of heart. There’s stuff there every bit as crazy and offensive as anything you’ll find on Facebook.
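The “echo chamber” measure described there can be made concrete. Below is a minimal sketch, assuming invented feed data and a simple “share of opinion-affirming items” metric; it illustrates the concept and is not the researchers’ actual method:

```python
# Toy illustration of an "echo chamber" measure: the share of a user's
# feed that affirms their existing lean. All data here is invented.

def affirmation_share(feed, user_lean):
    """Fraction of feed items whose political lean matches the user's."""
    matching = sum(1 for item in feed if item["lean"] == user_lean)
    return matching / len(feed)

# Hypothetical feeds sampled before and after a period of heavy use.
feed_before = [{"lean": "conservative"}] * 6 + [{"lean": "liberal"}] * 4
feed_after = [{"lean": "conservative"}] * 9 + [{"lean": "liberal"}] * 1

print(affirmation_share(feed_before, "conservative"))  # 0.6
print(affirmation_share(feed_after, "conservative"))   # 0.9 -> more concentrated
```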

The difference is in the algorithms and what the two sites privilege in content. Read the article for the details, but the gist is that Reddit focuses more on interest areas and viewers’ subjective evaluations of quality and interestingness, whereas Facebook focuses on intensity of response.

Why the difference? Reddit is primarily a “social” site; Facebook is an advertising site. Its interest in stoking intensity is in service of that advertising: the amount of time you spend on the platform, and especially how intensely you are engaged while you are there, translates directly into increased profit.
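To make that difference concrete, here is a minimal sketch of the two ranking philosophies. It is emphatically not either company’s actual code: the weights, field names and sample posts are all invented for illustration.

```python
# Invented weights and fields; a sketch of two ranking philosophies,
# not Facebook's or Reddit's real algorithms.

def intensity_score(post):
    """Engagement-weighted ranking: signals of strong emotion (angry
    reactions, shares, long comment threads) count most, so the most
    provocative content rises."""
    return (1.0 * post["likes"] + 3.0 * post["angry_reactions"]
            + 4.0 * post["shares"] + 2.0 * post["comments"])

def quality_score(post):
    """Vote-based ranking: viewers' up- and down-votes within a topic
    community, so content the community judges poor sinks."""
    return post["upvotes"] - post["downvotes"]

divisive = {"likes": 50, "angry_reactions": 200, "shares": 150,
            "comments": 300, "upvotes": 80, "downvotes": 400}
well_liked = {"likes": 300, "angry_reactions": 5, "shares": 40,
              "comments": 60, "upvotes": 500, "downvotes": 30}

# The divisive post tops an intensity-weighted feed...
print(max([divisive, well_liked], key=intensity_score) is divisive)  # True
# ...but sinks under community voting.
print(max([divisive, well_liked], key=quality_score) is well_liked)  # True
```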

Facebook argues that the platform is akin to the telephone; no one blames the telephone when people use it to spread extremist views. The company says the site is simply facilitating communication. But–as the essay points out–that’s clearly not true. Facebook’s algorithms are designed to encourage and amplify some emotions and responses–something your telephone doesn’t do. It’s a “polarization/extremism generating machine.”

The essay ends with an intriguing–and apt–analogy to the economic description of externalities:

Producing nuclear energy is insanely profitable if you sell the energy, take no safety precautions and dump the radioactive waste into the local river. In other words, if the profits remain private and the costs are socialized. What makes nuclear energy an iffy financial proposition is the massive financial costs associated with doing otherwise. Facebook is like a scofflaw nuclear power company that makes insane profits because it runs its reactor in the open and dumps the waste in the bog behind the local high school.

Facebook’s externality is political polarization.

The question–as always–is “what should we do about it?”


Facebook, Disinformation And The First Amendment

These are tough times for Free Speech purists–of whom I am one.

I have always been persuaded by the arguments that support freedom of expression. In a genuine marketplace of ideas, I believe–okay, I want to believe–that better ideas will drive out worse ones. More compelling is the argument that, while some ideas may be truly dangerous, giving government the right to decide which ideas get expressed and which ones don’t would be much more dangerous.

But Facebook and other social media sites are really testing my allegiance to unfettered, unregulated–and amplified–expression. Recently, The Guardian reported that more than 3 million followers and members support the crazy QAnon conspiracy on Facebook, and their numbers are growing.

For those unfamiliar with QAnon, it

is a movement of people who interpret as a kind of gospel the online messages of an anonymous figure – “Q” – who claims knowledge of a secret cabal of powerful pedophiles and sex traffickers. Within the constructed reality of QAnon, Donald Trump is secretly waging a patriotic crusade against these “deep state” child abusers, and a “Great Awakening” that will reveal the truth is on the horizon.

Brian Friedberg, a senior researcher at the Harvard Shorenstein Center, is quoted as saying that Facebook is a “unique platform for recruitment and amplification,” and that he doubts QAnon would have been able to happen without the “affordances of Facebook.”

Facebook isn’t just providing a platform to QAnon groups–its algorithms are actively recommending them to users who might not otherwise have been exposed to them. And it isn’t only QAnon. According to the Wall Street Journal, Facebook’s own internal research in 2016 found that “64% of all extremist group joins are due to our recommendation tools.”
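The mechanism is easy to see in miniature. The toy “groups you may like” recommender below, with invented users and group names and no claim to resemble Facebook’s actual system, shows how co-membership recommendations can steer a user toward a fringe group she never searched for:

```python
# A toy co-membership recommender (invented data; not Facebook's system).
from collections import Counter

memberships = {
    "alice": {"gardening", "fringe_conspiracy"},
    "bob": {"gardening", "fringe_conspiracy"},
    "carol": {"gardening"},
}

def recommend(user):
    """Suggest the group most often co-joined by users who share a group
    with this user: a common, and here pernicious, heuristic."""
    mine = memberships[user]
    counts = Counter(
        group
        for other, groups in memberships.items()
        if other != user and groups & mine
        for group in groups - mine
    )
    return counts.most_common(1)

# Carol only joined a gardening group, but because two fellow gardeners
# also joined a fringe group, that's exactly what she's offered.
print(recommend("carol"))  # [('fringe_conspiracy', 2)]
```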

If the problem were limited to QAnon and other conspiracy theories, it would be troubling enough, but it isn’t. A recent essay by a Silicon Valley insider named Roger McNamee in Time Magazine began with an ominous paragraph:

If the United States wants to protect democracy and public health, it must acknowledge that internet platforms are causing great harm and accept that executives like Mark Zuckerberg are not sincere in their promises to do better. The “solutions” Facebook and others have proposed will not work. They are meant to distract us.

McNamee points to a predictable cycle: platforms are pressured to “do something” about harassment, disinformation or conspiracy theories. They respond by promising to improve their content moderation. But–as the essay points out–none has been successful at limiting the harm from third-party content, and so the cycle repeats. (As he notes, banning Alex Jones removed his conspiracy theories from the major sites, but did nothing to stop the flood of similar content from other people.)

The article identifies three reasons content moderation cannot work: scale, latency, and intent. Scale refers to the sheer volume involved–hundreds of millions of messages posted each day. Latency is the time it takes for even automated moderation to identify and remove a harmful message. The most important obstacle, however, is intent–a/k/a the platform’s business model.

The content we want internet platforms to remove is the content most likely to keep people engaged and online–and that makes it exceptionally valuable to the platforms.

As a result, the rules for AI and human moderators are designed to approve as much content as possible. Alone among the three issues with moderation, intent can only be addressed with regulation.
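A back-of-envelope calculation shows why scale and latency alone doom reactive moderation, even before intent enters the picture. Every number below is an assumption chosen for illustration, not a Facebook figure:

```python
# Illustrative arithmetic only; all values are assumptions.
posts_per_day = 300_000_000       # assumed daily volume of new posts
harmful_fraction = 0.001          # assume 0.1% violate the rules
views_per_hour = 50               # assumed average views per post per hour
latency_hours = 2                 # assumed delay before moderation removes a post

harmful_posts = posts_per_day * harmful_fraction
views_before_removal = harmful_posts * views_per_hour * latency_hours
print(f"{views_before_removal:,.0f} views of rule-breaking content per day")
# With these assumptions: 30,000,000 daily views, even if moderation
# eventually catches every single harmful post.
```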

McNamee argues we should not have to accept disinformation as the price of access, and he offers a remedy:

At present, platforms have no liability for the harms caused by their business model. Their algorithms will continue to amplify harmful content until there is an economic incentive to do otherwise. The solution is for Congress to change incentives by implementing an exception to the safe harbor of Section 230 of the Communications Decency Act for algorithm amplification of harmful content and guaranteeing a right to litigate against platforms for this harm. This solution does not impinge on first amendment rights, as platforms are free to continue their existing business practices, except with liability for harms.

I’m not sure I share McNamee’s belief that his solution doesn’t implicate the First Amendment.

The (relative) newness of the Internet and social media creates uncertainty. What, exactly, are these platforms? How should they be classified? They aren’t traditional publishers–and third parties’ posts aren’t their “speech.” 

As the 2020 campaigns heat up, more attention is being paid to how Facebook promotes propaganda. Its refusal to remove or label clear lies from the Trump campaign has prompted advertisers to temporarily boycott the platform. Facebook may react by tightening some moderation, but ultimately, McNamee is right: that won’t solve the problem.

One more conundrum of our Brave New World…

Happy 4th!


Facebook And False Equivalence

Is it just me, or do the months between now and November seem interminable?

In the run-up to what will be an existentially important decision for America’s future, we are living through an inconsistent, contested and politicized quarantine; mammoth protests triggered by a series of racist police murders of unarmed black men, cynically escalated into riots by advocates of race war; and daily displays of worsening insanity from the White House–including, but certainly not limited to, America’s withdrawal from the World Health Organization in the middle of a pandemic, followed by a phone call in which our “eloquent” President called governors “weak” and “jerks” for not waging war on their own citizens.

And in the midst of it all, a pissing match between the Psychopath-in-Chief and Twitter, which has finally–belatedly–decided to label some of Trump’s incendiary and inaccurate tweets for what they are.

We can only hope this glimmer of responsibility from Twitter continues. The platform’s unwillingness to apply the same rules to Trump that it applies to other users hasn’t just been cowardly–it has given his constant lies a surface plausibility and normalized his bile. We should all applaud Twitter’s belated recognition of its responsibility.

Then, of course, there’s Facebook.

It isn’t that Mark Zuckerberg is unaware of the harms being caused by Facebook’s current algorithms. Numerous media outlets have reported on the company’s internal investigations into the way those algorithms encourage division and distort political debate. In her column in last Sunday’s New York Times, Maureen Dowd reported:

The Wall Street Journal had a chilling report a few days ago that Facebook’s own research in 2018 revealed that “our algorithms exploit the human brain’s attraction to divisiveness. If left unchecked,” Facebook would feed users “more and more divisive content in an effort to gain user attention & increase time on the platform.”

Mark Zuckerberg shelved the research.

The reasons are both depressing and ironic: in addition to concerns that less vitriol might mean users spending less time on the site, Zuckerberg understands that reducing the spread of untrue, divisive content would require eliminating substantially more material from the right than the left, opening the company to accusations of bias against conservatives.

Similar fears are said to be behind Facebook’s unwillingness to police political speech in advertisements and posts.

Think about it: Facebook knows that its platform is enormously influential. It knows that the Right trades in conspiracy theories and intentional misinformation to a much greater extent than the Left, skewing the information landscape in dangerous ways. But for whatever reason–in order to insulate the company from regulation, to curry favor with wealthy investors, or to escape the anger of the Breitbarts and Limbaughs, not to mention Trump–it has chosen to “allow people to make their own decisions.”

The ubiquity of social media presents lawmakers with significant challenges. Despite all the blather from the White House and the uninformed hysteria of ideologues, the issue isn’t censorship or freedom of speech–as anyone who has taken elementary civics knows, the Bill of Rights prohibits government from censoring communication. Facebook and Twitter and other social media sites aren’t government. For that matter, under current law, they aren’t even considered “publishers” who could be held accountable for whatever inaccurate drivel a user posts.

That means social media companies have the right to dictate their own terms of use. There is no legal impediment to Facebook or Twitter “censoring” posts they consider vile, obscene or untrue. (Granted, there are significant practical and marketing concerns involved in such an effort.) On Monday, reports emerged that Facebook’s own employees–including several in management–are clamoring for the platform to emulate Twitter’s new approach.

There have always been cranks and liars, racists and political propagandists. There haven’t always been easily accessible, worldwide platforms through which they could connect with similarly twisted individuals and spread their poisons. One of the many challenges of our technological age is devising constitutionally-appropriate ways to regulate those platforms.

If Mark Zuckerberg is unwilling to make Facebook at least a minimally responsible overseer of our national conversation–if he and his board cannot make and enforce reasonable rules about veracity in posts–a future government will undoubtedly do it for them, something that could set a dangerous precedent.

Refusing to be responsible–supporting a false equivalency that is tearing the country apart–is a much riskier strategy than Zuckerberg seems to recognize.

On the other hand, it finally seems to be dawning on Jack Dorsey, CEO of Twitter, that (as Dowd put it in her column) “Trump and Twitter were a match made in hell.”