Category Archives: Free Speech

Read This Book

Last week, I finished reading Jonathan Rauch’s The Constitution of Knowledge. I highly recommend it.

The book is an extraordinarily readable primer on epistemology (how we humans know what we know) and a defense of the proposition that knowledge is a product of collective and institutional effort–what we might call the scientific method writ large. (As Rauch points out, knowledge is “a conversation, not a destination,” and falsification is an essential element in the development of knowledge.)

He begins with the thesis that the open society is defined by three social systems: economic, political, and epistemic, and that each of those systems handles social decision-making about resources, power, and truth. The book goes on to compare and contrast those social systems, and to connect today’s challenges to the long history of philosophical and scientific inquiries about the nature of reality, the differences between faith and fact, and the social and governmental importance of occupying the same “reality-based” community.

The book is also a stirring defense of free speech against assaults from both the right (censorship) and the left (cancel culture).

Rauch warns that the real danger in a culture where lying is ubiquitous isn’t simply misdirection; it is the undermining of our ability to distinguish between fact and falsehood. As others have noted, the methodology of censorship has changed; today, rather than efforts to simply suppress uncongenial ideas (virtually impossible in our digital age), the tactic is to “flood the information zone with shit”–to confuse, undermine and paralyze rather than brainwash.

Rauch shares a concern that regular readers of this blog will recognize as a preoccupation of mine: that, in the digital age, the marketplace of ideas is in danger of being supplanted by a marketplace of realities.

Perhaps the greatest virtue of the book is Rauch’s detailed explanation of why facts are–and must be– a social product.

Whether and where and how much of the time we think well thus depends not just on how biased we may be as individuals or even how we behave in unstructured groups; it also depends, crucially, on the design of the social environment in which we find ourselves. To phrase the point more bluntly: It’s the institutions, stupid.

As he says, our task is to create a “social environment which increases rightness and reduces wrongness.” Unlike our governmental constitution, the constitution of knowledge is unwritten, but no less important–it is a “social operating system” that aims to elicit cooperation and resolve differences on the “basis of rules rather than personal authority or tribal affiliation or brute force.” And he reminds us that information technology is very different from knowledge technology.

Information can be simply emitted, but knowledge, the product of a rich social interaction, must be achieved.

Rauch also reminds readers that all knowledge is necessarily provisional–that as we learn more, we revisit and refine what we “know” in light of new information and new knowledge, and that this inevitable impermanence can be very threatening to individuals who need bright lines and eternal truths.

Rauch concludes the discussion with advice on how the reality-based community can respond to and marginalize the trolls and virtue signalers and others who are using our new tools of communication to pollute the national discourse.

Speaking of that national discourse, I thought it was interesting to look at the ideological diversity of those who provided the inevitable jacket “blurbs” praising the book, because they represent a variety of (reality-based) political and social perspectives. Their range testifies to the objectivity of the content.

Bottom line: this is a truly important book. It provides an essential overview of how humans know, of how the “Constitution of Knowledge” overcomes individual errors and biases to allow the collective “us” to distinguish between fact and fiction, and of why that process is so essential to social construction and stability.

The foregoing description does a real disservice to the scope and richness of this book. You need to read it.

 

Free Speech

“Cancel culture.” “Political correctness.” “Hate speech.” Americans have been arguing about free speech since passage of the Alien and Sedition Acts. Recently, there have even been reports of disagreement within that bastion of free-speech defense, the ACLU.

As we all know, no one is trying to shut up people with whom they agree. The First Amendment was designed to protect, as Justice Holmes memorably put it, “freedom for the thought that we hate.” In an effort to explain why that insight is so important, I often shared with my students a personal experience from “back in the day”–early in my long-ago tenure as Executive Director of Indiana’s ACLU.

Members of the KKK had applied to use the steps of the Indiana Statehouse for a rally. Then-Governor Evan Bayh (who surely knew better) refused to allow it. The Statehouse steps had routinely been made available to other organizations, and despite Bayh’s posturing, the law clearly forbade the government from granting or denying such use based on the content of the message to be delivered.

So the Klan came to the ACLU.

At the time, those who ended up representing the rights of this odious organization included the Jewish Executive Director (me), the affiliate’s one secretary, who was Black, and a cooperating attorney, who was gay.

Each of us knew that if the Klan ever achieved power, we’d be among the first to be marginalized or even eliminated–so why on earth would we protect the organization’s right to spew its bigotry? Because we also knew that, in a system where government can pick and choose who has rights, no one really has rights. The government that can muzzle the KKK today can muzzle me tomorrow–and as we have (painfully) learned, we can’t assume that good people will always be in charge of that government.

As one ACLU leader put it, poison gas is a great weapon until the wind shifts.

As with so many other misunderstood elements of the Bill of Rights, the issue isn’t what you may say or do–it is who gets to decide what you may say or do. And right now, even as state-level Republican legislators accuse the left of “canceling” their messages and “censoring” Dr. Seuss, they are waging a determined war on protesters’ and educators’ right to say things with which they disagree.

As Michelle Goldberg recently reported,

In a number of states, Republicans have responded to last year’s racial justice uprising by cracking down on demonstrators. As The Times reported in April, during 2021 legislative sessions, lawmakers in 34 states have introduced 81 anti-protest bills. An Indiana bill would bar people convicted of unlawful assembly from state employment. A Minnesota proposal would prohibit people convicted of unlawful protesting from getting student loans, unemployment benefits or housing assistance. Florida passed a law protecting drivers from civil liability if they crash their cars into people protesting in the streets.

Meanwhile, the right-wing moral panic about critical race theory has led to a rash of statewide bills barring schools — including colleges and universities — from teaching what are often called “divisive concepts,” including the idea that the United States is fundamentally racist or sexist. Even where such laws haven’t been passed, the campaign has had a chilling effect; the Kansas Board of Regents recently asked state universities for a list of courses that include critical race theory.

As Goldberg says, there’s nothing new about the left growing weary of sticking up for the rights of reactionaries. Personally, I would find it really satisfying to shut down Faux News, or to tell the My Pillow Guy to go stuff a sock in it. The problem is, satisfying that urge won’t take us where we need to go. Goldberg’s last sentence is worth contemplating.

 Maybe every generation has to learn for itself that censorship isn’t a shortcut to justice.

To which I would just add: and criticism of your position by people who aren’t using the power of government to shut you up isn’t censorship.

 

Section 230

These are hard times for free speech advocates. The Internet–with its capacity for mass distribution of lies, misinformation, bigotry and incitement to violence–cries out for reform, but it is not apparent (certainly not to me) what sort of reforms might curb the dangers without also stifling free expression.

One approach is focused on a law that is older than Google: Section 230 of the Communications Decency Act. 

What is Section 230? Is it really broken? Can it be fixed without inadvertently doing more damage? 

The law is just 26 words that allow online platforms to make rules about what people can and can’t post without being held legally responsible for that content. (There are some exceptions, but not many.) As a recent newsletter on technology put it (sorry, the link no longer works),

If I accuse you of murder on Facebook, you might be able to sue me, but you can’t sue Facebook. If you buy a defective toy from a merchant on Amazon, you might be able to take the seller to court, but not Amazon. (There is some legal debate about this, but you get the gist.)

The law created the conditions for Facebook, Yelp and Airbnb to give people a voice without being sued out of existence. But now Republicans and Democrats are asking whether the law gives tech companies either too much power or too little responsibility for what happens under their watch.


Republicans mostly worry that Section 230 gives internet companies too much power to suppress online debate and discussion, while Democrats mostly worry that it lets those companies ignore or even enable dangerous incitements and/or illegal transactions. 

The fight over Section 230 is really a fight over the lack of control exercised by Internet giants like Facebook and Twitter. In far too many situations, the law allows people to lie online without consequence–let’s face it, the high school kid spreading lewd rumors about a girl who turned him down for a date isn’t likely to be sued, no matter how damaging, reprehensible and untrue his posts may be. The recent defamation suits brought by the voting machine manufacturers were salutary and satisfying, but most people harmed by online bigotry and disinformation are not in a position to pursue such remedies.

The question being debated among techies and lawyers is whether Section 230 is too protective; whether it reduces incentives for platforms like Facebook and Twitter to make and enforce stronger measures that would be more effective in curtailing obviously harmful rhetoric and activities. 

Several proposed “fixes” are currently being considered. The Times article described them.


Fix-it Plan 1: Raise the bar. Some lawmakers want online companies to meet certain conditions before they get the legal protections of Section 230.

One example: A congressional proposal would require internet companies to report to law enforcement when they believe people might be plotting violent crimes or drug offenses. If the companies don’t do so, they might lose the legal protections of Section 230 and the floodgates could open to lawsuits.

Facebook this week backed a similar idea, which proposed that it and other big online companies would have to have systems in place for identifying and removing potentially illegal material.

Another proposed bill would require Facebook, Google and others to prove that they hadn’t exhibited political bias in removing a post. Some Republicans say that Section 230 requires websites to be politically neutral. That’s not true.

Fix-it Plan 2: Create more exceptions. One proposal would restrict internet companies from using Section 230 as a defense in legal cases involving activity like civil rights violations, harassment and wrongful death. Another proposes letting people sue internet companies if child sexual abuse imagery is spread on their sites.

Also in this category are legal questions about whether Section 230 applies to the involvement of an internet company’s own computer systems. When Facebook’s algorithms helped circulate propaganda from Hamas, as David detailed in an article, some legal experts and lawmakers said that Section 230 legal protections should not have applied and that the company should have been held complicit in terrorist acts.


Slate has an article describing all of the proposed changes to Section 230.

I don’t have a firm enough grasp of the issues involved–let alone the technology needed to accomplish some of the proposed changes–to have a favored “fix” to Section 230.

I do think that this debate foreshadows others that will arise in a world where massive international companies–online and not–in many cases wield more power than governments. Constraining these powerful entities will require new and very creative approaches.

Mandating Fairness

Whenever one of my posts addresses America’s problem with disinformation, at least one commenter will call for re-institution of the Fairness Doctrine–despite the fact that, each time, another commenter (usually a lawyer) will explain why that doctrine wouldn’t apply to social media or most other Internet sites causing contemporary mischief.

The Fairness Doctrine was contractual. Government owned the broadcast channels that were being auctioned for use by private media companies, and thus had the right to require certain undertakings from the winning bidders. In other words, in addition to the payments being tendered, bidders had to promise to operate “in the public interest,” and the public interest included an obligation to give contending voices a fair hearing.

The government couldn’t have passed a law requiring newspapers and magazines to be “fair,” and it cannot legally require fair and responsible behavior from cable channels and social media platforms, no matter how much we might wish it could.

So–in this era of QAnon and Fox News and Rush Limbaugh clones– where does that leave us?

The Brookings Institution, among others, has wrestled with the issue.

The violence of Jan. 6 made clear that the health of online communities and the spread of disinformation represents a major threat to U.S. democracy, and as the Biden administration takes office, it is time for policymakers to consider how to take a more active approach to counter disinformation and form a public-private partnership aimed at identifying and countering disinformation that poses a risk to society.

Brookings says that a non-partisan public-private effort is required because disinformation crosses platforms and transcends political boundaries. They recommend a “public trust” that would provide analysis and policy proposals intended to defend democracy against the constant stream of disinformation and the illiberal forces at work disseminating it. It would identify emerging trends and methods of sharing disinformation, and would support data-driven initiatives to improve digital media literacy.

Frankly, I found the Brookings proposal unsatisfactorily vague, but there are other, more concrete proposals for combatting online and cable propaganda. Dan Mullendore pointed to one promising tactic in a comment the other day. Fox News income isn’t, as we might suppose, dependent mostly on advertising; significant sums come from cable fees. And one reason those fees are so lucrative is that Fox gets bundled with other channels, meaning that many people pay for Fox who wouldn’t if it weren’t part of a package deal. A few days ago, on Twitter, a lawyer named Pam Keith pointed out that a simple regulatory change ending bundling would force Fox and other channels to compete for customers’ eyes, ears and pocketbooks.

Then there’s the current debate over Section 230 of the Communications Decency Act, with many critics advocating its repeal, and others, like the Electronic Frontier Foundation, defending it.

Section 230 says that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. § 230). In other words, online intermediaries that host or republish speech are protected against a range of laws that might otherwise be used to hold them legally responsible for what others say and do. The protected intermediaries include not only regular Internet Service Providers (ISPs), but also a range of “interactive computer service providers,” including basically any online service that publishes third-party content. Though there are important exceptions for certain criminal and intellectual property-based claims, CDA 230 creates a broad protection that has allowed innovation and free speech online to flourish.

Most observers believe that an outright repeal of Section 230 would destroy social networks as we know them (the linked article explains why, as do several others), but there is a middle ground between total repeal and naive calls for millions of users to voluntarily leave platforms that fail to block hateful and/or misleading posts.

Fast Company has suggested that middle ground.

One possibility is that the current version of Section 230 could be replaced with a requirement that platforms use a more clearly defined best-efforts approach, requiring them to use the best technology and establishing some kind of industry standard they would be held to for detecting and mediating violating content, fraud, and abuse. That would be analogous to standards already in place in the area of advertising fraud….

Another option could be to limit where Section 230 protections apply. For example, it might be restricted only to content that is unmonetized. In that scenario, you would have platforms displaying ads only next to content that had been sufficiently analyzed that they could take legal responsibility for it. 

A “one size fits all” reinvention of the Fairness Doctrine isn’t going to happen. But that doesn’t mean we can’t make meaningful, legal improvements that would make a real difference online.

 

Information Silos And The First Amendment

The First Amendment contemplates and protects a “marketplace of ideas.” We have no precedent for an information environment in which there is no marketplace–no “agora” where different ideas and perspectives contend with each other for acceptance.

What we have instead are information “silos”–a column in the New York Times recently quoted Robert Post, a Yale professor, for the observation that people have always been crazy, but the internet has allowed them to find each other.

In those silos, they talk only to each other.

Social media has enabled the widespread and instantaneous transmission of lies in the service of political gain, and we are seeing the results. The question is: what should we do?

One set of scholars has concluded that the damage being done by misinformation and propaganda outweighs the damage of censorship. Rick Hasen, perhaps the preeminent scholar of election law, falls into that category:

Change is urgent to deal with election pathologies caused by the cheap speech era, but even legal changes as tame as updating disclosure laws to apply to online political ads could face new hostility from a Supreme Court taking a libertarian marketplace-of-ideas approach to the First Amendment. As I explain, we are experiencing a market failure when it comes to reliable information voters need to make informed choices and to have confidence in the integrity of our electoral system. But the Court may stand in the way of necessary reform.

I don’t know what Hasen considers “necessary reform,” but I’m skeptical.

I have always been a First Amendment purist, and I still agree with the balance struck by the Founders, who understood that–as pernicious and damaging as bad ideas can be–allowing government to determine which ideas get voiced is likely to be much more dangerous. (As a former ACLU colleague memorably put it, “Poison gas is a great weapon until the wind shifts.”)

That said, social media platforms aren’t government. Like brick-and-mortar private businesses, they can insist on certain behaviors by their customers. And like other private businesses, they can and should be regulated in the public interest. (At the very least, they should be required to apply their own rules consistently. People expressing concern/outrage over Twitter’s ban of Trump should be reminded that he would have encountered that ban much earlier had he been an ordinary user. Trump had flouted Twitter and Facebook rules for years.)

The Times column suggests we might learn from European approaches to issues of speech, including falsehoods and hate speech. Hate speech can only be banned in the U.S. if it is intended to incite imminent violence and is actually likely to do so. Europeans have decided that hate speech isn’t valuable public discourse– that racism isn’t an idea; it’s a form of discrimination.

The underlying philosophical difference here is about the right of the individual to self-expression. Americans value that classic liberal right very highly — so highly that we tolerate speech that might make others less equal. Europeans value the democratic collective and the capacity of all citizens to participate fully in it — so much that they are willing to limit individual rights.

The First Amendment was crafted for a political speech environment that was markedly different from today’s, as Tim Wu has argued. Government censorship was then the greatest threat to free speech. Today, those, including Trump, “who seek to control speech use new methods that rely on the weaponization of speech itself, such as the deployment of ‘troll armies,’ the fabrication of news, or ‘flooding’ tactics” that humiliate, harass, discourage, and even destroy targeted speakers.

Wu argues that Americans can no longer assume that the First Amendment is an adequate guarantee against malicious speech control and censorship. He points out that the marketplace of ideas has become corrupted by technologies “that facilitate the transmission of false information.”

American courts have long held that the best test of truth is the power of an idea to get itself accepted in the competition that characterizes a marketplace. They haven’t addressed what happens when there is no longer a functioning market–when citizens confine their communicative interactions to sites that depend for their profitability on confirming the biases of carefully targeted populations.

I certainly don’t think the answer is to dispense with–or water down– the First Amendment. But that Amendment was an effort to keep those with power from controlling information. In today’s information environment, platforms like Twitter, Facebook, etc. are as powerful and influential as government. Our challenge is to somehow rein in intentional propaganda and misinformation without throwing the baby out with the bathwater.

Any ideas how we do that?