Tag Archives: censorship

Horton Hears A Censor

A number of years ago, when my husband was still practicing architecture, he presented a school board with preliminary plans for a school they’d hired him to design. There were several decisions on which he wanted their feedback, but the board focused entirely–for an hour!–on arguments over the size of an elevator, and whether it should accommodate one wheelchair or two.

As he left, he ran into a friend, and explained his frustration with the school board’s focus. The friend said something I’ve thought about on multiple occasions since: “people argue about what they understand.” Insightful as that observation was, I think it needs amending to “People argue about what they think they understand.”

Which brings me to censorship, accusations of “cancel culture,” and Dr. Seuss, with a brief side trip to Mr. Potato Head.

The right wing is exercised–even hysterical–and screaming “censorship” about a decision made by the company that controls publication of the Dr. Seuss books. It will suspend publication of six of those sixty-odd books, based upon a determination that they contain racist and insensitive imagery. The decision didn’t affect “Green Eggs and Ham,” “The Cat in the Hat,” “Horton Hears a Who” or numerous other titles.

This is not censorship, not only because they aren’t proposing to collect and destroy the numerous copies that already exist but because, in our constitutional system, only government can censor speech. In fact, a decision by the company that owns the rights to Dr. Seuss’ books is an exercise of that company’s own free speech rights.

Think of it this way: you post something to Twitter, then think better of it and remove that post. You haven’t been censored; you made both the initial decision to post whatever it was and the subsequent decision to remove it.

Or think about that same example in the context of contemporary criticism of so-called “cancel culture.” You post something that other people find offensive. They respond by criticizing you. Your public-sector employer hasn’t punished you and, for that matter, no government entity has taken any action, but many people have expressed disdain or worse. Again–that is neither censorship nor “cancellation.”

The Free Speech clause of the First Amendment protects us from government actions that suppress the free expression of our opinions or our ability to access particular information or ideas. It doesn’t protect us from the disapproval of our fellow-citizens. It doesn’t even protect us from being sanctioned or fired by our private-sector employer, because that employer has its own First Amendment right to ensure that messages being publicly communicated by its employees are consistent with its own message.

When Walmart decides not to carry a particular book, when a local newspaper (remember those?) rejects an advertisement or refuses to print a letter to the editor, when the manufacturer of “Mr. Potato Head” decides to drop the “Mr,” those entities are exercising their First Amendment rights. They aren’t “censoring.” They aren’t even “cancelling.”

You are within your rights to disagree with the decision by those who own the Dr. Seuss catalogue (actually, that “company” is run by the author’s family, a.k.a. the Seuss estate). Disagreement and criticism are your rights under the First Amendment. You are free to argue that the decision was misplaced, that it constituted over-reaction…whatever. But since the government did not require that decision–or participate in it–it wasn’t censorship. And unless the criticism was accompanied by ostracism–by removal of the author’s books from bookstores and libraries–it isn’t cancellation, either.

Americans have a right to freedom of expression. We have no right–constitutional or otherwise– to freedom from criticism. The desire of America’s culture warriors to “own the libs” doesn’t trump that reality.

As for the decision to stop printing and circulating six books with unfortunate portrayals, we’d do well to heed Charles Blow. In a column for the New York Times, Blow reminded readers that the images we present to young children can be highly corrosive and racially vicious. A Times article on the controversy noted that a number of other children’s books have been edited to purge what we now recognize as racist stereotypes. Often, those edits were made by the authors themselves, who belatedly recognized that they had engaged in hurtful stereotyping.

Agree or disagree with a given decision–whether by the Dr. Seuss estate or by Hasbro, the Potato Head manufacturer–it was a decision each had the right to make, and one the rest of us have an obligation to respect.

Mandating Fairness

Whenever one of my posts addresses America’s problem with disinformation, at least one commenter will call for re-institution of the Fairness Doctrine–despite the fact that, each time, another commenter (usually a lawyer) will explain why that doctrine wouldn’t apply to social media or most other Internet sites causing contemporary mischief.

The Fairness Doctrine was contractual. Government owned the broadcast channels that were being auctioned for use by private media companies, and thus had the right to require certain undertakings from responsive bidders. In other words, in addition to the payments being tendered, bidders had to promise to operate “in the public interest,” and the public interest included an obligation to give contending voices a fair hearing.

The government couldn’t have passed a law requiring newspapers and magazines to be “fair,” and it cannot legally require fair and responsible behavior from cable channels and social media platforms, no matter how much we might wish it could.

So–in this era of QAnon and Fox News and Rush Limbaugh clones– where does that leave us?

The Brookings Institution, among others, has wrestled with the issue.

The violence of Jan. 6 made clear that the health of online communities and the spread of disinformation represents a major threat to U.S. democracy, and as the Biden administration takes office, it is time for policymakers to consider how to take a more active approach to counter disinformation and form a public-private partnership aimed at identifying and countering disinformation that poses a risk to society.

Brookings says that a non-partisan public-private effort is required because disinformation crosses platforms and transcends political boundaries. They recommend a “public trust” that would provide analysis and policy proposals intended to defend democracy against the constant stream of disinformation and the illiberal forces at work disseminating it. It would identify emerging trends and methods of sharing disinformation, and would support data-driven initiatives to improve digital media literacy.

Frankly, I found the Brookings proposal unsatisfactorily vague, but there are other, more concrete proposals for combatting online and cable propaganda. Dan Mullendore pointed to one promising tactic in a comment the other day. Fox News income isn’t–as we might suppose–dependent mostly on advertising; significant sums come from cable fees. And one reason those fees are so lucrative is that Fox gets bundled with other channels, meaning that many people pay for Fox who wouldn’t pay for it if it weren’t a package deal. A few days ago, on Twitter, a lawyer named Pam Keith pointed out that a simple regulatory change ending bundling would force Fox and other channels to compete for customers’ eyes, ears and pocketbooks.

Then there’s the current debate over Section 230 of the Communications Decency Act, with many critics advocating its repeal, and others, like the Electronic Frontier Foundation, defending it.

Section 230 says that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. § 230). In other words, online intermediaries that host or republish speech are protected against a range of laws that might otherwise be used to hold them legally responsible for what others say and do. The protected intermediaries include not only regular Internet Service Providers (ISPs), but also a range of “interactive computer service providers,” including basically any online service that publishes third-party content. Though there are important exceptions for certain criminal and intellectual property-based claims, CDA 230 creates a broad protection that has allowed innovation and free speech online to flourish.

Most observers believe that an outright repeal of Section 230 would destroy social networks as we know them (the linked article explains why, as do several others), but there is a middle ground between total repeal and naive calls for millions of users to voluntarily leave platforms that fail to block hateful and/or misleading posts.

Fast Company has suggested that middle ground.

One possibility is that the current version of Section 230 could be replaced with a requirement that platforms use a more clearly defined best-efforts approach, requiring them to use the best technology and establishing some kind of industry standard they would be held to for detecting and mediating violating content, fraud, and abuse. That would be analogous to standards already in place in the area of advertising fraud….

Another option could be to limit where Section 230 protections apply. For example, it might be restricted only to content that is unmonetized. In that scenario, you would have platforms displaying ads only next to content that had been sufficiently analyzed that they could take legal responsibility for it. 

A “one size fits all” reinvention of the Fairness Doctrine isn’t going to happen. But that doesn’t mean we can’t make meaningful, legal improvements that would make a real difference online.

The New Censorship

One of the many causes of increased tribalism and chaos worldwide is the unprecedented nature of the information environment we inhabit. A quote from Yuval Noah Harari’s Homo Deus is instructive–

In the past, censorship worked by blocking the flow of information. In the twenty-first century, censorship works by flooding people with irrelevant information.

We are only dimly beginning to understand the nature of the threat posed by the mountains of “information” with which we are inundated. Various organizations are mounting efforts to fight that threat–to increase news literacy and control disinformation– with results that are thus far imperceptible.

The Brookings Institution has engaged in one of those efforts; it has a series on Cybersecurity and Election Interference, and a recent report in that series offered four steps to “stop the spread of disinformation.” The linked report begins by making an important point about the actual targets of such disinformation.

The public discussion of disinformation often focuses on targeted candidates, without recognizing that disinformation actually targets voters. In the case of elections, actors both foreign and domestic are trying to influence whether or not you as an individual vote, and for whom to cast your ballot. The effort goes farther than elections: it is about the information on whether to vaccinate children or boycott the NFL. What started with foreign adversaries now includes domestic groups, all fighting for control over what you believe to be true.

The report also recognizes that the preservation of democratic and economic institutions in the digital era will ultimately depend on efforts to control disinformation by government and the various platforms on which it is disseminated. Since the nature of the necessary action is not yet clear–so far as I can tell, we don’t have a clue how to accomplish this–Brookings says that the general public needs to make itself less susceptible, and its report offers four ways to accomplish that.

You’ll forgive me if I am skeptical of the ability/desire of most Americans to follow their advice, but for what it is worth, here are the steps they advocate:

Know your algorithm
Get to know your own social media feed and algorithm, because disinformation targets us based on our online behavior and our biases. Platforms cater information to you based on what you stop to read, engage with, and send to friends. This information is then accessible to advertisers and can be manipulated by those who know how to do so, in order to target you based on your past behavior. The result is we are only seeing information that an algorithm thinks we want to consume, which could be biased and distorted.

Retrain your newsfeed
Once you have gotten to know your algorithm, you can change it to start seeing other points of view. Repeatedly seek out reputable sources of information that typically cater to viewpoints different than your own, and begin to see that information occur in your newsfeed organically.

Scrutinize your news sources
Start consuming information from social media critically. Social media is more than a news digest—it is social, and it is media. We often scroll through passively, absorbing a combination of personal updates from friends and family—and if you are among the two-thirds of Americans who report consuming news on social media—you are passively scrolling through news stories as well. A more critical eye to the information in your feed and being able to look for key indicators of whether or not news is timely and accurate, such as the source and the publication date, is incredibly important.

Consider not sharing
Finally, think before you share. If you think that a “news” article seems too sensational or extreme to be true, it probably is. By not sharing, you are stopping the flow of disinformation and falsehoods from getting across to your friends and network. While the general public cannot be relied upon to solve this problem alone, it is imperative that we start doing our part to stop this phenomenon. It is time to stop waiting for someone to save us from disinformation, and to start saving ourselves.

All good advice. Why do I think the people who most need to follow it, won’t?

Cue the Censors….

Remember the chant from our childhood– “Sticks and stones may break my bones, but words will never hurt me”?

Of course, that has never been true; words can and do deeply wound. But the message of the chant is nonetheless important: just as politics is “warfare by another name”–and infinitely preferable, since politics at least lets us live to fight another day–discussion and debate and even name-calling are preferable to physical attacks.

Furthermore, the notion that robust speech and debate are an essential element of the search for truth is enshrined in the Free Speech clause of the Constitution’s First Amendment. Freedom not just for ideas with which we agree, but freedom even–perhaps especially–for the thought that we hate, as Justice Holmes memorably put it.

And yet, if there is one constant through American history, it is the urge to suppress ideas that offend some person or faction. Pick up any newspaper or visit any news site, and there will be reports on efforts to censor. Two recent examples:

The Kansas State Senate on Wednesday passed S.B. 56, with twenty-six Republican senators supporting the measure, and six Republicans and eight Democrats opposing. The bill is ostensibly designed to protect students by making it illegal to display or present material that is “harmful to minors,” such as pornography.

But the broad categorizations and vague language have caused concern among teachers and free speech advocates about what will and won’t be policed.

Of course, what I think is “harmful to minors” may be rather different from what you think is harmful.

Censorship efforts are often accompanied by pious expressions of concern for children; other times, however, it is very clear that opponents of particular ideas simply want to suppress those ideas.

A Pennsylvania transit system permitted churches to advertise on the sides of its buses but refused to allow a group that doesn’t believe in God to place an ad containing the word “atheists,” fearing it would offend riders, according to a federal lawsuit filed Tuesday.

The County of Lackawanna Transit System repeatedly rejected the ads sought by the Northeastern Pennsylvania Freethought Society, telling the group it doesn’t permit advertising space to be used as a forum for public debate. The transit system also told the group its ad might alienate riders and hurt revenue, according to the lawsuit, filed in Scranton.

The transit system allowed several churches — as well as a political candidate and a blog that linked to anti-Semitic, Holocaust denial and white supremacist websites — to advertise before the Freethought Society first tried placing its ad in 2012, the suit said.

I don’t suppose it occurs to the censors that when you demonstrate fear of an opposing idea, you are simply highlighting the weakness of your own position….


Why Censorship Doesn’t Work

A former student sent me the following email:

Plans are proceeding for the November 5 “Read-In” of writings by Howard Zinn at Purdue University, co-sponsored by the Indiana affiliate of the American Federation of Teachers among other groups. Parallel events at several other Indiana schools are planned. Information is available from Prof. Tithi Bhattacharya at tbhattac@gmail.com.

I can’t help wondering how many people who will attend this event had ever heard of Howard Zinn prior to Mitch Daniels’ ill-advised effort to suppress his work.

It so often works that way.

When my middle son was a student at the University of Cincinnati, the local prosecutor tried to close down an “obscene” exhibit of Robert Mapplethorpe’s photographs. Students and residents who ordinarily wouldn’t have gone across the street to attend an art exhibit lined up to see this one. My son told me the lines stretched for blocks.

The phenomenon isn’t limited to books and art–according to a couple of film histories I’ve read, at times when movie attendance was dwindling, filmmakers responded by producing more explicit films and hoping that the howls of prudery from the “usual sources” would increase attendance.

You’d think the busybodies would learn.

Censorship may or may not be unconstitutional (depending upon whether government is doing it), but it’s rarely effective. Quite the contrary. If there’s material you don’t want people to see or hear or read, your best bet is just to ignore it.

I wonder if Mitch has figured that out yet.