Section 230

These are hard times for free speech advocates. The Internet–with its capacity for mass distribution of lies, misinformation, bigotry and incitement to violence–cries out for reform, but it is not apparent (certainly not to me) what sort of reforms might curb the dangers without also stifling free expression.

One approach is focused on a law that is older than Google: Section 230 of the Communications Decency Act. 

What is Section 230? Is it really broken? Can it be fixed without inadvertently doing more damage? 

At its heart, the law is just 26 words that allow online platforms to make rules about what people can or can’t post without being held legally responsible for that content. (There are some exceptions, but not many.) As a recent newsletter on technology put it (sorry, for some reason the link doesn’t work),

If I accuse you of murder on Facebook, you might be able to sue me, but you can’t sue Facebook. If you buy a defective toy from a merchant on Amazon, you might be able to take the seller to court, but not Amazon. (There is some legal debate about this, but you get the gist.)

The law created the conditions for Facebook, Yelp and Airbnb to give people a voice without being sued out of existence. But now Republicans and Democrats are asking whether the law gives tech companies either too much power or too little responsibility for what happens under their watch.


Republicans mostly worry that Section 230 gives internet companies too much power to suppress online debate and discussion, while Democrats mostly worry that it lets those companies ignore or even enable dangerous incitements and/or illegal transactions. 

The fight over Section 230 is really a fight over the lack of control exercised by Internet giants like Facebook and Twitter. In far too many situations, the law allows people to lie online without consequence–let’s face it, the high school kid who is spreading lewd rumors about a girl who turned him down for a date isn’t likely to be sued, no matter how damaging, reprehensible and untrue his posts may be. The recent defamation suits brought by the voting machine manufacturers were salutary and satisfying, but most people harmed by bigotry and disinformation online are not in a position to pursue such remedies.

The question being debated among techies and lawyers is whether Section 230 is too protective; whether it reduces incentives for platforms like Facebook and Twitter to make and enforce stronger measures that would be more effective in curtailing obviously harmful rhetoric and activities. 

Several proposed “fixes” are currently being considered. The Times article described them.


Fix-it Plan 1: Raise the bar. Some lawmakers want online companies to meet certain conditions before they get the legal protections of Section 230.

One example: A congressional proposal would require internet companies to report to law enforcement when they believe people might be plotting violent crimes or drug offenses. If the companies don’t do so, they might lose the legal protections of Section 230 and the floodgates could open to lawsuits.

Facebook this week backed a similar idea, which proposed that it and other big online companies would have to have systems in place for identifying and removing potentially illegal material.

Another proposed bill would require Facebook, Google and others to prove that they hadn’t exhibited political bias in removing a post. Some Republicans say that Section 230 requires websites to be politically neutral. That’s not true.

Fix-it Plan 2: Create more exceptions. One proposal would restrict internet companies from using Section 230 as a defense in legal cases involving activity like civil rights violations, harassment and wrongful death. Another proposes letting people sue internet companies if child sexual abuse imagery is spread on their sites.

Also in this category are legal questions about whether Section 230 applies to the involvement of an internet company’s own computer systems. When Facebook’s algorithms helped circulate propaganda from Hamas, as David detailed in an article, some legal experts and lawmakers said that Section 230 legal protections should not have applied and that the company should have been held complicit in terrorist acts.


Slate has an article describing all of the proposed changes to Section 230.

I don’t have a firm enough grasp of the issues involved–let alone the technology needed to accomplish some of the proposed changes–to have a favored “fix” to Section 230.

I do think that this debate foreshadows others that will arise in a world where massive international companies–online and not– in many cases wield more power than governments. Constraining these powerful entities will require new and very creative approaches.

Falsely Shouting “Fire” In The Digital Theater

Tom Wheeler is one of the savviest observers of the digital world.

Now at the Brookings Institution, Wheeler headed up the FCC during the Obama administration, and recently authored an essay titled “The Consequences of Social Media’s Giant Experiment.” That essay–like many of his other publications–considered the impact of legally-private enterprises that have had a huge public impact.

The “experiment” Wheeler considers is the shutdown of Trump’s disinformation megaphones: most consequential, of course, were the Facebook and Twitter bans of Donald Trump’s accounts, but it was also important that Parler–a site for rightwing radicalization and conspiracy theories–was effectively shut down for a time by Amazon’s decision to cease hosting it, and decisions by both Android and Apple to remove it from their app stores. (I note that, since Wheeler’s essay, Parler has found a new hosting service–and it is Russian owned.)

These actions are better late than never. But the proverbial horse has left the barn. These editorial and business judgements do, however, demonstrate how companies have ample ability to act conscientiously to protect the responsible use of their platforms.

Wheeler addresses the conundrum that has been created by a subsection of the law that insulates social media companies from responsibility for making the sorts of editorial judgements that publishers of traditional media make every day. As he says, these 26 words are the heart of the issue: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

As he points out,

If you are insulated from the consequences of your actions and make a great deal of money by exploiting that insulation, then what is the incentive to act responsibly?…

The social media companies have put us in the middle of a huge and explosive lab experiment where we see the toxic combination of digital technology, unmoderated content, lies and hate. We now have the answer to what happens when these features and large profits are blended together in a connected world. The result not only has been unproductive for civil discourse, it also represents a danger to democratic systems and effective problem-solving.

Wheeler repeats what most observers of our digital world have recognized: these platforms have the technological capacity to exercise the same sort of responsible moderation that we expect of traditional media. What they lack is the will–because more responsible moderating algorithms would eat into their currently large–okay, obscene–profits.

The companies’ business model is built around holding a user’s attention so that they may display more paying messages. Delivering what the user wants to see, the more outrageous the better, holds that attention and rings the cash register.

Wheeler points out that we have mischaracterized these platforms–they are not, as they insist, tech enterprises. They are media, and should be required to conform to the rules and expectations that govern media sources. He has other suggestions for tweaking the rules that govern these platforms, and they are worth consideration.

That said, the rise of these digital giants creates a bigger question and implicates what is essentially a philosophical dilemma.

The U.S. Constitution was intended to limit the exercise of power; it was crafted at a time in human history when governments held a clear monopoly on that power. That is arguably no longer the case–and it isn’t simply social media giants. Today, multiple social and economic institutions have the power to pose credible threats both to individual liberty and to social cohesion. How we navigate the minefield created by that reality–how we restrain the power of theoretically “private” enterprises– will determine the life prospects of our children and grandchildren.

At the very least, we need rules that will limit the ability of miscreants to falsely shout fire in our digital environments.


Information Silos And The First Amendment

The First Amendment contemplates and protects a “marketplace of ideas.” We have no precedent for an information environment in which there is no marketplace–no “agora” where different ideas and perspectives contend with each other for acceptance.

What we have instead are information “silos”–a column in the New York Times recently quoted Robert Post, a Yale professor, for the observation that people have always been crazy, but the internet has allowed them to find each other.

In those silos, they talk only to each other.

Social media has enabled the widespread and instantaneous transmission of lies in the service of political gain, and we are seeing the results. The question is: what should we do?

One set of scholars has concluded that the damage being done by misinformation and propaganda outweighs the damage of censorship. Rick Hasen, perhaps the most pre-eminent scholar of election law, falls into that category:

Change is urgent to deal with election pathologies caused by the cheap speech era, but even legal changes as tame as updating disclosure laws to apply to online political ads could face new hostility from a Supreme Court taking a libertarian marketplace-of-ideas approach to the First Amendment. As I explain, we are experiencing a market failure when it comes to reliable information voters need to make informed choices and to have confidence in the integrity of our electoral system. But the Court may stand in the way of necessary reform.

I don’t know what Hasen considers “necessary reform,” but I’m skeptical.

I have always been a First Amendment purist, and I still agree with the balance struck by the Founders, who understood that–as pernicious and damaging as bad ideas can be–allowing government to determine which ideas get voiced is likely to be much more dangerous. (As a former ACLU colleague memorably put it, “Poison gas is a great weapon until the wind shifts.”)

That said, social media platforms aren’t government. Like brick-and-mortar private businesses, they can insist on certain behaviors by their customers. And like other private businesses, they can and should be regulated in the public interest. (At the very least, they should be required to apply their own rules consistently. People expressing concern/outrage over Twitter’s ban of Trump should be reminded that he would have encountered that ban much earlier had he been an ordinary user. Trump had flouted Twitter and Facebook rules for years.)

The Times column suggests we might learn from European approaches to issues of speech, including falsehoods and hate speech. Hate speech can only be banned in the U.S. if it is intended to incite imminent violence and is actually likely to do so. Europeans have decided that hate speech isn’t valuable public discourse– that racism isn’t an idea; it’s a form of discrimination.

The underlying philosophical difference here is about the right of the individual to self-expression. Americans value that classic liberal right very highly — so highly that we tolerate speech that might make others less equal. Europeans value the democratic collective and the capacity of all citizens to participate fully in it — so much that they are willing to limit individual rights.

The First Amendment was crafted for a political speech environment that was markedly different from today’s, as Tim Wu has argued. Government censorship was then the greatest threat to free speech. Today, those, including Trump, “who seek to control speech use new methods that rely on the weaponization of speech itself, such as the deployment of ‘troll armies,’ the fabrication of news, or ‘flooding’ tactics that humiliate, harass, discourage, and even destroy targeted speakers.”

Wu argues that Americans can no longer assume that the First Amendment is an adequate guarantee against malicious speech control and censorship. He points out that the marketplace of ideas has become corrupted by technologies “that facilitate the transmission of false information.”

American courts have long held that the best test of truth is the power of an idea to get itself accepted in the competition that characterizes a marketplace. They haven’t addressed what happens when there is no longer a functioning market–when citizens confine their communicative interactions to sites that depend for their profitability on confirming the biases of carefully targeted populations.

I certainly don’t think the answer is to dispense with–or water down– the First Amendment. But that Amendment was an effort to keep those with power from controlling information. In today’s information environment, platforms like Twitter, Facebook, etc. are as powerful and influential as government. Our challenge is to somehow rein in intentional propaganda and misinformation without throwing the baby out with the bathwater.

Any ideas how we do that?


Facebook And False Equivalence

Is it just me, or do the months between now and November seem interminable?

In the run-up to what will be an existentially-important decision for America’s future, we are living through an inconsistent, contested and politicized quarantine; mammoth protests triggered by a series of racist police murders of unarmed black men, and their cynical escalation into riots by advocates of race war; and daily displays of worsening insanity from the White House–including, but certainly not limited to, America’s withdrawal from the World Health Organization in the middle of a pandemic, followed by a phone call in which our “eloquent” President called governors “weak” and “jerks” for not waging war on their own citizens.

And in the midst of it all, a pissing match between the Psychopath-in-Chief and Twitter, which has finally–belatedly–decided to label some of Trump’s incendiary and inaccurate tweets for what they are.

We can only hope this glimmer of responsibility from Twitter continues. The platform’s unwillingness to apply the same rules to Trump that they apply to other users hasn’t just been cowardly–it has given his constant lies a surface plausibility and normalized his bile. We should all applaud Twitter’s belated recognition of its responsibility.

Then, of course, there’s Facebook.

It isn’t that Mark Zuckerberg is unaware of the harms being caused by Facebook’s current algorithms. Numerous media outlets have reported on the company’s internal investigations into the way those algorithms encourage division and distort political debate. In her column in last Sunday’s New York Times, Maureen Dowd reported:

The Wall Street Journal had a chilling report a few days ago that Facebook’s own research in 2018 revealed that “our algorithms exploit the human brain’s attraction to divisiveness. If left unchecked,” Facebook would feed users “more and more divisive content in an effort to gain user attention & increase time on the platform.”

Mark Zuckerberg shelved the research.

The reasons are both depressing and ironic: in addition to concerns that less vitriol might mean users spending less time on the site, Zuckerberg understands that reducing the spread of untrue, divisive content would require eliminating substantially more material from the right than the left, opening the company to accusations of bias against conservatives.

Similar fears are said to be behind Facebook’s unwillingness to police political speech in advertisements and posts.

Think about it: Facebook knows that its platform is enormously influential. It knows that the Right trades in conspiracy theories and intentional misinformation to a much greater extent than the Left, skewing the information landscape in dangerous ways. But for whatever reason–in order to insulate the company from regulation, or to curry favor with wealthy investors, or to escape the anger of the Breitbarts and Limbaughs–not to mention Trump–it has chosen to “allow people to make their own decisions.”

The ubiquity of social media presents lawmakers with significant challenges. Despite all the blather from the White House and the uninformed hysteria of ideologues, the issue isn’t censorship or freedom of speech–as anyone who has taken elementary civics knows, the Bill of Rights prohibits government from censoring communication. Facebook and Twitter and other social media sites aren’t government. For that matter, under current law, they aren’t even considered “publishers” who could be held accountable for whatever inaccurate drivel a user posts.

That means social media companies have the right to dictate their own terms of use. There is no legal impediment to Facebook or Twitter “censoring” posts they consider vile, obscene or untrue. (Granted, there are significant practical and marketing concerns involved in such an effort.) On Monday, reports emerged that Facebook’s own employees–including several in management–are clamoring for the platform to emulate Twitter’s new approach.

There have always been cranks and liars, racists and political propagandists. There haven’t always been easily accessible, worldwide platforms through which they could connect with similarly twisted individuals and spread their poisons. One of the many challenges of our technological age is devising constitutionally-appropriate ways to regulate those platforms.

If Mark Zuckerberg is unwilling to make Facebook at least a minimally-responsible overseer of our national conversation–if he and his board cannot make and enforce reasonable rules about veracity in posts–a future government will undoubtedly do it for them, something that could set a dangerous precedent.

Refusing to be responsible– supporting a false equivalency that is tearing the country apart– is a much riskier strategy than Zuckerberg seems to recognize.

On the other hand, it finally seems to be dawning on Jack Dorsey, CEO of Twitter, that (as Dowd put it in her column) “Trump and Twitter were a match made in hell.”


Weaponizing Social Media

The already ample commentary directed at our “Tweeter-in-Chief” grew more copious–and pointed–in the wake of Trump’s “Morning Joe” attacks and the bizarre visual of him “body slamming” CNN.

John Cassidy’s essay in the New Yorker was consistent with the general tenor of those reactions, especially his conclusion:

Where America, until recently, had at its helm a Commander-in-Chief whom other countries acknowledged as a global leader and a figure of stature even if they didn’t like his policies, it now has something very different: an oafish Troll-in-Chief who sullies his office daily.

Most of the Cassidy piece focused on Trump’s addiction to–and childish use of–Twitter, and it is hard to disagree with his observation that the content of these messages is “just not normal behavior.” Thoughtful people, those not given to hyperbole or ad hominem attacks, are increasingly questioning Trump’s mental health.

The paragraph that struck me, however, was this one, because it raises an issue larger than the disaster in the White House:

Trump’s online presence isn’t something incidental to his Presidency: it is central to it, and always has been. If the media world were still dominated by the major broadcast networks and a handful of big newspapers, Trump would most likely still be hawking expensive apartments, building golf courses, and playing himself in a reality-television series. It was the rise of social media, together with the proliferation of alternative right-wing news sites, that enabled Trump to build a movement of angry, alienated voters and, ultimately, go from carnival barker to President.

Unpack, for a moment, the observation that social media and alternative “news” made Trump possible.

John Oliver recently aired a worrisome segment about Sinclair Broadcasting, a “beneath the radar” behemoth which is on the verge of a $3.9 billion merger with Tribune Media. That merger would significantly consolidate ownership of local television outlets, including one in Indianapolis. Oliver showed clips demonstrating Sinclair’s extreme right-wing bias–bias that, as Oliver pointed out– is in the same category as Fox News and Breitbart.

It’s damaging enough when radio talk shows, television networks and internet sites peddle falsehoods and conspiracy theories. What truly “weaponizes” disinformation and propaganda, however, is social media, where Facebook “friends” and Twitter followers endlessly repeat even the most obvious fantasies; as research has shown, that repetition can make even people who are generally rational believe very irrational things.

When NASA has to issue an official denial that it is operating a child slave colony on Mars, we’re in unprecedented times.

I don’t have research to confirm or rebut my theory, but I believe that Americans’ loss of trust in our government–in our institutions and those elected and/or appointed to manage them–has made many people receptive to “alternative” explanations for decisions they may not like or understand. It couldn’t be that the people making that decision or crafting that legislation simply see the situation differently. It couldn’t be that public servant A is simply wrong; or that those making decision B had access to information we don’t have. No–they must be getting paid off. They must be working with other enemies of righteousness in a scheme to [fill in the blank].

No wonder it is so difficult to get good people to run for public office. In addition to good faith disagreements about their performance, they are likely to be accused of corrupt motives.

The other day, I struck up a discussion with a perfectly nice woman–a former schoolteacher. The talk turned to IPS, and she was complimentary about the schools with which she was familiar. She was less complimentary about the district’s charter schools–a position I understand. (It’s a mixed bag. Some are excellent, some aren’t, and they certainly aren’t a panacea for what ails education.)

All perfectly reasonable.

Then she confided to me that the Superintendent “gets a bonus” for every contract he signs with a Charter school. In other words, it’s all about the money. It couldn’t be that the school board and superintendent want the best for the children in the district and–right or wrong– simply see things differently.

Our daughter is on that school board, and I know for a fact that the Superintendent does NOT get bonuses for contracting with charter schools. When I shared this exchange with our daughter, she regaled me with a number of other appalling, disheartening accusations that have grown and festered on social media.

I don’t have a remedy for our age of conspiracy. Censorship is clearly not an answer. (In the long run, education can help.) But if we don’t devise a strategy for countering radio and television propaganda and the fever swamps of social media–the instruments that gave us Trump–we’ll be in an increasingly dangerous world of hurt.