A Compelling Read

Jonathan Haidt is a well-regarded scholar who has written a compelling article for The Atlantic, titled “Why The Past Ten Years Of American Life Have Been Uniquely Stupid.” He begins by referencing the biblical story of Babel:

What would it have been like to live in Babel in the days after its destruction? In the Book of Genesis, we are told that the descendants of Noah built a great city in the land of Shinar. They built a tower “with its top in the heavens” to “make a name” for themselves. God was offended by the hubris of humanity and said:

Look, they are one people, and they have all one language; and this is only the beginning of what they will do; nothing that they propose to do will now be impossible for them. Come, let us go down, and confuse their language there, so that they will not understand one another’s speech.

The text does not say that God destroyed the tower, but in many popular renderings of the story he does, so let’s hold that dramatic image in our minds: people wandering amid the ruins, unable to communicate, condemned to mutual incomprehension.

Babel, according to Haidt, is not a story about tribalism. Instead, he insists it’s a story about the “fragmentation of everything.” And he makes a point that is often overlooked: this fragmentation isn’t just happening between those who see themselves as red or blue, but within both left and right, and “within universities, companies, professional associations, museums, and even families.”

How have we come to this point? Haidt blames social media. The early Internet seemed to promise an expansion of cooperation and global democracy.

Myspace, Friendster, and Facebook made it easy to connect with friends and strangers to talk about common interests, for free, and at a scale never before imaginable. By 2008, Facebook had emerged as the dominant platform, with more than 100 million monthly users, on its way to roughly 3 billion today. In the first decade of the new century, social media was widely believed to be a boon to democracy. What dictator could impose his will on an interconnected citizenry? What regime could build a wall to keep out the internet?

The high point of techno-democratic optimism was arguably 2011, a year that began with the Arab Spring and ended with the global Occupy movement. That is also when Google Translate became available on virtually all smartphones, so you could say that 2011 was the year that humanity rebuilt the Tower of Babel. We were closer than we had ever been to being “one people,” and we had effectively overcome the curse of division by language. For techno-democratic optimists, it seemed to be only the beginning of what humanity could do.

Then, he writes, it all fell apart.

Haidt references the three major forces that social scientists have identified as collectively necessary to the cohesion of successful democracies: social capital (defined as extensive social networks with high levels of trust), strong institutions, and shared stories. And he points out that social media has weakened all three, as the platforms morphed from a new form of communication into a mechanism for performing–for what Haidt characterizes as the management of one’s “personal brand.” Communication became a method for impressing others, rather than a sharing that might deepen friendships and understanding. He identifies the introduction of the “like” and “share” buttons–which allowed the platforms to gauge users’ engagement–as a critical turning point.

As a social psychologist who studies emotion, morality, and politics, I saw this happening too. The newly tweaked platforms were almost perfectly designed to bring out our most moralistic and least reflective selves. The volume of outrage was shocking.

I encourage you to click through and read the entire, lengthy article, but if you don’t have time to do so, I’ll end this recap with the paragraph that struck me as a description of the most troubling consequences of our current use of these social media platforms.

It’s not just the waste of time and scarce attention that matters; it’s the continual chipping-away of trust. An autocracy can deploy propaganda or use fear to motivate the behaviors it desires, but a democracy depends on widely internalized acceptance of the legitimacy of rules, norms, and institutions. Blind and irrevocable trust in any particular individual or organization is never warranted. But when citizens lose trust in elected leaders, health authorities, the courts, the police, universities, and the integrity of elections, then every decision becomes contested; every election becomes a life-and-death struggle to save the country from the other side.

Haidt’s very troubling conclusion: If we do not make major changes soon, then our institutions, our political system, and our society may collapse.

I’m very afraid he’s right.

Comments

Social Media, Tribalism, And Craziness

If we are ever going to emerge from pandemic hell or semi-hell, we have to get a handle on two of the most dangerous aspects of contemporary life: the use of social media to spread disinformation, and the politicization of science–including, especially now, medical science.

Talking Points Memo recently ran a column (behind the paywall, so no link–sorry) from an expert in social media. That column made several points:

  • Fake news spreads faster than verified and validated news from credible sources. We also know that items and articles connecting vaccines and death are among the content people engage with most.
  • The algorithms used by social media platforms are primed for engagement, creating a “rabbit-hole effect” that pushes users who click on anti-vaccine messages toward more anti-vaccine content. The people spreading medical misinformation know this, and know how to exploit the weaknesses of the engagement-driven systems on social media platforms.
  • “Social media is being manipulated on an industrial scale, including by a Russian campaign pushing disinformation about COVID-19 vaccines.” Research tells us that people who rely on Facebook for their news about the coronavirus are less likely to be vaccinated than people who get their coronavirus news from any other source.

According to the column, the problem is exacerbated by the way in which vaccine-related misinformation fits into people’s preexisting beliefs.

I was struck by the observation that acceptance of wild and obvious inaccuracies requires a certain “pre-existing” belief system. That, not surprisingly, gets us to America’s current, extreme political tribalism.
 
Let me share some very troubling data: To date, some 86% of Democrats have received at least one COVID-19 vaccine shot–compared with only 45% of Republicans. A Washington Post survey found that only 6% of Democratic respondents reported an intent to decline the vaccine, while 47% of Republicans said they would refuse to be inoculated. 

Not to put too fine a point on it, this is insane.

Aside from people with genuine medical conditions that make vaccination unwise, the various justifications offered for declining the vaccine range from the hypocritical (“pro-life” politicians suddenly defending the right of individuals to control their own bodies) to the legally inaccurate (“freedom” has never included the right to endanger others—if it did, we’d have the “freedom” to drive drunk and ignore red lights) to the conspiratorial (COVID is a “hoax” perpetrated by those hated liberals).

Now, America has always had citizens willing to make decisions that endanger others; what is truly mystifying, however, is why such people overwhelmingly inhabit red states—including Indiana.

Every state with large numbers of people who have refused vaccination is predominantly Republican. In several of those states, hospitalizations of unvaccinated COVID patients threaten to overwhelm health care systems. New York, a blue state, has five COVID patients hospitalized per 100,000 people, while red state Florida, where Governor Ron DeSantis has actually barred businesses from requiring patrons to show proof of vaccination, has 34 per 100,000.

DeSantis’ Trumpian approach is an excellent example of just how dramatically the GOP has departed from the positions that used to define it. Whatever happened to the Republican insistence that business owners have the right to determine the rules for their own employees and patrons? (They still give lip service to those rules when the issue is whether to serve LGBTQ customers, but happily abandon them when the decision involves the health and safety of those same patrons.)

And what happened to the GOP’s former insistence on patriotism? Surely protecting others in one’s community from a debilitating and frequently deadly disease is patriotic.

Tribalism has clearly triumphed over logic and self-interest. As Amanda Marcotte recently wrote in Salon,

getting the vaccine would be an admission for conservatives that they were wrong about COVID-19 in the first place, and that liberals were right. And for much of red-state America, that’s apparently a far worse fate than death.

Making vaccine refusal a badge of political affiliation makes absolutely no sense. It does, however, correspond to the precipitous decline of rationality in what was once the “Grand Old Party”—a party now characterized by the anti-science, anti-logic, anti-intellectualism of officials like Marjorie Taylor Greene, Lauren Boebert, Jim Jordan, Paul Gosar, and Louie Gohmert (who was memorably described by Charlie Pierce as “the dumbest mammal to enter a legislative chamber since Caligula’s horse”).

These mental giants (cough, cough) are insisting that vaccination will “magnetize” the body and make keys stick to you, and that Bill Gates is sneaking “tracking chips” into the vaccine doses. (As a friend recently queried, don’t most of those people warning against “tracking devices” own cell phones?? Talk about tracking…)

Talk about buffoonery.

The problem is, these sad, deranged people are endangering the rest of us.

Comments

The Age Of Misinformation

Political scientists often study the characteristics and influence of those they dub “high information voters.” Although that cohort is relatively small, it accounts for a significant amount–probably a majority–of America’s political discourse.

Research has suggested that these more informed voters, who follow politics closely, are just as likely–perhaps even more likely–to exhibit confirmation bias as are Americans less invested in the daily political news. But their ability to spread both information and misinformation is far greater than it was before the Internet and the ubiquity of social media.

As Max Fisher recently wrote in a column for the New York Times, 

There’s a decent chance you’ve had at least one of these rumors, all false, relayed to you as fact recently: that President Biden plans to force Americans to eat less meat; that Virginia is eliminating advanced math in schools to advance racial equality; and that border officials are mass-purchasing copies of Vice President Kamala Harris’s book to hand out to refugee children.

All were amplified by partisan actors. But you’re just as likely, if not more so, to have heard it relayed from someone you know. And you may have noticed that these cycles of falsehood-fueled outrage keep recurring.

Fisher attributes this phenomenon to a number of factors, but especially to an aspect of identity politics; we live in an age where political identity has become central to the self-image held by many Americans.

Fisher cites research attributing the prevalence of misinformation to three main elements of our time. Perhaps the most important of the three is a social environment in which individuals feel the need for what he terms “in-grouping,” and I would call tribalism — identification with like-minded others as a source of strength and (especially) superiority. As he says,

In times of perceived conflict or social change, we seek security in groups. And that makes us eager to consume information, true or not, that lets us see the world as a conflict putting our righteous ingroup against a nefarious outgroup.

American political polarization promotes the sharing of disinformation. The hostility between Red and Blue America feeds a pervasive distrust, and when people are distrustful, they become much more prone to engage in and accept rumor and falsehood. Distrust also encourages people to see the world as “us versus them”–and that’s a world in which we are much more apt to believe information that bolsters “us” and denigrates “them.” We know that individuals with more polarized views are more likely to believe falsehoods.

And of course, the emergence of high-profile political figures who prey on these tribal instincts exacerbates the situation.

Then there is the third factor — a shift to social media, which is a powerful outlet for composers of disinformation, a pervasive vector for misinformation itself and a multiplier of the other risk factors.

“Media has changed, the environment has changed, and that has a potentially big impact on our natural behavior,” said William J. Brady, a Yale University social psychologist.

“When you post things, you’re highly aware of the feedback that you get, the social feedback in terms of likes and shares,” Dr. Brady said. So when misinformation appeals to social impulses more than the truth does, it gets more attention online, which means people feel rewarded and encouraged for spreading it.

It isn’t surprising that people who get positive feedback when they post inflammatory or false statements are more likely to do so again–and again. In one particularly troubling analysis, researchers found that when a fact-check revealed that information in a post was wrong, the response of partisans wasn’t to revise their thinking or get upset with the purveyor of the lie.

Instead, it was to attack the fact checkers.

“The problem is that when we encounter opposing views in the age and context of social media, it’s not like reading them in a newspaper while sitting alone,” the sociologist Zeynep Tufekci wrote in a much-circulated MIT Technology Review article. “It’s like hearing them from the opposing team while sitting with our fellow fans in a football stadium. Online, we’re connected with our communities, and we seek approval from our like-minded peers. We bond with our team by yelling at the fans of the other one.”

In an ecosystem where that sense of identity conflict is all-consuming, she wrote, “belonging is stronger than facts.”

We’re in a world of hurt…

Comments

Mandating Fairness

Whenever one of my posts addresses America’s problem with disinformation, at least one commenter will call for re-institution of the Fairness Doctrine–despite the fact that, each time, another commenter (usually a lawyer) will explain why that doctrine wouldn’t apply to social media or most other Internet sites causing contemporary mischief.

The Fairness Doctrine was contractual. Government owned the broadcast channels that were being auctioned for use by private media companies, and thus had the right to require certain undertakings from responsive bidders. In other words, in addition to the payments being tendered, bidders had to promise to operate “in the public interest,” and the public interest included an obligation to give contending voices a fair hearing.

The government couldn’t have passed a law requiring newspapers and magazines to be “fair,” and it cannot legally require fair and responsible behavior from cable channels and social media platforms, no matter how much we might wish it could.

So–in this era of QAnon and Fox News and Rush Limbaugh clones–where does that leave us?

The Brookings Institution, among others, has wrestled with the issue.

The violence of Jan. 6 made clear that the health of online communities and the spread of disinformation represents a major threat to U.S. democracy, and as the Biden administration takes office, it is time for policymakers to consider how to take a more active approach to counter disinformation and form a public-private partnership aimed at identifying and countering disinformation that poses a risk to society.

Brookings says that a non-partisan public-private effort is required because disinformation crosses platforms and transcends political boundaries. They recommend a “public trust” that would provide analysis and policy proposals intended to defend democracy against the constant stream of disinformation and the illiberal forces at work disseminating it. It would identify emerging trends and methods of sharing disinformation, and would support data-driven initiatives to improve digital media literacy.

Frankly, I found the Brookings proposal unsatisfactorily vague, but there are other, more concrete proposals for combating online and cable propaganda. Dan Mullendore pointed to one promising tactic in a comment the other day. Fox News’ income isn’t–as we might suppose–dependent mostly on advertising; significant sums come from cable fees. And one reason those fees are so lucrative is that Fox gets bundled with other channels, meaning that many people pay for Fox who wouldn’t if it weren’t a package deal. A few days ago, on Twitter, a lawyer named Pam Keith pointed out that a simple regulatory change ending bundling would force Fox and other channels to compete for customers’ eyes, ears and pocketbooks.

Then there’s the current debate over Section 230 of the Communications Decency Act, with many critics advocating its repeal, and others, like the Electronic Frontier Foundation, defending it.

Section 230 says that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. § 230). In other words, online intermediaries that host or republish speech are protected against a range of laws that might otherwise be used to hold them legally responsible for what others say and do. The protected intermediaries include not only regular Internet Service Providers (ISPs), but also a range of “interactive computer service providers,” including basically any online service that publishes third-party content. Though there are important exceptions for certain criminal and intellectual property-based claims, CDA 230 creates a broad protection that has allowed innovation and free speech online to flourish.

Most observers believe that an outright repeal of Section 230 would destroy social networks as we know them (the linked article explains why, as do several others), but there is a middle ground between total repeal and naive calls for millions of users to voluntarily leave platforms that fail to block hateful and/or misleading posts.

Fast Company has suggested that middle ground.

One possibility is that the current version of Section 230 could be replaced with a requirement that platforms use a more clearly defined best-efforts approach, requiring them to use the best technology and establishing some kind of industry standard they would be held to for detecting and mediating violating content, fraud, and abuse. That would be analogous to standards already in place in the area of advertising fraud….

Another option could be to limit where Section 230 protections apply. For example, it might be restricted only to content that is unmonetized. In that scenario, you would have platforms displaying ads only next to content that had been sufficiently analyzed that they could take legal responsibility for it. 

A “one size fits all” reinvention of the Fairness Doctrine isn’t going to happen. But that doesn’t mean we can’t make meaningful, legal improvements that would make a real difference online.

Comments

Falsely Shouting “Fire” In The Digital Theater

Tom Wheeler is one of the savviest observers of the digital world.

Now at the Brookings Institution, Wheeler headed the FCC during the Obama administration, and recently authored an essay titled “The Consequences of Social Media’s Giant Experiment.” That essay–like many of his other publications–considered the outsized public impact of legally private enterprises.

The “experiment” Wheeler considers is the shutdown of Trump’s disinformation megaphones: most consequential, of course, were the Facebook and Twitter bans of Donald Trump’s accounts, but it was also important that Parler–a site for right-wing radicalization and conspiracy theories–was effectively shut down for a time by Amazon’s decision to cease hosting it, and by the decisions of both Google and Apple to remove it from their app stores. (I note that, since Wheeler’s essay, Parler has found a new hosting service–and it is Russian-owned.)

These actions are better late than never. But the proverbial horse has left the barn. These editorial and business judgements do, however, demonstrate how companies have ample ability to act conscientiously to protect the responsible use of their platforms.

Wheeler addresses the conundrum created by a subsection of the law that insulates social media companies from responsibility for making the sorts of editorial judgements that publishers of traditional media make every day. As he says, these 26 words are the heart of the issue: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

As he points out,

If you are insulated from the consequences of your actions and make a great deal of money by exploiting that insulation, then what is the incentive to act responsibly?…

The social media companies have put us in the middle of a huge and explosive lab experiment where we see the toxic combination of digital technology, unmoderated content, lies and hate. We now have the answer to what happens when these features and large profits are blended together in a connected world. The result not only has been unproductive for civil discourse, it also represents a danger to democratic systems and effective problem-solving.

Wheeler repeats what most observers of our digital world have recognized: these platforms have the technological capacity to exercise the same sort of responsible moderation that we expect of traditional media. What they lack is the will–because more responsible moderating algorithms would eat into their currently large–okay, obscene–profits.

The companies’ business model is built around holding a user’s attention so that they may display more paying messages. Delivering what the user wants to see, the more outrageous the better, holds that attention and rings the cash register.

Wheeler points out that we have mischaracterized these platforms–they are not, as they insist, tech enterprises. They are media, and should be required to conform to the rules and expectations that govern media sources. He has other suggestions for tweaking the rules that govern these platforms, and they are worth consideration.

That said, the rise of these digital giants creates a bigger question and implicates what is essentially a philosophical dilemma.

The U.S. Constitution was intended to limit the exercise of power; it was crafted at a time in human history when governments held a clear monopoly on that power. That is arguably no longer the case–and it isn’t simply social media giants. Today, multiple social and economic institutions have the power to pose credible threats both to individual liberty and to social cohesion. How we navigate the minefield created by that reality–how we restrain the power of theoretically “private” enterprises– will determine the life prospects of our children and grandchildren.

At the very least, we need rules that will limit the ability of miscreants to falsely shout fire in our digital environments.

Comments