Our ‘Bespoke Realities’

A New York Times essay by David French recently considered the differences between belief in what he termed “confined” conspiracy theories and what he aptly labeled “bespoke realities.”

As French pointed out, lots of people have suspicions or doubts about official reports of phenomena like UFO sightings, or official explanations of events like the assassination of JFK, but those suspicions are limited to specific situations. As he says, that’s nothing new.

But in recent years I’ve encountered, in person and online, a phenomenon that is different from the belief or interest in any given conspiracy theory. People don’t just have strange or quirky ideas on confined subjects. They have entire worldviews rooted in a comprehensive network of misunderstandings and false beliefs.

And these aren’t what you’d call low-information voters. They’re some of the most politically engaged people I know. They consume news voraciously. They’re perpetually online. For them, politics isn’t just a hobby; in many ways, it’s a purpose.

What we are seeing these days is something different, and infinitely more troubling.

There is a fundamental difference between, on the one hand, someone who lives in the real world but also has questions about the moon landing and, on the other, a person who believes the Covid vaccines are responsible for a vast number of American deaths and Jan. 6 was an inside job and the American elite is trying to replace the electorate with new immigrant voters and the 2020 election was rigged and Donald Trump is God’s choice to save America.

These are not individuals who simply believe in one or another conspiracy theory. These are folks who’ve gone all the way down the rabbit hole. French adopts the term “bespoke reality” from his friend Renée DiResta. “Bespoke,” of course, is a word that we most often associate with tailors–usually British–who create clothing fashioned specifically for a given customer. The residents of French’s “bespoke realities” operate within a world created and maintained just for them, a world with “its own norms, media, trusted authorities and frameworks of facts.”

The essay took me back to Eli Pariser’s warning in his 2011 book, “The Filter Bubble.” Filter bubble was Pariser’s term for the informational environment produced by the algorithms that allow content to be personalized to each user–algorithms that bias or skew or limit the information an individual user sees on the internet. We all inhabit those information “bubbles” to a greater or lesser extent. As French wrote,

Combine vast choice with algorithmic sorting, and we now possess a remarkable ability to become arguably the most comprehensively, voluntarily and cooperatively misinformed generation of people ever to walk the earth. The terms “voluntarily” and “cooperatively” are key. We don’t live in North Korea, Russia or the People’s Republic of China. We’re drunk on freedom by comparison. We’re misinformed not because the government is systematically lying or suppressing the truth. We’re misinformed because we like the misinformation we receive and are eager for more.

The market is very, very happy to provide us with all the misinformation we like. Algorithms recognize our preferences and serve up the next video or article that echoes or amplifies the themes of the first story we clicked. Media outlets and politicians notice the online trends and serve up their own content that sometimes deliberately and sometimes mistakenly reinforces false narratives and constructs alternative realities.
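To make that feedback loop concrete, here is a deliberately simplified sketch in Python–purely illustrative, not any platform’s actual ranking code. The story topics are invented, and a crude “clicks per topic” count stands in for the far richer engagement signals real systems use.

```python
# Toy sketch of a preference-amplifying feed (illustrative only).
from collections import Counter

def rank_stories(stories, click_history):
    """Order candidate stories by how often the user has clicked their topic."""
    topic_weight = Counter(click_history)
    return sorted(stories, key=lambda s: topic_weight[s["topic"]], reverse=True)

def simulate(stories, rounds=5):
    """Each round the user clicks the top-ranked story; the preference compounds."""
    history = []
    for _ in range(rounds):
        feed = rank_stories(stories, history)
        history.append(feed[0]["topic"])
    return Counter(history)

stories = [{"topic": "election conspiracies"}, {"topic": "local news"},
           {"topic": "vaccine scares"}, {"topic": "sports"}]
print(simulate(stories))  # after one early click, a single topic dominates every later feed
```

The point of the toy is simply that a ranker rewarded for serving up more of what we already clicked has no reason to show us the countervailing story.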

Thoughtful folks can and do escape these bubbles, at least partially, by purposely accessing a wide variety of sources having different viewpoints, but confirmation bias is a strong element in most of our psyches.

As DiResta writes in her upcoming book, “Invisible Rulers: The People Who Turn Lies Into Reality,” “Bespoke realities are made for — and by — the individual.” Americans experience a “choose-your-own-adventure epistemology: Some news outlet somewhere has written the story you want to believe, some influencer is touting the diet you want to live by or demonizing the group you also hate.”

On the Internet, “you can always find evidence, real or imagined, to validate your priors.” You can also protect yourself from information contrary to your preferred worldview.

It isn’t difficult to identify the people who have chosen to occupy an alternate “reality”; you see them often in comments to Facebook posts, and even in occasional aggressive–if factually deficient–posts by trolls to this site.

The urgent political question is: how many Americans occupy a “bespoke reality” that is inconsistent with demonstrable empirical fact? And how many of them will go to the polls to vote their bespoke realities in November of 2024?


The Challenges Of Modern Life

The Supreme Court’s docket this year has two cases that will require the Court to confront a thorny challenge of modern life–to adapt (or not) to the novel realities of today’s communication technologies.

Given the fact that at least five of the Justices cling to the fantasy that they are living in the 1800s, I’m not holding my breath.

The cases I’m referencing are two that challenge Section 230, social media’s “safe space.”

As Time Magazine explained on February 19th,

The future of the federal law that protects online platforms from liability for content uploaded on their site is up in the air as the Supreme Court is set to hear two cases that could change the internet this week.

The first case, Gonzalez v. Google, which is set to be heard on Tuesday, argues that YouTube’s algorithm helped ISIS post videos and recruit members—making online platforms directly and secondarily liable for the 2015 Paris attacks that killed 130 people, including 23-year-old American college student Nohemi Gonzalez. Gonzalez’s parents and other deceased victims’ families are seeking damages related to the Anti-Terrorism Act.

Oral arguments for Twitter v. Taamneh—a case that makes similar arguments against Google, Twitter, and Facebook, and centers around another ISIS terrorist attack that killed 29 people in Istanbul, Turkey—will be heard on Wednesday.

The cases will decide whether online platforms can be held liable for the targeted advertisements or algorithmic content spread on their platforms.

Re-read that last sentence, because it accurately reports the question the Court must address. Much of the media coverage of these cases misstates that question. These cases are not about determining whether the platforms can be held responsible for posts by the individuals who upload them. The issue is whether they can be held responsible for the algorithms that promote those posts–algorithms that the platforms themselves developed.

Section 230, which passed in 1996, is a part of the Communications Decency Act.

The law explicitly states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” meaning online platforms are not responsible for the content a user may post.

Google argues that websites like YouTube cannot be held liable as the “publisher or speaker” of the content users created, because Google does not have the capacity to screen “all third-party content for illegal or tortious material.” The company also argues that “the threat of liability could prompt sweeping restrictions on online activity.”

It’s one thing to insulate tech platforms from liability for what users post–it’s another to allow them free rein to select and/or promote certain content–which is what their algorithms do. In recognition of that distinction, in 2021, Senators Amy Klobuchar and Ben Ray Luján introduced a bill that would remove tech companies’ immunity from lawsuits if their algorithms promoted health misinformation.

As a tech journalist wrote in a New York Times opinion essay,

The law, created when the number of websites could be counted in the thousands, was designed to protect early internet companies from libel lawsuits when their users inevitably slandered one another on online bulletin boards and chat rooms. But since then, as the technology evolved to billions of websites and services that are essential to our daily lives, courts and corporations have expanded it into an all-purpose legal shield that has acted similarly to the qualified immunity doctrine that often protects police officers from liability even for violence and killing.

As a journalist who has been covering the harms inflicted by technology for decades, I have watched how tech companies wield Section 230 to protect themselves against a wide array of allegations, including facilitating deadly drug sales, sexual harassment, illegal arms sales and human trafficking — behavior that they would have likely been held liable for in an offline context….

There is a way to keep internet content freewheeling while revoking tech’s get-out-of-jail-free card: drawing a distinction between speech and conduct.

In other words, continue to offer tech platforms immunity for the defamation cases that Congress had in mind when Section 230 passed, but impose liability for illegal conduct that their own technology enables and/or promotes. (For example, the author confirmed that advertisers could easily use Facebook’s ad targeting algorithms to violate the Fair Housing Act.)

Arguably, the creation of an algorithm is an action–not the expression or communication of an opinion or idea. When that algorithm demonstrably encourages and/or facilitates illegal behavior, its creator ought to be held liable.

It’s like that TV auto ad that proclaims “this isn’t your father’s Oldsmobile.” The Internet isn’t your mother’s newspaper, either. Some significant challenges come along with the multiple benefits of modernity– how to protect free speech without encouraging the barbarians at the gate is one of them.

 


Is Design Censorship?

We live in a world where seemingly settled issues are being reframed. A recent, fascinating discussion on the Persuasion podcast focused on the role of social media in spreading both misinformation and what Renée DiResta, the expert being interviewed, labeled “rumors.”

As she explained, using the term “misinformation” (a use to which I plead guilty) isn’t a particularly useful way of framing the problem we face, because so many of the things that raise people’s hackles aren’t statements of fact; they aren’t falsifiable. And even when they are, even when what was posted or asserted was demonstrably untrue, and is labeled untrue, a lot of people simply won’t believe it is false. As she says, “if you’re in Tribe A, you distrust the media of Tribe B and vice versa. And so even the attempt to correct the misinformation, when it is misinformation, is read with a particular kind of partisan valence: ‘Is this coming from somebody in my tribe, or is this more manipulation from the bad guys?’”

If we aren’t dealing simply in factual inaccuracies or even outright lies, how should we describe the problem?

One of the more useful frameworks for what is happening today is rumors: people are spreading information that can maybe never be verified or falsified, within communities of people who really care about an issue. They spread it amongst themselves to inform their friends and neighbors. There is a kind of altruistic motivation. The platforms find their identity for them based on statistical similarity to other users. Once the network is assembled and people are put into these groups or these follower relationships, the way that information is curated is that when one person sees it, they hit that share button—it’s a rumor, they’re interested, and they want to spread it to the rest of their community. Facts are not really part of the process here. It’s like identity engagement: “this is a thing that I care about, that you should care about, too.” This is rewarmed media theory from the 1960s: the structure of the system perpetuates how the information is going to spread. Social media is just a different type of trajectory, where the audience has real power as participants. That’s something that is fundamentally different from all prior media environments. Not only can you share the rumor, but millions of people can see in aggregate the sharing of that rumor.

Her explanation of how social media algorithms work is worth quoting at length:

When you pull up your Twitter feed, there’s “Trends” on the right hand side, and they’re personalized for you. And sometimes there’s a very, very small number of participants in the trend, maybe just a few hundred tweets. But it’s a nudge, it says you are going to be interested in this topic. It’s bait: go click this thing that you have engaged with before that you are probably going to be interested in, and then you will see all of the other people’s tweets about it. Then you engage. And in the act of engagement, you are perpetuating that trend.

Early on, I was paying attention to the anti-vaccine movement. I was a new mom, and I was really interested in what people were saying about this on Facebook. I was kind of horrified by it, to be totally candid. I started following some anti-vaccine groups, and then Facebook began to show me Pizzagate, and then QAnon. I had never typed in Pizzagate, and I had never typed in QAnon. But through the power of collaborative filtering, it understood that if you were an active participant in a conspiracy theory community that fundamentally distrusts the government, you are probably similar to these other people who maybe have a different flavor of the conspiracy. And the recommendation engine didn’t understand what it was doing. It was not a conscious effort. It just said: here’s an active community, you have some similarities, you should go join that active community. Let’s give you this nudge. And that is how a lot of these networks were assembled in the early and mid-2010s.
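A minimal sketch of the user-to-user collaborative filtering DiResta describes might look like the Python below. The group names and memberships are invented for the example, and real recommendation engines are vastly more elaborate; the point is only that similarity of existing memberships–not the content of the groups–drives the nudge.

```python
# Hypothetical illustration of membership-based collaborative filtering.
import math

memberships = {
    "new_mom": {"anti_vaccine_group"},
    "user_b":  {"anti_vaccine_group", "pizzagate_group"},
    "user_c":  {"pizzagate_group", "qanon_group"},
}

def similarity(a, b):
    """Cosine similarity between two users' sets of group memberships."""
    overlap = len(memberships[a] & memberships[b])
    return overlap / math.sqrt(len(memberships[a]) * len(memberships[b]))

def recommend(user):
    """Score groups the user hasn't joined by the similarity of the users who have."""
    scores = {}
    for other in memberships:
        if other == user:
            continue
        for group in memberships[other] - memberships[user]:
            scores[group] = scores.get(group, 0.0) + similarity(user, other)
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("new_mom"))
# -> ['pizzagate_group', 'qanon_group']: one membership pulls in the adjacent ones.
```

Nothing in that logic asks what the groups are about, which is DiResta’s point: the engine “didn’t understand what it was doing.”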

Then DiResta posed what we used to call the “sixty-four thousand dollar question”: are changes to the design of an algorithm censorship?

Implicit in that question, of course, is another: what about the original design of an algorithm? Those mechanisms have been designed to respond to certain inputs in certain ways, to “nudge” the user to visit X rather than Y. Is that censorship? And if the answer to either of those questions is “yes,” is the First Amendment implicated?

To say that we are in uncharted waters is an understatement.


Who’s Talking?

I finally got around to reading an article about Facebook by Professor Scott Galloway, sent to me by a reader. In it, Galloway considered the various “fixes” that have been suggested in the wake of continuing revelations about the degree to which Facebook and other social media platforms have facilitated America’s divisions.

There have been a number of similar articles, but what Galloway did better than most was explain the origin of Section 230 of the Communications Decency Act in language we non-techie people can understand.

In most industries, the most robust regulator is not a government agency, but a plaintiff’s attorney. If your factory dumps toxic chemicals in the river, you get sued. If the tires you make explode at highway speed, you get sued. Yes, it’s inefficient, but ultimately the threat of lawsuits reduces regulation; it’s a cop that covers a broad beat. Liability encourages businesses to make risk/reward calculations in ways that one-size-fits-all regulations don’t. It creates an algebra of deterrence.

Social media, however, is largely immunized from such suits. A 1996 law, known as “Section 230,” erects a fence around content that is online and provided by someone else. It means I’m not liable for the content of comments on the No Mercy website, Yelp isn’t liable for the content of its user reviews, and Facebook, well, Facebook can pretty much do whatever it wants.

There are increasing calls to repeal or reform 230. It’s instructive to understand this law, and why it remains valuable. When Congress passed it — again, in 1996 — it reasoned online companies were like bookstores or old-fashioned bulletin boards. They were mere distribution channels for other people’s content and shouldn’t be liable for it.

Seems reasonable. So–why the calls for its repeal? Galloway points to the multiple ways in which the information and communication environments have changed since 1996.

In 1996, 16% of Americans had access to the Internet, via a computer tethered to a phone cord. There was no Wi-Fi. No Google, Facebook, Twitter, Reddit, or YouTube — not even Friendster or MySpace had been birthed. Amazon sold only books. Section 230 was a fence protecting a garden plot of green shoots and untilled soil.

Today, as he points out, some 3 billion individuals use Facebook, and fifty-seven percent of the world population uses some sort of social media. Those are truly astonishing numbers.

I have previously posted about externalities–the ability of manufacturers and other providers to compete more successfully in the market by “offloading” certain of their costs to society at large. When it comes to social media, Galloway tells us that its externalities have grown as fast as the platforms’ revenues–and thanks to Section 230, society has borne the costs.

In sum, behind the law’s liability shield, tech platforms have morphed from Model UN members to Syria and North Korea. Only these Hermit Kingdoms have more warheads and submarines than all other nations combined.

As he points out, today’s social media has the resources to play by the same rules as other powerful media. Bottom line: We need a new fence. We need to redraw Section 230 so that it protects society from the harms of social media companies without destroying their usefulness or economic vitality.

What we have learned since 1996 is that Facebook and other social media companies are not neutral platforms. They aren’t bulletin boards. They are rigorously managed–personalized for each user, and actively boosting or suppressing certain content. Galloway calls that “algorithmic amplification,” and it didn’t exist in 1996.

There are evidently several bills pending in Congress that purport to address the problem by targeting the ways in which social media platforms weaponize these algorithms. Such approaches–aimed at the platforms’ own conduct rather than at users’ speech–shouldn’t raise credible concerns about chilling free expression.

Reading the essay gave me some hope that we can deal–eventually–with the social damage being inflicted by social media. It didn’t, however, suggest a way to counter the propaganda spewed daily by Fox News or Sinclair or their clones…


Increasing Intensity–For Profit

Remember when Donald Rumsfeld talked about “known unknowns”? It was a clunky phrase, but in a weird way, it describes much of today’s world.

Take social media, for example. What we know is that pretty much everyone is on one or another (or many) social media platforms. What we don’t know is how the various algorithms those sites employ are affecting our opinions, our relationships and our politics. (Just one of the many reasons to be nervous about the reach of wacko conspiracies like QAnon, not to mention the upcoming election…)

A recent essay in the “subscriber only” section of Talking Points Memo focused on those algorithms, and especially on the effect of those used by Facebook. The analysis suggested that the algorithms were designed to increase the intensity of users’ engagement and, with it, Facebook’s profits–a design that has contributed mightily to the current polarization of American voters.

The essay referenced recent peer-reviewed research confirming something we probably all could have guessed: the more time people spend on Facebook, the more polarized their beliefs become. What most of us wouldn’t have guessed is the finding that the effect is five times greater for conservatives than for liberals–an effect that was not found for other social media sites.

The study looked at the effect on conservatives of Facebook usage and Reddit usage. The gist is that when conservatives binge on Facebook the concentration of opinion-affirming content goes up (more consistently conservative content) but on Reddit it goes down significantly. This is basically a measure of an echo chamber. And remember too that these are both algorithmic, automated sites. Reddit isn’t curated by editors. It’s another social network in which user actions, both collectively and individually, determine what you see. If you’ve never visited Reddit let’s also just say it’s not all for the faint of heart. There’s stuff there every bit as crazy and offensive as anything you’ll find on Facebook.

The difference is in the algorithms and what the two sites privilege in content. Read the article for the details but the gist is that Reddit focuses more on interest areas and viewers’ subjective evaluations of quality and interesting-ness whereas Facebook focuses on intensity of response.

Why the difference? Reddit is primarily a “social” site; Facebook is an advertising site. Its interest in stoking intensity is in service of that advertising: the longer you stay on the platform, and especially the more intensely you engage with it, the more profit Facebook makes.
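The contrast in what the two sites privilege can be sketched schematically. Neither scoring function below is either company’s real formula–the signals and weights are invented–but they show how a vote-based ranking and an intensity-weighted ranking can order the same two posts in opposite ways.

```python
# Schematic contrast of two ranking philosophies (invented weights, not real formulas).

def vote_based_score(post):
    """Reddit-style: privilege the community's judgment of quality and interest."""
    return post["upvotes"] - post["downvotes"]

def intensity_score(post):
    """Engagement-style: privilege intensity of response, whatever its valence."""
    return (3.0 * post["angry_reactions"] + 2.0 * post["comments"]
            + 1.5 * post["reshares"] + 1.0 * post["likes"])

calm_post = {"upvotes": 900, "downvotes": 100, "likes": 800,
             "angry_reactions": 5, "comments": 40, "reshares": 20}
outrage_post = {"upvotes": 300, "downvotes": 250, "likes": 100,
                "angry_reactions": 400, "comments": 500, "reshares": 300}

for name, post in [("calm", calm_post), ("outrage", outrage_post)]:
    print(name, vote_based_score(post), intensity_score(post))
# calm: 800 vs 925.0; outrage: 50 vs 2750.0 -- the vote-based ranking surfaces
# the calm post, the intensity-weighted ranking surfaces the outrage post.
```

Under the vote-based score the calm post wins easily; under the intensity-weighted score the outrage post wins by a wide margin, which is the polarization mechanism the essay describes.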

Facebook argues that the platform is akin to the telephone; no one blames the telephone when people use it to spread extremist views. It argues that the site is simply facilitating communication. But–as the essay points out–that’s clearly not true. Facebook’s algorithms are designed to encourage and amplify some emotions and responses–something your telephone doesn’t do. It’s a “polarization/extremism generating machine.”

The essay ends with an intriguing–and apt–analogy to the economic description of externalities:

Producing nuclear energy is insanely profitable if you sell the energy, take no safety precautions and dump the radioactive waste into the local river. In other words, if the profits remain private and the costs are socialized. What makes nuclear energy an iffy financial proposition is the massive financial costs associated with doing otherwise. Facebook is like a scofflaw nuclear power company that makes insane profits because it runs its reactor in the open and dumps the waste in the bog behind the local high school.

Facebook’s externality is political polarization.

The question–as always–is “what should we do about it?”
