Technology-R-Us?

Among the recurring elements of what my sons call “family photos” are the iPhone pictures snapped at family get-togethers in which we’re all looking at our iPhones. My youngest son (who is one of the worst offenders) usually labels those pictures “warm family moments” or something equally sarcastic.

I don’t think my family is unique. Enter an elevator or restaurant, or just walk down a city street, and most people you encounter are staring at small screens. That reality–and it certainly seems to be a universal reality–raises the question: what is this seductive technology doing to our brains?

Ezra Klein recently addressed that question in an essay for the New York Times.

I am of the generation old enough to remember a time before cyberspace but young enough to have grown up a digital native. And I adored my new land. The endless expanses of information, the people you met as avatars but cared for as humans, the sense that the mind’s reach could be limitless. My life, my career and my identity were digital constructs as much as they were physical ones. I pitied those who came before me, fettered by a physical world I was among the first to escape.

A decade passed, and my certitude faded. Online life got faster, quicker, harsher, louder. “A little bit of everything all of the time,” as the comedian Bo Burnham put it. Smartphones brought the internet everywhere, colonizing moments I never imagined I’d fill. Many times I’ve walked into a public bathroom and everyone is simultaneously using a urinal and staring at a screen.

Klein referenced several 20th-century media theorists, including Marshall McLuhan, Walter Ong and Neil Postman, who “tried to warn us.” And he quoted Nicholas Carr’s book, “The Shallows: What the Internet Is Doing to Our Brains.”

The very way my brain worked seemed to be changing. It was then that I began worrying about my inability to pay attention to one thing for more than a couple of minutes. At first I’d figured that the problem was a symptom of middle-age mind rot. But my brain, I realized, wasn’t just drifting. It was hungry. It was demanding to be fed the way the Net fed it — and the more it was fed, the hungrier it became. Even when I was away from my computer, I yearned to check email, click links, do some Googling. I wanted to be connected.

Sound familiar? It sure does to me. And it resonated with Klein, who was particularly struck by the word “hungry.”

That was the word that hooked me. That’s how my brain felt to me, too. Hungry. Needy. Itchy. Once it wanted information. But then it was distraction. And then, with social media, validation. A drumbeat of “You exist. You are seen.”

How important is the choice of the platform–the medium–through which we receive messages? Like Klein, I’d always supposed that content is more important than the medium through which we access that content, but the theorists he cites beg to differ.

McLuhan’s famous insistence that “the medium is the message” reflected his view that mediums matter a lot–in fact, that they matter more than the content of the messages being conveyed. Different mediums create and communicate content differently, and those differences change people (and ultimately, society). As Klein concedes, “oral culture teaches us to think one way, written culture another. Television turned everything into entertainment, and social media taught us to think with the crowd.”

Like several commenters on this blog, Klein has been influenced by Neil Postman’s “Amusing Ourselves to Death.”

McLuhan says: Don’t just look at what’s being expressed; look at the ways it’s being expressed. And then Postman says: Don’t just look at the way things are being expressed; look at how the way things are expressed determines what’s actually expressible. In other words, the medium blocks certain messages.

Postman was planting a flag here: The border between entertainment and everything else was blurring, and entertainers would be the only ones able to fulfill our expectations for politicians. He spends considerable time thinking, for instance, about the people who were viable politicians in a textual era and who would be locked out of politics because they couldn’t command the screen.

Later, in this very long essay (which is well worth your time to read in its entirety), Klein makes an important point:

There is no stable, unchanging self. People are capable of cruelty and altruism, farsightedness and myopia. We are who we are, in this moment, in this context, mediated in these ways. It is an abdication of responsibility for technologists to pretend that the technologies they make have no say in who we become.

I wonder: what have we become?


Cheap Speech

Richard Hasen recently had a column–pardon me, a “guest essay”–in the New York Times. Hasen is a pre-eminent scholar of elections and electoral systems, whose most recent book is “Cheap Speech: How Disinformation Poisons Our Politics — and How to Cure It.”

In the “guest essay,” Hasen joins the scholars and pundits concerned about the negative consequences of so-called “fake news.”

The same information revolution that brought us Netflix, podcasts and the knowledge of the world in our smartphone-gripping hands has also undermined American democracy. There can be no doubt that virally spread political disinformation and delusional invective about stolen, rigged elections are threatening the foundation of our Republic. It’s going to take both legal and political change to bolster that foundation, and it might not be enough.

Hasen uses the term “cheap speech” in two ways. It’s an acknowledgement that the Internet has slashed the cost of promulgating all communications–credible and not. But it is also a recognition that the information environment has become increasingly “cheap” in the sense of “favoring speech of little value over speech that is more valuable to voters.”

It is expensive to produce quality journalism but cheap to produce polarizing political “takes” and easily shareable disinformation. The economic model for local newspapers and news gathering has collapsed over the past two decades; from 2000 to 2018, journalists lost jobs faster than coal miners.

Hasen catalogues the various ways in which that collapse has undermined confidence in American institutions, especially government, and he points out that much “fake news” is not mere misinformation but “deliberately spread disinformation, which can be both politically and financially profitable.”

Reading the essay, I thought back to Marshall McLuhan’s famous dictum that “the medium is the message.” Hasen says that even if politics in the 1950s had been as polarized as they are today, it is highly unlikely that those divisions would have triggered the insurrection of Jan. 6th, and equally unlikely that millions of Republicans would believe phony claims about a “stolen” 2020 election. Social media has had a profoundly detrimental effect on democracy.

A democracy cannot function without “losers’ consent,” the idea that those on the wrong side of an election face disappointment but agree that there was a fair vote count. Those who believe the last election was stolen will have fewer compunctions about attempting to steal the next one. They are more likely to threaten election officials, triggering an exodus of competent election officials. They are more likely to see the current government as illegitimate and to refuse to follow government guidance on public health, the environment and other issues crucial to health and safety. They are comparatively likely to see violence as a means of resolving political grievances.

Hasen buttresses his argument with several examples of the ways cheap speech–and weakened political parties–damage democracy. His litany leaves us with a very obvious question: what can we do? Assuming the accuracy of his diagnosis, what is the prescribed treatment? Hasen gives us a list of his preferred fixes: updating campaign finance laws so that they apply to what is now mostly unregulated political advertising disseminated over the internet; mandating that deep fakes be labeled “altered”; and tightening the ban on foreign campaign expenditures, among others.

Congress should also make it a crime to lie about when, where and how people vote. A Trump supporter has been charged with targeting voters in 2016 with false messages suggesting that they could vote by text or social media post, but it is not clear if existing law makes such conduct illegal. We also need new laws aimed at limiting microtargeting, the use by campaigns or interest groups of intrusive data collected by social media companies to send political ads, including some misleading ones, sometimes to vulnerable populations.

He also acknowledges that such measures would be a hard sell to today’s Supreme Court, noting that much of the court’s jurisprudence depends upon faith in an arguably outmoded “marketplace of ideas” metaphor, which assumes that the truth will emerge through counter-speech.

If that was ever true in the past, it is not true in the cheap speech era. Today, the clearest danger to American democracy is not government censorship but the loss of voter confidence and competence that arises from the sea of disinformation and vitriol.

He argues that we need to find a way to subsidize real journalism, especially local journalism, and that journalism bodies should use accreditation methods to signal which content is reliable and which is counterfeit. “Over time and with a lot of effort, we can reestablish greater faith in real journalism, at least for a significant part of the population.”

I would add a requirement that schools teach media literacy.

That said, how much of this is doable remains an open question.


Those Dueling Realities

News literacy matters more than ever–and we live at a time when it is harder and harder to tell truth from fiction.

One example from the swamps of the Internet. The link will take you to a doctored photo of actor Sylvester Stallone wearing a t-shirt that says “4 Useless Things: woke people, COVID-19 vaccines, Dr. Anthony Fauci and President Joe Biden.” In the original, authentic photo, Stallone is wearing a plain dark t-shirt.

The News Literacy Project, which issues ongoing reports on this sort of visual misrepresentation, says this about the Stallone t-shirt:

Digitally manipulating photos of celebrities to make it look like they endorse a provocative political message — often on t-shirts — is extremely common. Such posts are designed to resonate with people who have strong partisan views and may share the image without pausing to consider whether it’s authentic. It’s also likely that some of these fakes are marketing ploys to boost sales of t-shirts that are easily found for sale online. For example, this reply to an influential Twitter account includes the same doctored image and a link to a product page where the shirt can be purchased.

It’s bad enough that there are literally thousands of sites using text to promote lies. But people have a well-known bias toward visual information (“Who am I going to believe, you or my lying eyes?” “Seeing is believing.” Etc.) With the availability of “deepfake” technologies, doctoring photographs has become easier and more widespread, and the fakes have become much harder to detect.
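Detection isn’t hopeless, though–it’s an arms race. For readers curious how basic image forensics works, here is a minimal sketch of one classic (and imperfect) technique, error-level analysis: re-save a JPEG at a known quality and look for regions that recompress differently, which often betray pasted-in content. The Python below is purely illustrative; it assumes the Pillow library and a hypothetical local file named suspect.jpg, and a bright region in its output is a hint, not proof.

```python
# Error-level analysis (ELA), sketched: recompress a JPEG at a fixed
# quality and measure how much each pixel changes. Regions pasted in
# from another image often show a different error level than the rest.
from io import BytesIO

from PIL import Image, ImageChops, ImageEnhance  # Pillow

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")

    # Re-save at a known JPEG quality, then reload the recompressed copy.
    buffer = BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer)

    # Per-pixel absolute difference, amplified so the eye can see it.
    diff = ImageChops.difference(original, recompressed)
    return ImageEnhance.Brightness(diff).enhance(20)

if __name__ == "__main__":
    # "suspect.jpg" is a stand-in for whatever image you are checking.
    error_level_analysis("suspect.jpg").save("ela_map.png")
```

Edited regions tend to glow in the resulting map, but innocent operations like cropping and re-saving can produce a similar glow–which is exactly why the arms race continues.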

The Guardian recently reported on the phenomenon, beginning with a definition.

Have you seen Barack Obama call Donald Trump a “complete dipshit”, or Mark Zuckerberg brag about having “total control of billions of people’s stolen data”, or witnessed Jon Snow’s moving apology for the dismal ending to Game of Thrones? Answer yes and you’ve seen a deepfake. The 21st century’s answer to Photoshopping, deepfakes use a form of artificial intelligence called deep learning to make images of fake events, hence the name deepfake. Want to put new words in a politician’s mouth, star in your favourite movie, or dance like a pro? Then it’s time to make a deepfake.

As the article noted, a fair percentage of deepfake videos are pornographic. A firm called “Deeptrace” identified 15,000 altered videos online in September 2019, and a “staggering 96%” were pornographic. Ninety-nine percent of those “mapped faces from female celebrities on to porn stars.”

As new techniques allow unskilled people to make deepfakes with a handful of photos, fake videos are likely to spread beyond the celebrity world to fuel revenge porn. As Danielle Citron, a professor of law at Boston University, puts it: “Deepfake technology is being weaponised against women.” Beyond the porn there’s plenty of spoof, satire and mischief.

But it isn’t just about videos. Deepfake technology can evidently create convincing phony photos from scratch. The report noted that “Maisy Kinsley”, a supposed Bloomberg journalist who was in fact a deepfake, had even been given profiles on LinkedIn and Twitter.

Another LinkedIn fake, “Katie Jones”, claimed to work at the Center for Strategic and International Studies, but is thought to be a deepfake created for a foreign spying operation.

Audio can be deepfaked too, to create “voice skins” or “voice clones” of public figures. Last March, the chief of a UK subsidiary of a German energy firm paid nearly £200,000 into a Hungarian bank account after being phoned by a fraudster who mimicked the German CEO’s voice. The company’s insurers believe the voice was a deepfake, but the evidence is unclear. Similar scams have reportedly used recorded WhatsApp voice messages.

No wonder levels of trust have declined so precipitously! The Guardian addressed the all-important question: how can you tell whether a visual image is real or fake? It turns out, it’s very hard–and getting harder.

In 2018, US researchers discovered that deepfake faces don’t blink normally. No surprise there: the majority of images show people with their eyes open, so the algorithms never really learn about blinking. At first, it seemed like a silver bullet for the detection problem. But no sooner had the research been published, than deepfakes appeared with blinking. Such is the nature of the game: as soon as a weakness is revealed, it is fixed.
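The blink cue is simple enough to sketch in code. What follows is a hypothetical illustration, not the researchers’ actual system: it assumes a facial-landmark detector has already produced the six standard points around one eye for each video frame, computes the widely used “eye aspect ratio,” and counts blinks.

```python
# Toy version of the blink heuristic: the eye aspect ratio (EAR) is
# the eye's vertical opening divided by its width, and it collapses
# toward zero during a blink. Landmark detection itself is assumed
# to have been done already by some face-analysis library.
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, ordered p1..p6 as in
    the common 68-point annotation scheme (p1 and p4 are the corners)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # Two vertical distances over twice the horizontal distance.
    return (dist(eye[1], eye[5]) + dist(eye[2], eye[4])) / (2.0 * dist(eye[0], eye[3]))

def count_blinks(ears, threshold=0.2, min_frames=2):
    """Count dips of the per-frame EAR below `threshold` lasting at
    least `min_frames` consecutive frames."""
    blinks = run = 0
    for ear in ears:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:  # video may end mid-blink
        blinks += 1
    return blinks

# A talking head producing zero counted blinks over thousands of
# frames would have looked suspicious; then the fakes learned to blink.
```

As the quoted passage notes, the fix was short-lived: every published tell becomes a training signal for the next generation of fakes.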

Governments, universities and tech firms are currently funding research aimed at detecting deepfakes, and we can only hope that the research succeeds–and soon. The truly insidious consequence of a widespread inability to tell whether an image is or is not authentic would be the creation of a “zero-trust society, where people cannot, or no longer bother to, distinguish truth from falsehood.”

Deepfakes are just one more element of an information environment that encourages us to construct, inhabit and defend our own, preferred “realities.” 
 


Section 230

These are hard times for free speech advocates. The Internet–with its capacity for mass distribution of lies, misinformation, bigotry and incitement to violence–cries out for reform, but it is not apparent (certainly not to me) what sort of reforms might curb the dangers without also stifling free expression.

One approach is focused on a law that is older than Google: Section 230 of the Communications Decency Act. 

What is Section 230? Is it really broken? Can it be fixed without inadvertently doing more damage? 

The law is just 26 words that allow online platforms to make rules about what people can or can’t post without being held legally responsible for the content. (There are some exceptions, but not many.) As a recent newsletter on technology put it (sorry, for some reason the link doesn’t work):

If I accuse you of murder on Facebook, you might be able to sue me, but you can’t sue Facebook. If you buy a defective toy from a merchant on Amazon, you might be able to take the seller to court, but not Amazon. (There is some legal debate about this, but you get the gist.)

The law created the conditions for Facebook, Yelp and Airbnb to give people a voice without being sued out of existence. But now Republicans and Democrats are asking whether the law gives tech companies either too much power or too little responsibility for what happens under their watch.


Republicans mostly worry that Section 230 gives internet companies too much power to suppress online debate and discussion, while Democrats mostly worry that it lets those companies ignore or even enable dangerous incitements and/or illegal transactions. 

The fight over Section 230 is really a fight over the lack of control exercised by Internet giants like Facebook and Twitter. In far too many situations, the law allows people to lie online without consequence–let’s face it, that high school kid who is spreading lewd rumors about a girl who turned him down for a date isn’t likely to be sued, no matter how damaging, reprehensible and untrue his posts may be. The recent defamation suits brought by the voting machine manufacturers were salutary and satisfying, but most people harmed by the bigotry and disinformation online are not in a position to pursue such remedies.

The question being debated among techies and lawyers is whether Section 230 is too protective–whether it reduces incentives for platforms like Facebook and Twitter to adopt and enforce stronger measures that would more effectively curtail obviously harmful rhetoric and activities.

Several proposed “fixes” are currently being considered. The Times newsletter described them.


Fix-it Plan 1: Raise the bar. Some lawmakers want online companies to meet certain conditions before they get the legal protections of Section 230.

One example: A congressional proposal would require internet companies to report to law enforcement when they believe people might be plotting violent crimes or drug offenses. If the companies don’t do so, they might lose the legal protections of Section 230 and the floodgates could open to lawsuits.

Facebook this week backed a similar idea, which proposed that it and other big online companies would have to have systems in place for identifying and removing potentially illegal material.

Another proposed bill would require Facebook, Google and others to prove that they hadn’t exhibited political bias in removing a post. Some Republicans say that Section 230 requires websites to be politically neutral. That’s not true.

Fix-it Plan 2: Create more exceptions. One proposal would restrict internet companies from using Section 230 as a defense in legal cases involving activity like civil rights violations, harassment and wrongful death. Another proposes letting people sue internet companies if child sexual abuse imagery is spread on their sites.

Also in this category are legal questions about whether Section 230 applies to the involvement of an internet company’s own computer systems. When Facebook’s algorithms helped circulate propaganda from Hamas, as David detailed in an article, some legal experts and lawmakers said that Section 230 legal protections should not have applied and that the company should have been held complicit in terrorist acts.


Slate has an article describing all of the proposed changes to Section 230.

I don’t have a firm enough grasp of the issues involved–let alone the technology needed to accomplish some of the proposed changes–to have a favored “fix” to Section 230.

I do think that this debate foreshadows others that will arise in a world where massive international companies–online and not–in many cases wield more power than governments. Constraining these powerful entities will require new and very creative approaches.


Mandating Fairness

Whenever one of my posts addresses America’s problem with disinformation, at least one commenter will call for re-institution of the Fairness Doctrine–despite the fact that, each time, another commenter (usually a lawyer) will explain why that doctrine wouldn’t apply to social media or most other Internet sites causing contemporary mischief.

The Fairness Doctrine was contractual. Government owned the broadcast channels that were being auctioned for use by private media companies, and thus had the right to require certain undertakings from responsive bidders. In other words, in addition to the payments being tendered, bidders had to promise to operate “in the public interest,” and the public interest included an obligation to give contending voices a fair hearing.

The government couldn’t have passed a law requiring newspapers and magazines to be “fair,” and it cannot legally require fair and responsible behavior from cable channels and social media platforms, no matter how much we might wish it could.

So–in this era of QAnon and Fox News and Rush Limbaugh clones–where does that leave us?

The Brookings Institution, among others, has wrestled with the issue.

The violence of Jan. 6 made clear that the health of online communities and the spread of disinformation represents a major threat to U.S. democracy, and as the Biden administration takes office, it is time for policymakers to consider how to take a more active approach to counter disinformation and form a public-private partnership aimed at identifying and countering disinformation that poses a risk to society.

Brookings says that a non-partisan public-private effort is required because disinformation crosses platforms and transcends political boundaries. They recommend a “public trust” that would provide analysis and policy proposals intended to defend democracy against the constant stream of disinformation and the illiberal forces at work disseminating it. It would identify emerging trends and methods of sharing disinformation, and would support data-driven initiatives to improve digital media literacy.

Frankly, I found the Brookings proposal unsatisfactorily vague, but there are other, more concrete proposals for combating online and cable propaganda. Dan Mullendore pointed to one promising tactic in a comment the other day. Fox News’ income isn’t–as we might suppose–mostly dependent on advertising; significant sums come from cable fees. And one reason those fees are so lucrative is that Fox gets bundled with other channels, meaning that many people pay for Fox who wouldn’t if it weren’t part of a package deal. A few days ago, on Twitter, a lawyer named Pam Keith pointed out that a simple regulatory change ending bundling would force Fox and other channels to compete for customers’ eyes, ears and pocketbooks.

Then there’s the current debate over Section 230 of the Communications Decency Act, with many critics advocating its repeal, and others, like the Electronic Frontier Foundation, defending it.

Section 230 says that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. § 230). In other words, online intermediaries that host or republish speech are protected against a range of laws that might otherwise be used to hold them legally responsible for what others say and do. The protected intermediaries include not only regular Internet Service Providers (ISPs), but also a range of “interactive computer service providers,” including basically any online service that publishes third-party content. Though there are important exceptions for certain criminal and intellectual property-based claims, CDA 230 creates a broad protection that has allowed innovation and free speech online to flourish.

Most observers believe that an outright repeal of Section 230 would destroy social networks as we know them (the linked article explains why, as do several others), but there is a middle ground between total repeal and naive calls for millions of users to voluntarily leave platforms that fail to block hateful and/or misleading posts.

Fast Company has suggested that middle ground.

One possibility is that the current version of Section 230 could be replaced with a requirement that platforms use a more clearly defined best-efforts approach, requiring them to use the best technology and establishing some kind of industry standard they would be held to for detecting and mediating violating content, fraud, and abuse. That would be analogous to standards already in place in the area of advertising fraud….

Another option could be to limit where Section 230 protections apply. For example, it might be restricted only to content that is unmonetized. In that scenario, you would have platforms displaying ads only next to content that had been sufficiently analyzed that they could take legal responsibility for it. 

A “one size fits all” reinvention of the Fairness Doctrine isn’t going to happen. But that doesn’t mean we can’t make meaningful, legal improvements that would make a real difference online.
