Mandating Fairness

Whenever one of my posts addresses America’s problem with disinformation, at least one commenter will call for re-institution of the Fairness Doctrine–despite the fact that, each time, another commenter (usually a lawyer) will explain why that doctrine wouldn’t apply to social media or most other Internet sites causing contemporary mischief.

The Fairness Doctrine was contractual. Government owned the broadcast channels that were being auctioned for use by private media companies, and thus had the right to require certain undertakings from responsive bidders. In other words, in addition to the payments being tendered, bidders had to promise to operate “in the public interest,” and the public interest included an obligation to give contending voices a fair hearing.

The government couldn’t have passed a law requiring newspapers and magazines to be “fair,” and it cannot legally require fair and responsible behavior from cable channels and social media platforms, no matter how much we might wish it could.

So–in this era of QAnon and Fox News and Rush Limbaugh clones–where does that leave us?

The Brookings Institution, among others, has wrestled with the issue.

The violence of Jan. 6 made clear that the health of online communities and the spread of disinformation represents a major threat to U.S. democracy, and as the Biden administration takes office, it is time for policymakers to consider how to take a more active approach to counter disinformation and form a public-private partnership aimed at identifying and countering disinformation that poses a risk to society.

Brookings says that a non-partisan public-private effort is required because disinformation crosses platforms and transcends political boundaries. They recommend a “public trust” that would provide analysis and policy proposals intended to defend democracy against the constant stream of disinformation and the illiberal forces at work disseminating it. It would identify emerging trends and methods of sharing disinformation, and would support data-driven initiatives to improve digital media literacy.

Frankly, I found the Brookings proposal unsatisfactorily vague, but there are other, more concrete proposals for combating online and cable propaganda. Dan Mullendore pointed to one promising tactic in a comment the other day. Fox News income isn’t–as we might suppose–dependent mostly on advertising; significant sums come from cable fees. And one reason those fees are so lucrative is that Fox gets bundled with other channels, meaning that many people pay for Fox who wouldn’t if it weren’t part of a package deal. A few days ago, on Twitter, a lawyer named Pam Keith pointed out that a simple regulatory change ending bundling would force Fox and other channels to compete for customers’ eyes, ears and pocketbooks.

Then there’s the current debate over Section 230 of the Communications Decency Act, with many critics advocating its repeal, and others, like the Electronic Frontier Foundation, defending it.

Section 230 says that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. § 230). In other words, online intermediaries that host or republish speech are protected against a range of laws that might otherwise be used to hold them legally responsible for what others say and do. The protected intermediaries include not only regular Internet Service Providers (ISPs), but also a range of “interactive computer service providers,” including basically any online service that publishes third-party content. Though there are important exceptions for certain criminal and intellectual property-based claims, CDA 230 creates a broad protection that has allowed innovation and free speech online to flourish.

Most observers believe that an outright repeal of Section 230 would destroy social networks as we know them (the linked article explains why, as do several others), but there is a middle ground between total repeal and naive calls for millions of users to voluntarily leave platforms that fail to block hateful and/or misleading posts.

Fast Company has suggested that middle ground.

One possibility is that the current version of Section 230 could be replaced with a requirement that platforms use a more clearly defined best-efforts approach, requiring them to use the best technology and establishing some kind of industry standard they would be held to for detecting and mediating violating content, fraud, and abuse. That would be analogous to standards already in place in the area of advertising fraud….

Another option could be to limit where Section 230 protections apply. For example, it might be restricted only to content that is unmonetized. In that scenario, you would have platforms displaying ads only next to content that had been sufficiently analyzed that they could take legal responsibility for it. 

A “one size fits all” reinvention of the Fairness Doctrine isn’t going to happen. But that doesn’t mean we can’t make meaningful, legal improvements that would make a real difference online.

Falsely Shouting “Fire” In The Digital Theater

Tom Wheeler is one of the savviest observers of the digital world.

Now at the Brookings Institution, Wheeler headed the FCC during the Obama administration, and recently authored an essay titled “The Consequences of Social Media’s Giant Experiment.” That essay–like many of his other publications–considered the huge public impact of legally private enterprises.

The “experiment” Wheeler considers is the shutdown of Trump’s disinformation megaphones: most consequential, of course, were the Facebook and Twitter bans of Donald Trump’s accounts, but it was also important that Parler–a site for right-wing radicalization and conspiracy theories–was effectively shut down for a time by Amazon’s decision to cease hosting it, and by Google’s and Apple’s decisions to remove it from their app stores. (I note that, since Wheeler’s essay, Parler has found a new hosting service–and it is Russian-owned.)

These actions are better late than never. But the proverbial horse has left the barn. These editorial and business judgements do, however, demonstrate how companies have ample ability to act conscientiously to protect the responsible use of their platforms.

Wheeler addresses the conundrum created by a subsection of the law that insulates social media companies from responsibility for making the sorts of editorial judgments that publishers of traditional media make every day. As he says, these 26 words are the heart of the issue: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

As he points out,

If you are insulated from the consequences of your actions and make a great deal of money by exploiting that insulation, then what is the incentive to act responsibly?…

The social media companies have put us in the middle of a huge and explosive lab experiment where we see the toxic combination of digital technology, unmoderated content, lies and hate. We now have the answer to what happens when these features and large profits are blended together in a connected world. The result not only has been unproductive for civil discourse, it also represents a danger to democratic systems and effective problem-solving.

Wheeler repeats what most observers of our digital world have recognized: these platforms have the technological capacity to exercise the same sort of responsible moderation that we expect of traditional media. What they lack is the will–because more responsible moderating algorithms would eat into their currently large–okay, obscene–profits.

The companies’ business model is built around holding a user’s attention so that they may display more paying messages. Delivering what the user wants to see, the more outrageous the better, holds that attention and rings the cash register.

Wheeler points out that we have mischaracterized these platforms–they are not, as they insist, tech enterprises. They are media, and should be required to conform to the rules and expectations that govern media sources. He has other suggestions for tweaking the rules that govern these platforms, and they are worth consideration.

That said, the rise of these digital giants creates a bigger question and implicates what is essentially a philosophical dilemma.

The U.S. Constitution was intended to limit the exercise of power; it was crafted at a time in human history when governments held a clear monopoly on that power. That is arguably no longer the case–and it isn’t simply social media giants. Today, multiple social and economic institutions have the power to pose credible threats both to individual liberty and to social cohesion. How we navigate the minefield created by that reality–how we restrain the power of theoretically “private” enterprises– will determine the life prospects of our children and grandchildren.

At the very least, we need rules that will limit the ability of miscreants to falsely shout fire in our digital environments.

A Way Forward?

A recent column from the Boston Globe began with a paragraph that captures a discussion we’ve had numerous times on this blog.

Senator Daniel Patrick Moynihan once said, “Everyone is entitled to his own opinion, but not his own facts.” These days, though, two out of three Americans get their news from social media sites like Facebook, its subsidiary Instagram, Google’s YouTube, and Twitter. And these sites supply each of us with our own facts, showing conservatives mostly news liked by other conservatives, feeding liberals mostly liberal content.

The author, Josh Bernoff, explained why reimposing the Fairness Doctrine isn’t an option; that doctrine was a quid pro quo of sorts. It required certain behaviors in return for permission to use broadcast frequencies controlled by the government. It never applied to communications that didn’t use those frequencies–and there is no leverage that would allow government to require a broader application.

That said, policymakers are not entirely at the mercy of the social networking giants who have become the most significant purveyors of news and information–as well as propaganda and misinformation.

As the column points out, social media sites are making efforts–the author calls them “baby steps”–to control the worst content, like hate speech. But they’ve made only token efforts to alter the algorithms that generate clicks and profits by feeding users materials that increase involvement with the site. Unfortunately, those algorithms also intensify American tribalism.

These algorithms keep users on the site longer by sustaining their preferred worldviews, irrespective of the factual basis of those preferences–and thus far, social media sites have not been held accountable for the damage that causes.

Their shield is Section 230 of the Communications Decency Act. Section 230 is

a key part of US media regulation that enables social networks to operate profitably. It creates a liability shield so that sites like Facebook that host user-generated content can’t be held responsible for defamatory posts on their sites and apps. Without it, Facebook, Twitter, Instagram, YouTube, and similar sites would get sued every time some random poster said that Mike Pence was having an affair or their neighbor’s Christmas lights were part of a satanic ritual.

Removing the shield entirely isn’t the answer. Full repeal would drastically curb free expression–not just on social media, but in other places, like the comment sections of newspapers. But that doesn’t mean we can’t take a leaf from the Fairness Doctrine book, and make Section 230 a quid pro quo–something that could be done without eroding the protections of the First Amendment.

Historically, Supreme Court opinions regarding First Amendment protections for problematic speech have taken the position that the correct remedy is not shutting it down but stimulating “counterspeech.” Justice Oliver Wendell Holmes wrote in a 1919 opinion, “The ultimate good desired is better reached by free trade in ideas — that the best test of truth is the power of the thought to get itself accepted in the competition of the market.” And in 1927, Justice Louis Brandeis wrote, “If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence.”….

Last year, Facebook generated $70 billion in advertising revenue; YouTube, around $15 billion; and Twitter, $3 billion. Now the FCC should require them to set aside 10 percent of their total ad space to expose people to diverse sources of content. They would be required to show free ads for mainstream liberal news sources to conservatives, and ads for mainstream conservative news sites to liberals. (They already know who’s liberal and who’s conservative — how do you think they bias the news feed in the first place?) The result would be sort of a tax, paid in advertising, to compensate for the billions these companies make under the government’s generous Section 230 liability shield and counteract the toxicity of their algorithms.
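
To put rough numbers on that proposal: by the column’s own revenue figures, a ten percent set-aside would amount to ad inventory worth on the order of $7 billion a year from Facebook, $1.5 billion from YouTube, and $300 million from Twitter. That’s a substantial price for the Section 230 shield, but one paid in ad space rather than cash.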

Sounds good to me.

Facebook, Disinformation And The First Amendment

These are tough times for Free Speech purists–of whom I am one.

I have always been persuaded by the arguments that support freedom of expression. In a genuine marketplace of ideas, I believe–okay, I want to believe–that better ideas will drive out worse ones. More compelling is the argument that, while some ideas may be truly dangerous, giving government the right to decide which ideas get expressed and which ones don’t would be much more dangerous.

But Facebook and other social media sites are really testing my allegiance to unfettered, unregulated–and amplified–expression. Recently, The Guardian reported that more than 3 million followers and members support the crazy QAnon conspiracy on Facebook, and their numbers are growing.

For those unfamiliar with QAnon, it

is a movement of people who interpret as a kind of gospel the online messages of an anonymous figure – “Q” – who claims knowledge of a secret cabal of powerful pedophiles and sex traffickers. Within the constructed reality of QAnon, Donald Trump is secretly waging a patriotic crusade against these “deep state” child abusers, and a “Great Awakening” that will reveal the truth is on the horizon.

Brian Friedberg, a senior researcher at the Harvard Shorenstein Center, is quoted as saying that Facebook is a “unique platform for recruitment and amplification,” and that he doubts QAnon would have been able to happen without the “affordances of Facebook.”

Facebook isn’t just providing a platform to QAnon groups–its algorithms are actively recommending them to users who may not otherwise have been exposed to them. And it isn’t only QAnon. According to the Wall Street Journal, Facebook’s own internal research in 2016 found that “64% of all extremist group joins are due to our recommendation tools.”

If the problem were limited to QAnon and other conspiracy theories, it would be troubling enough, but it isn’t. A recent essay in Time Magazine by a Silicon Valley insider named Roger McNamee began with an ominous paragraph:

If the United States wants to protect democracy and public health, it must acknowledge that internet platforms are causing great harm and accept that executives like Mark Zuckerberg are not sincere in their promises to do better. The “solutions” Facebook and others have proposed will not work. They are meant to distract us.

McNamee points to a predictable cycle: platforms are pressured to “do something” about harassment, disinformation or conspiracy theories. They respond by promising to improve their content moderation. But–as the essay points out–none have been successful at limiting the harm from third-party content, and so the cycle repeats. (As he notes, banning Alex Jones removed his conspiracy theories from the major sites, but did nothing to stop the flood of similar content from other people.)

The article identifies three reasons content moderation cannot work: scale, latency, and intent. Scale refers to the sheer volume–hundreds of millions of messages posted each day. Latency is the time it takes for even automated moderation to identify and remove a harmful message. The most important obstacle, however, is intent–a/k/a the platform’s business model.
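
Scale and latency compound each other in a way that back-of-the-envelope arithmetic makes vivid. Here is a minimal sketch in Python, using assumed, illustrative numbers (the essay cites no specific figures):

# Back-of-the-envelope: how many harmful posts are visible at any moment?
# All numbers below are assumptions for illustration, not platform data.
posts_per_day = 350_000_000        # assumed total posting volume
harmful_share = 0.001              # assume 0.1% of posts violate the rules
removal_latency_sec = 600          # assume 10 minutes to detect and remove

posts_per_sec = posts_per_day / 86_400
harmful_per_sec = posts_per_sec * harmful_share

# Little's law: posts "in flight" = arrival rate x time until removal
visible_now = harmful_per_sec * removal_latency_sec
print(f"{visible_now:,.0f} harmful posts live at any given moment")
# With these assumptions, roughly 2,400 -- each shareable while it is live.

Cutting the latency tenfold only cuts that number tenfold; at this volume, no feasible moderation delay shrinks the exposure window to zero. That is why McNamee treats intent–the business model–as the decisive obstacle.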

The content we want internet platforms to remove is the content most likely to keep people engaged and online–and that makes it exceptionally valuable to the platforms.

As a result, the rules for AI and human moderators are designed to approve as much content as possible. Alone among the three issues with moderation, intent can only be addressed with regulation.

McNamee argues we should not have to accept disinformation as the price of access, and he offers a remedy:

At present, platforms have no liability for the harms caused by their business model. Their algorithms will continue to amplify harmful content until there is an economic incentive to do otherwise. The solution is for Congress to change incentives by implementing an exception to the safe harbor of Section 230 of the Communications Decency Act for algorithm amplification of harmful content and guaranteeing a right to litigate against platforms for this harm. This solution does not impinge on first amendment rights, as platforms are free to continue their existing business practices, except with liability for harms.

I’m not sure I share McNamee’s belief that his solution doesn’t implicate the First Amendment.

The (relative) newness of the Internet and social media creates uncertainty. What, exactly, are these platforms? How should they be classified? They aren’t traditional publishers–and third parties’ posts aren’t their “speech.” 

As 2020 campaigns heat up, more attention is being paid to how Facebook promotes propaganda. Its refusal to remove or label clear lies from the Trump campaign has prompted advertisers to temporarily boycott the platform. Facebook may react by tightening some moderation, but ultimately, McNamee is right: that won’t solve the problem.

One more conundrum of our Brave New World…

Happy 4th!

The Era Of Disinformation

I know I’ve shared this story before, but it seems more relevant than ever. After publication of my first book (What’s a Nice Republican Girl Like Me Doing at the ACLU?), I was interviewed on a South Carolina radio call-in show. It turned out to be the Rush Limbaugh station, so listeners weren’t exactly sympathetic.

A caller challenged the ACLU’s opposition to the then-rampant efforts to post the Ten Commandments on government buildings. He informed me that James Madison had said “We are giving the Bill of Rights to people who follow the Ten Commandments.” When I responded that Madison scholars had debunked that “quotation” (a fabrication that had been circulating in rightwing echo chambers), and that, by the way, it was contrary to everything we knew Madison had said, he yelled “Well, I choose to believe it!” and hung up.

That caller’s misinformation–and his ability to indulge his confirmation bias–have been amplified enormously by the propaganda mills that litter the Internet. The New York Times recently ran articles about one such outlet, and the details are enough to chill your bones.

It may not be a household name, but few publications have had the reach, and potentially the influence, in American politics as The Western Journal.

Even the right-wing publication’s audience of more than 36 million people, eclipsing many of the nation’s largest news organizations, doesn’t know much about the company, or who’s behind it.

Thirty-six million readers–presumably, a lot like the caller who chose to believe what he wanted to believe.

The “good news”–sort of–is that Silicon Valley is making an effort to lessen its reach.

The site has struggled to maintain its audience through Facebook’s and Google’s algorithmic changes aimed at reducing disinformation — actions the site’s leaders see as evidence of political bias.

This is the question for our “Information Age”: what is the difference between an effort to protect fact-based information and political bias? And who should have the power to decide? As repulsive as this particular site appears to be, the line between legitimate information and “curated reality” is hard to define.

Here’s the lede for the Times investigative report on the site:

Each day, in an office outside Phoenix, a team of young writers and editors curates reality.

In the America presented on their news and opinion website, WesternJournal.com, tradition-minded patriots face ceaseless assault by anti-Christian bigots, diseased migrants and race hustlers concocting hate crimes. Danger and outrages loom. A Mexican politician threatens the “takeover” of several American states. Police officers are kicked out of an Arizona Starbucks. Kamala Harris, the Democratic presidential candidate, proposes a “$100 billion handout” for black families.

The report notes that the publication doesn’t bother with reporters. Nevertheless, it shapes the political beliefs of those 36 million readers– and in the last three years, its Facebook posts earned three-quarters of a billion shares, likes and comments, “almost as many as the combined tally of 10 leading American news organizations that together employ thousands of reporters and editors.”

The Western Journal rose on the forces that have remade — and warped — American politics, as activists, publishers and politicians harnessed social media’s power and reach to serve fine-tuned ideological content to an ever-agitated audience. Founded by the veteran conservative provocateur Floyd G. Brown, who began his career with the race-baiting “Willie Horton” ad during the 1988 presidential campaign, and run by his younger son, Patrick, The Western Journal uses misleading headlines and sensationalized stories to attract partisans, then profit from their anger.

But Silicon Valley’s efforts to crack down on clickbait and disinformation have pummeled traffic to The Western Journal and other partisan news sites. Some leading far-right figures have been kicked off social media platforms entirely, after violating rules against hate speech and incitement. Republican politicians and activists have alleged that the tech companies are unfairly censoring the right, threatening conservatives’ ability to sway public opinion and win elections.

In the U.S., only government can “censor” in violation of the First Amendment. But whether or not the exercise of their power legally counts as censorship, tech platforms have vast and growing power to determine what Americans see and read.

Most of my students get their news from social media. To say that the outcome (not to mention the sincerity) of Silicon Valley’s efforts to clean up cyberspace will determine what kind of world we inhabit isn’t hyperbole.