Tag Archives: social media

A Way Forward??

A recent column from the Boston Globe began with a paragraph that captures a discussion we’ve had numerous times on this blog.

Senator Daniel Patrick Moynihan once said, “Everyone is entitled to his own opinion, but not his own facts.” These days, though, two out of three Americans get their news from social media sites like Facebook, its subsidiary Instagram, Google’s YouTube, and Twitter. And these sites supply each of us with our own facts, showing conservatives mostly news liked by other conservatives, feeding liberals mostly liberal content.

The author, Josh Bernoff, explained why reimposing the Fairness Doctrine isn’t an option; that doctrine was a quid pro quo of sorts. It required certain behaviors in return for permission to use broadcast frequencies controlled by the government. It never applied to communications that didn’t use those frequencies–and there is no leverage that would allow government to require a broader application.

That said, policymakers are not entirely at the mercy of the social networking giants who have become the most significant purveyors of news and information–as well as propaganda and misinformation.

As the column points out, social media sites are making efforts–the author calls them “baby steps”–to control the worst content, like hate speech. But they’ve made only token efforts to alter the algorithms that generate clicks and profits by feeding users materials that increase involvement with the site. Unfortunately, those algorithms also intensify American tribalism.

These algorithms keep users on the site longer by sustaining their preferred worldviews, irrespective of the factual basis of those preferences–and thus far, social media sites have not been held accountable for the damage that causes.

Their shield is Section 230 of the Communications Decency Act. Section 230 is

a key part of US media regulation that enables social networks to operate profitably. It creates a liability shield so that sites like Facebook that host user-generated content can’t be held responsible for defamatory posts on their sites and apps. Without it, Facebook, Twitter, Instagram, YouTube, and similar sites would get sued every time some random poster said that Mike Pence was having an affair or their neighbor’s Christmas lights were part of a satanic ritual.

Removing the shield entirely isn’t the answer. Full repeal would drastically curb free expression–not just on social media, but in other places, like the comment sections of newspapers. But that doesn’t mean we can’t take a leaf from the Fairness Doctrine book, and make Section 230 a quid pro quo–something that could be done without eroding the protections of the First Amendment.

Historically, Supreme Court opinions regarding First Amendment protections for problematic speech have taken the position that the correct remedy is not shutting it down but stimulating “counterspeech.” Justice Oliver Wendell Holmes wrote in a 1919 opinion, “The ultimate good desired is better reached by free trade in ideas — that the best test of truth is the power of the thought to get itself accepted in the competition of the market.” And in 1927, Justice Louis Brandeis wrote, “If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence.”….

Last year, Facebook generated $70 billion in advertising revenue; YouTube, around $15 billion; and Twitter, $3 billion. Now the FCC should require them to set aside 10 percent of their total ad space to expose people to diverse sources of content. They would be required to show free ads for mainstream liberal news sources to conservatives, and ads for mainstream conservative news sites to liberals. (They already know who’s liberal and who’s conservative — how do you think they bias the news feed in the first place?) The result would be sort of a tax, paid in advertising, to compensate for the billions these companies make under the government’s generous Section 230 liability shield and counteract the toxicity of their algorithms.

Sounds good to me. 


Facebook, Disinformation And The First Amendment

These are tough times for Free Speech purists–of whom I am one.

I have always been persuaded by the arguments that support freedom of expression. In a genuine marketplace of ideas, I believe–okay, I want to believe–that better ideas will drive out worse ones. More compelling is the argument that, while some ideas may be truly dangerous, giving government the right to decide which ideas get expressed and which ones don’t would be much more dangerous.

But Facebook and other social media sites are really testing my allegiance to unfettered, unregulated–and amplified–expression. Recently, The Guardian reported that more than 3 million followers and members support the crazy QAnon conspiracy on Facebook, and their numbers are growing.

For those unfamiliar with QAnon, it

is a movement of people who interpret as a kind of gospel the online messages of an anonymous figure – “Q” – who claims knowledge of a secret cabal of powerful pedophiles and sex traffickers. Within the constructed reality of QAnon, Donald Trump is secretly waging a patriotic crusade against these “deep state” child abusers, and a “Great Awakening” that will reveal the truth is on the horizon.

Brian Friedberg, a senior researcher at the Harvard Shorenstein Center, is quoted as saying that Facebook is a “unique platform for recruitment and amplification,” and that he doubts QAnon would have been able to happen without the “affordances of Facebook.”

Facebook isn’t just providing a platform to QAnon groups–its algorithms are actively recommending them to users who may not otherwise have been exposed to them. And it isn’t only QAnon. According to the Wall Street Journal, Facebook’s own internal research in 2016 found that “64% of all extremist group joins are due to our recommendation tools.”

If the problem were limited to QAnon and other conspiracy theories, it would be troubling enough, but it isn’t. A recent essay by a Silicon Valley insider named Roger McNamee in Time Magazine began with an ominous paragraph:

If the United States wants to protect democracy and public health, it must acknowledge that internet platforms are causing great harm and accept that executives like Mark Zuckerberg are not sincere in their promises to do better. The “solutions” Facebook and others have proposed will not work. They are meant to distract us.

McNamee points to a predictable cycle: platforms are pressured to “do something” about harassment, disinformation or conspiracy theories. They respond by promising to improve their content moderation. But–as the essay points out–none have been successful at limiting the harm from third-party content, and so the cycle repeats. (As he notes, banning Alex Jones removed his conspiracy theories from the major sites, but did nothing to stop the flood of similar content from other people.)

The article identifies three reasons content moderation cannot work: scale, latency, and intent. Scale refers to the sheer volume–hundreds of millions of messages posted each day. Latency is the time it takes for even automated moderation to identify and remove a harmful message. The most important obstacle, however, is intent–a/k/a the platform’s business model.

The content we want internet platforms to remove is the content most likely to keep people engaged and online–and that makes it exceptionally valuable to the platforms.

As a result, the rules for AI and human moderators are designed to approve as much content as possible. Alone among the three issues with moderation, intent can only be addressed with regulation.

McNamee argues we should not have to accept disinformation as the price of access, and he offers a remedy:

At present, platforms have no liability for the harms caused by their business model. Their algorithms will continue to amplify harmful content until there is an economic incentive to do otherwise. The solution is for Congress to change incentives by implementing an exception to the safe harbor of Section 230 of the Communications Decency Act for algorithm amplification of harmful content and guaranteeing a right to litigate against platforms for this harm. This solution does not impinge on first amendment rights, as platforms are free to continue their existing business practices, except with liability for harms.

I’m not sure I share McNamee’s belief that his solution doesn’t implicate the First Amendment.

The (relative) newness of the Internet and social media creates uncertainty. What, exactly, are these platforms? How should they be classified? They aren’t traditional publishers–and third parties’ posts aren’t their “speech.” 

As 2020 campaigns heat up, more attention is being paid to how Facebook promotes propaganda. Its refusal to remove or label clear lies from the Trump campaign has prompted advertisers to temporarily boycott the platform. Facebook may react by tightening some moderation, but ultimately, McNamee is right: that won’t solve the problem.

One more conundrum of our Brave New World…

Happy 4th!


The Era Of Disinformation

I know I’ve shared this story before, but it seems more relevant than ever. After publication of my first book (What’s a Nice Republican Girl Like Me Doing at the ACLU?), I was interviewed on a South Carolina radio call-in show. It turned out to be the Rush Limbaugh station, so listeners weren’t exactly sympathetic.

A caller challenged the ACLU’s opposition to the then-rampant efforts to post the Ten Commandments on government buildings. He informed me that James Madison had said “We are giving the Bill of Rights to people who follow the Ten Commandments.” When I responded that Madison scholars had debunked that “quotation” (a fabrication that had been circulating in right-wing echo chambers), and that, by the way, it was contrary to everything we knew Madison had said, he yelled “Well, I choose to believe it!” and hung up.

That caller’s misinformation–and his ability to indulge his confirmation bias–have been amplified enormously by the propaganda mills that litter the Internet. The New York Times recently ran articles about one such outlet, and the details are enough to chill your bones.

It may not be a household name, but few publications have had the reach, and potentially the influence, in American politics as The Western Journal.

Even the right-wing publication’s audience of more than 36 million people, eclipsing many of the nation’s largest news organizations, doesn’t know much about the company, or who’s behind it.

Thirty-six million readers–presumably, a lot like the caller who chose to believe what he wanted to believe.

The “good news”–sort of–is that Silicon Valley is making an effort to lessen its reach.

The site has struggled to maintain its audience through Facebook’s and Google’s algorithmic changes aimed at reducing disinformation — actions the site’s leaders see as evidence of political bias.

This is the question for our “Information Age”–what is the difference between an effort to protect fact-based information and political bias? And who should have the power to decide? As repulsive as this particular site appears to be, the line between legitimate information and “curated reality” is hard to define.

Here’s the lede for the Times investigative report on the site:

Each day, in an office outside Phoenix, a team of young writers and editors curates reality.

In the America presented on their news and opinion website, WesternJournal.com, tradition-minded patriots face ceaseless assault by anti-Christian bigots, diseased migrants and race hustlers concocting hate crimes. Danger and outrages loom. A Mexican politician threatens the “takeover” of several American states. Police officers are kicked out of an Arizona Starbucks. Kamala Harris, the Democratic presidential candidate, proposes a “$100 billion handout” for black families.

The report notes that the publication doesn’t bother with reporters. Nevertheless, it shapes the political beliefs of those 36 million readers– and in the last three years, its Facebook posts earned three-quarters of a billion shares, likes and comments, “almost as many as the combined tally of 10 leading American news organizations that together employ thousands of reporters and editors.”

The Western Journal rose on the forces that have remade — and warped — American politics, as activists, publishers and politicians harnessed social media’s power and reach to serve fine-tuned ideological content to an ever-agitated audience. Founded by the veteran conservative provocateur Floyd G. Brown, who began his career with the race-baiting “Willie Horton” ad during the 1988 presidential campaign, and run by his younger son, Patrick, The Western Journal uses misleading headlines and sensationalized stories to attract partisans, then profit from their anger.

But Silicon Valley’s efforts to crack down on clickbait and disinformation have pummeled traffic to The Western Journal and other partisan news sites. Some leading far-right figures have been kicked off social media platforms entirely, after violating rules against hate speech and incitement. Republican politicians and activists have alleged that the tech companies are unfairly censoring the right, threatening conservatives’ ability to sway public opinion and win elections.

In the U.S., only government can “censor” in violation of the First Amendment. But whether or not their exercise of power is legally considered censorship, tech platforms have vast power over–and will increasingly determine–what Americans see and read.

Most of my students get their news from social media. To say that the outcome (not to mention the sincerity) of Silicon Valley’s efforts to clean up cyberspace will determine what kind of world we inhabit isn’t hyperbole.


The New Censorship

One of the many causes of increased tribalism and chaos worldwide is the unprecedented nature of the information environment we inhabit. A quote from Yuval Noah Harari’s Homo Deus is instructive–

In the past, censorship worked by blocking the flow of information. In the twenty-first century, censorship works by flooding people with irrelevant information.

We are only dimly beginning to understand the nature of the threat posed by the mountains of “information” with which we are inundated. Various organizations are mounting efforts to fight that threat–to increase news literacy and control disinformation– with results that are thus far imperceptible.

The Brookings Institution has engaged in one of those efforts; it has a series on Cybersecurity and Election Interference, and in a recent report, offered four steps to “stop the spread of disinformation.” The linked report begins by making an important point about the actual targets of such disinformation.

The public discussion of disinformation often focuses on targeted candidates, without recognizing that disinformation actually targets voters. In the case of elections, actors both foreign and domestic are trying to influence whether or not you as an individual vote, and for whom to cast your ballot. The effort goes farther than elections: it is about the information on whether to vaccinate children or boycott the NFL. What started with foreign adversaries now includes domestic groups, all fighting for control over what you believe to be true.

The report also recognizes that the preservation of democratic and economic institutions in the digital era will ultimately depend on efforts to control disinformation by government and the various platforms on which it is disseminated. Since the nature of the necessary action is not yet clear–so far as I can tell, we don’t have a clue how to accomplish this–Brookings says that the general public needs to make itself less susceptible, and its report offers four ways to accomplish that.

You’ll forgive me if I am skeptical of the ability/desire of most Americans to follow their advice, but for what it is worth, here are the steps they advocate:

Know your algorithm
Get to know your own social media feed and algorithm, because disinformation targets us based on our online behavior and our biases. Platforms cater information to you based on what you stop to read, engage with, and send to friends. This information is then accessible to advertisers and can be manipulated by those who know how to do so, in order to target you based on your past behavior. The result is we are only seeing information that an algorithm thinks we want to consume, which could be biased and distorted.

Retrain your newsfeed
Once you have gotten to know your algorithm, you can change it to start seeing other points of view. Repeatedly seek out reputable sources of information that typically cater to viewpoints different than your own, and begin to see that information occur in your newsfeed organically.

Scrutinize your news sources
Start consuming information from social media critically. Social media is more than a news digest—it is social, and it is media. We often scroll through passively, absorbing a combination of personal updates from friends and family—and if you are among the two-thirds of Americans who report consuming news on social media—you are passively scrolling through news stories as well. A more critical eye to the information in your feed and being able to look for key indicators of whether or not news is timely and accurate, such as the source and the publication date, is incredibly important.

Consider not sharing
Finally, think before you share. If you think that a “news” article seems too sensational or extreme to be true, it probably is. By not sharing, you are stopping the flow of disinformation and falsehoods from getting across to your friends and network. While the general public cannot be relied upon to solve this problem alone, it is imperative that we start doing our part to stop this phenomenon. It is time to stop waiting for someone to save us from disinformation, and to start saving ourselves.

All good advice. Why do I think the people who most need to follow it, won’t?

Journalism Declines And Scandals Rise

I know I harp a lot on the importance of accurate, credible journalism–especially at the local level–but it really is that important.

Believe it or not, the ongoing scandals in Virginia, which have embroiled the top three state officeholders, are illustrations of what happens when local coverage goes missing.

As Amanda Marcotte observed in Salon, 

The Virginia scandal is a reflection of a larger trend where politics will be driven more and more by revelations, gotcha moments and resulting scandals. The decline in robust, in-depth journalism, particularly on the local level — coupled with the rise of social media and well-funded partisan opposition research — is creating an atmosphere where political scandals, legitimate or not, will increasingly dominate politics and media.

“You have this degradation of resources in local journalism, which has been going on for a while now,” said Joshua Benton, director of the Nieman Journalism Lab, which is currently offering a fellowship for local investigative journalism. “You also have this counterpart, which is that it’s easier than before for opposition researchers on all sides to dig up dirt of this sort.”

Benton explained that the decline in local journalism allows politicians in the early stages of their careers, when they are likely to be running for school board or city council, to escape the scrutiny they would previously have gotten from the relevant local media.

Philip Napoli, a professor at Duke University’s Sanford School of Public Policy, added that this trend has coincided with another, “the rise of social media and the ways that political candidates are able to communicate with their constituencies directly” and present a version of themselves that’s more to their own liking.

The result is that politicians simply don’t get the vetting they might once have received as they climb the career ladder from smaller offices to statewide and even national offices. Red flags that might have been noticed before a politician reached a position of significant power get overlooked, because local papers simply don’t have the resources to catch them.

The decline in local coverage has coincided with the rise of partisan outlets–not just national networks like Fox and Sinclair, but local talk radio and blogs less concerned with accuracy than with scoring points. Add to that the gift of the internet–the wealth of materials that vigorous opposition research can now unearth–and you have a recipe for ongoing scandals appearing at extremely “inconvenient” times in politicians’ careers.

In the “old model,” Benton said, people who wanted to share damning information like sexual assault allegations or past episodes of racist conduct would “go to a reporter and hand him or her the documents or the evidence,” and that reporter would “determine whether the information that’s being handed to them is correct or not.”

“Now, increasingly you can just post it online and skip that step in the process,” he added. So questions about whether the information is true and legitimately newsworthy don’t get answered in advance.

It appears that the Virginia accusations are all true, although the stories were “broken” by a sleazy partisan web site. But in other cases, innocent parties and organizations sustained real (and sometimes permanent) damage before manufactured allegations could be debunked. Remember when Breitbart accused the nonprofit ACORN of being involved in sex trafficking? Its story was entirely false, but it led to the group’s collapse. A doctored video was used to accuse Planned Parenthood of selling “baby parts” from aborted fetuses and was gleefully spread far and wide. It was later shown to be part of the ongoing, deceptive effort to convince lawmakers to stop funding Planned Parenthood, but pro-life groups continue to cite it as “evidence” of the organization’s evil doings.

In the absence of adequate, reliable reporting, conspiracy theories and partisan invention will fill the void. And citizens won’t know what they can and can’t believe.

The problem is national, but far more prevalent at the local level.