Tag Archives: social media

Facebook, Disinformation And The First Amendment

These are tough times for Free Speech purists–of whom I am one.

I have always been persuaded by the arguments that support freedom of expression. In a genuine marketplace of ideas, I believe–okay, I want to believe–that better ideas will drive out worse ones. More compelling is the argument that, while some ideas may be truly dangerous, giving government the right to decide which ideas get expressed and which ones don’t would be much more dangerous.

But Facebook and other social media sites are really testing my allegiance to unfettered, unregulated–and amplified–expression. Recently, The Guardian reported that more than 3 million followers and members support the crazy QAnon conspiracy theory on Facebook, and their numbers are growing.

For those unfamiliar with QAnon, it

is a movement of people who interpret as a kind of gospel the online messages of an anonymous figure – “Q” – who claims knowledge of a secret cabal of powerful pedophiles and sex traffickers. Within the constructed reality of QAnon, Donald Trump is secretly waging a patriotic crusade against these “deep state” child abusers, and a “Great Awakening” that will reveal the truth is on the horizon.

Brian Friedberg, a senior researcher at the Harvard Shorenstein Center, is quoted as saying that Facebook is a “unique platform for recruitment and amplification,” and that he doubts QAnon would have been able to happen without the “affordances of Facebook.”

Facebook isn’t just providing a platform to QAnon groups–its algorithms are actively recommending them to users who may not otherwise have been exposed to them. And it isn’t only QAnon. According to the Wall Street Journal, Facebook’s own internal research in 2016 found that “64% of all extremist group joins are due to our recommendation tools.”

If the problem were limited to QAnon and other conspiracy theories, it would be troubling enough, but it isn’t. A recent essay in Time Magazine by Silicon Valley insider Roger McNamee began with an ominous paragraph:

If the United States wants to protect democracy and public health, it must acknowledge that internet platforms are causing great harm and accept that executives like Mark Zuckerberg are not sincere in their promises to do better. The “solutions” Facebook and others have proposed will not work. They are meant to distract us.

McNamee points to a predictable cycle: platforms are pressured to “do something” about harassment, disinformation or conspiracy theories. They respond by promising to improve their content moderation. But–as the essay points out–none has been successful at limiting the harm from third-party content, and so the cycle repeats. (As he notes, banning Alex Jones removed his conspiracy theories from the major sites, but did nothing to stop the flood of similar content from other people.)

The article identifies three reasons content moderation cannot work: scale, latency, and intent. Scale refers to the sheer volume of content–hundreds of millions of messages posted each day. Latency is the time it takes for even automated moderation to identify and remove a harmful message. The most important obstacle, however, is intent–a/k/a the platform’s business model.

The content we want internet platforms to remove is the content most likely to keep people engaged and online–and that makes it exceptionally valuable to the platforms.

As a result, the rules for AI and human moderators are designed to approve as much content as possible. Alone among the three issues with moderation, intent can only be addressed with regulation.

McNamee argues we should not have to accept disinformation as the price of access, and he offers a remedy:

At present, platforms have no liability for the harms caused by their business model. Their algorithms will continue to amplify harmful content until there is an economic incentive to do otherwise. The solution is for Congress to change incentives by implementing an exception to the safe harbor of Section 230 of the Communications Decency Act for algorithm amplification of harmful content and guaranteeing a right to litigate against platforms for this harm. This solution does not impinge on first amendment rights, as platforms are free to continue their existing business practices, except with liability for harms.

I’m not sure I share McNamee’s belief that his solution doesn’t implicate the First Amendment.

The (relative) newness of the Internet and social media creates uncertainty. What, exactly, are these platforms? How should they be classified? They aren’t traditional publishers–and third parties’ posts aren’t their “speech.” 

As the 2020 campaigns heat up, more attention is being paid to how Facebook promotes propaganda. Its refusal to remove or label clear lies from the Trump campaign has prompted advertisers to temporarily boycott the platform. Facebook may react by tightening some moderation, but ultimately, McNamee is right: that won’t solve the problem.

One more conundrum of our Brave New World…

Happy 4th!

 

The Era Of Disinformation

I know I’ve shared this story before, but it seems more relevant than ever. After publication of my first book (What’s a Nice Republican Girl Like Me Doing at the ACLU?), I was interviewed on a South Carolina radio call-in show. It turned out to be the Rush Limbaugh station, so listeners weren’t exactly sympathetic.

A caller challenged the ACLU’s opposition to the then-rampant efforts to post the Ten Commandments on government buildings. He informed me that James Madison had said “We are giving the Bill of Rights to people who follow the Ten Commandments.” When I responded that Madison scholars had debunked that “quotation” (a fabrication that had been circulating in right-wing echo chambers), and that, by the way, it was contrary to everything we knew Madison had said, he yelled “Well, I choose to believe it!” and hung up.

That caller’s misinformation–and his ability to indulge his confirmation bias–have been amplified enormously by the propaganda mills that litter the Internet. The New York Times recently ran articles about one such outlet, and the details are enough to chill your bones.

It may not be a household name, but few publications have had the reach, and potentially the influence, in American politics as The Western Journal.

Even the right-wing publication’s audience of more than 36 million people, eclipsing many of the nation’s largest news organizations, doesn’t know much about the company, or who’s behind it.

Thirty-six million readers–presumably, a lot like the caller who chose to believe what he wanted to believe.

The “good news”–sort of–is that Silicon Valley is making an effort to lessen its reach.

The site has struggled to maintain its audience through Facebook’s and Google’s algorithmic changes aimed at reducing disinformation — actions the site’s leaders see as evidence of political bias.

This is the question for our “Information Age”–what is the difference between an effort to protect fact-based information and political bias? And who should have the power to decide? As repulsive as this particular site appears to be, the line between legitimate information and “curated reality” is hard to define.

Here’s the lede for the Times investigative report on the site:

Each day, in an office outside Phoenix, a team of young writers and editors curates reality.

In the America presented on their news and opinion website, WesternJournal.com, tradition-minded patriots face ceaseless assault by anti-Christian bigots, diseased migrants and race hustlers concocting hate crimes. Danger and outrages loom. A Mexican politician threatens the “takeover” of several American states. Police officers are kicked out of an Arizona Starbucks. Kamala Harris, the Democratic presidential candidate, proposes a “$100 billion handout” for black families.

The report notes that the publication doesn’t bother with reporters. Nevertheless, it shapes the political beliefs of those 36 million readers– and in the last three years, its Facebook posts earned three-quarters of a billion shares, likes and comments, “almost as many as the combined tally of 10 leading American news organizations that together employ thousands of reporters and editors.”

The Western Journal rose on the forces that have remade — and warped — American politics, as activists, publishers and politicians harnessed social media’s power and reach to serve fine-tuned ideological content to an ever-agitated audience. Founded by the veteran conservative provocateur Floyd G. Brown, who began his career with the race-baiting “Willie Horton” ad during the 1988 presidential campaign, and run by his younger son, Patrick, The Western Journal uses misleading headlines and sensationalized stories to attract partisans, then profit from their anger.

But Silicon Valley’s efforts to crack down on clickbait and disinformation have pummeled traffic to The Western Journal and other partisan news sites. Some leading far-right figures have been kicked off social media platforms entirely, after violating rules against hate speech and incitement. Republican politicians and activists have alleged that the tech companies are unfairly censoring the right, threatening conservatives’ ability to sway public opinion and win elections.

In the U.S., only government can “censor” in violation of the First Amendment. But whether or not the exercise of their power is legally considered censorship, tech platforms have vast power over–and will increasingly determine–what Americans see and read.

Most of my students get their news from social media. To say that the outcome (not to mention the sincerity) of Silicon Valley’s efforts to clean up cyberspace will determine what kind of world we inhabit isn’t hyperbole.

 

The New Censorship

One of the many causes of increased tribalism and chaos worldwide is the unprecedented nature of the information environment we inhabit. A quote from Yuval Noah Harari’s Homo Deus is instructive:

In the past, censorship worked by blocking the flow of information. In the twenty-first century, censorship works by flooding people with irrelevant information.

We are only dimly beginning to understand the nature of the threat posed by the mountains of “information” with which we are inundated. Various organizations are mounting efforts to fight that threat–to increase news literacy and control disinformation– with results that are thus far imperceptible.

The Brookings Institution has engaged in one of those efforts; it has a series on Cybersecurity and Election Interference, and in a recent report, offered four steps to “stop the spread of disinformation.” The linked report begins by making an important point about the actual targets of such disinformation.

The public discussion of disinformation often focuses on targeted candidates, without recognizing that disinformation actually targets voters. In the case of elections, actors both foreign and domestic are trying to influence whether or not you as an individual vote, and for whom to cast your ballot. The effort goes farther than elections: it is about the information on whether to vaccinate children or boycott the NFL. What started with foreign adversaries now includes domestic groups, all fighting for control over what you believe to be true.

The report also recognizes that the preservation of democratic and economic institutions in the digital era will ultimately depend on efforts to control disinformation by government and the various platforms on which it is disseminated. Since the nature of the necessary action is not yet clear–so far as I can tell, we don’t have a clue how to accomplish this–Brookings says that the general public needs to make itself less susceptible, and its report offers four ways to accomplish that.

You’ll forgive me if I am skeptical of the ability/desire of most Americans to follow their advice, but for what it is worth, here are the steps they advocate:

Know your algorithm
Get to know your own social media feed and algorithm, because disinformation targets us based on our online behavior and our biases. Platforms cater information to you based on what you stop to read, engage with, and send to friends. This information is then accessible to advertisers and can be manipulated by those who know how to do so, in order to target you based on your past behavior. The result is we are only seeing information that an algorithm thinks we want to consume, which could be biased and distorted.

Retrain your newsfeed
Once you have gotten to know your algorithm, you can change it to start seeing other points of view. Repeatedly seek out reputable sources of information that typically cater to viewpoints different than your own, and begin to see that information occur in your newsfeed organically.

Scrutinize your news sources
Start consuming information from social media critically. Social media is more than a news digest—it is social, and it is media. We often scroll through passively, absorbing a combination of personal updates from friends and family—and if you are among the two-thirds of Americans who report consuming news on social media—you are passively scrolling through news stories as well. A more critical eye to the information in your feed and being able to look for key indicators of whether or not news is timely and accurate, such as the source and the publication date, is incredibly important.

Consider not sharing
Finally, think before you share. If you think that a “news” article seems too sensational or extreme to be true, it probably is. By not sharing, you are stopping the flow of disinformation and falsehoods from getting across to your friends and network. While the general public cannot be relied upon to solve this problem alone, it is imperative that we start doing our part to stop this phenomenon. It is time to stop waiting for someone to save us from disinformation, and to start saving ourselves.
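The first two steps–“know your algorithm” and “retrain your newsfeed”–describe an engagement-driven feedback loop: the feed boosts whatever you have lingered on before. As a rough illustration of why that drift happens, here is a toy Python sketch of an engagement-weighted ranker. The post data, topic tags, and scoring rule are all invented for illustration; no platform’s actual algorithm is this simple or this transparent.

```python
# Toy model of an engagement-weighted feed ranker (illustrative only).
# Each post carries topic tags; the "algorithm" boosts topics the user
# has engaged with before, so the feed drifts toward past behavior.

from collections import Counter

def rank_feed(posts, engagement_history):
    """Order posts by overlap with topics the user previously engaged with."""
    topic_weights = Counter(engagement_history)  # e.g. {"politics": 2, ...}

    def score(post):
        # Counter returns 0 for unseen topics, so new subjects start at the bottom.
        return sum(topic_weights[t] for t in post["topics"])

    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "topics": ["gardening"]},
    {"id": 2, "topics": ["politics", "outrage"]},
    {"id": 3, "topics": ["politics"]},
]

# A user who has repeatedly engaged with political/outrage content
# sees that content ranked first -- and the gardening post last.
ranked = rank_feed(posts, ["politics", "politics", "outrage"])
print([p["id"] for p in ranked])  # [2, 3, 1]
```

In this toy model, “retraining” the feed is nothing more than changing the history you feed the ranker–deliberately engage with other topics and the ordering flips–which is essentially what the Brookings advice asks readers to do.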

All good advice. Why do I think the people who most need to follow it, won’t?

Journalism Declines And Scandals Rise

I know I harp a lot on the importance of accurate, credible journalism–especially at the local level–but it is really, really important.

Believe it or not, the ongoing scandals in Virginia, which have embroiled the top three state officeholders, are illustrations of what happens when local coverage goes missing.

As Amanda Marcotte observed in Salon, 

The Virginia scandal is a reflection of a larger trend where politics will be driven more and more by revelations, gotcha moments and resulting scandals. The decline in robust, in-depth journalism, particularly on the local level — coupled with the rise of social media and well-funded partisan opposition research — is creating an atmosphere where political scandals, legitimate or not, will increasingly dominate politics and media.

“You have this degradation of resources in local journalism, which has been going on for a while now,” said Joshua Benton, director of the Nieman Journalism Lab, which is currently offering a fellowship for local investigative journalism. “You also have this counterpart, which is that it’s easier than before for opposition researchers on all sides to dig up dirt of this sort.”

Benton explained that the decline in local journalism allows politicians in the early stages of their careers, when they are likely to be running for school board or city council, to escape the scrutiny they would previously have gotten from the relevant local media.

Philip Napoli, a professor at Duke University’s Sanford School of Public Policy, added that this trend has coincided with another, “the rise of social media and the ways that political candidates are able to communicate with their constituencies directly” and present a version of themselves that’s more to their own liking.

The result is that politicians simply don’t get the vetting they might once have received as they climb the career ladder from smaller offices to statewide and even national offices. Red flags that might have been noticed before a politician reached a position of significant power get overlooked, because local papers simply don’t have the resources to catch them.

The decline in local coverage has coincided with the rise of partisan outlets– not just national networks like Fox and Sinclair, but local talk radio and blogs less concerned with accuracy than with scoring points.  Add to that the gift of the internet– the wealth of materials that vigorous opposition research can now unearth– and you have a recipe for ongoing scandals appearing at extremely “inconvenient” times in politicians’ careers.

In the “old model,” Benton said, people who wanted to share damning information like sexual assault allegations or past episodes of racist conduct would “go to a reporter and hand him or her the documents or the evidence,” and that reporter would “determine whether the information that’s being handed to them is correct or not.”

 “Now, increasingly you can just post it online and skip that step in the process,” he added. So questions about whether the information is true and legitimately newsworthy don’t get answered in advance.

It appears that the Virginia accusations are all true, although the stories were “broken” by a sleazy partisan web site. But in other cases, innocent parties and organizations sustained real (and sometimes permanent) damage before manufactured allegations could be debunked. Remember when Breitbart accused the nonprofit ACORN of being involved in sex trafficking? Its story was entirely false, but it led to the group’s collapse. A doctored video was used to accuse Planned Parenthood of selling “baby parts” from aborted fetuses and was gleefully spread far and wide. It was later shown to be part of the ongoing, deceptive effort to convince lawmakers to stop funding Planned Parenthood, but pro-life groups continue to cite it as “evidence” of the organization’s evil doings.

In the absence of adequate, reliable reporting, conspiracy theories and partisan invention will fill the void. And citizens won’t know what they can and can’t believe.

The problem is national, but far more prevalent at the local level.

Brave New World

As the reporting about Cambridge Analytica’s sophisticated propaganda campaign suggests, we humans are far more “manipulatable” than we like to think–and Huxley was wrong to predict that it would require drugs (remember Soma?) to pacify or mislead us.

The linked article by two Harvard University researchers suggests that the discovery of this political operation raises the stakes of our ongoing concerns about the impact of digital technology on democracy.

There was already a debate raging about how targeted digital ads and messages from campaigns, partisan propagandists and even Russian agents were sowing outrage and division in the U.S. electorate. Now it appears that Cambridge Analytica took it one step farther, using highly sensitive personal data taken from Facebook users without their knowledge to manipulate them into supporting Donald Trump. This scandal raises major questions about how this could have happened, how it can be stopped and whether the connection between data-driven ads and democracy is fundamentally toxic.

It also raises concerns about the new ability of political operatives, armed with the results of political psychology research, to identify and prey on voters’ vulnerabilities. Extensive personal data amassed through social media platforms–especially Facebook–can be used to manipulate voters and distort democratic debate. Cambridge Analytica exploited that ability on behalf of the Trump campaign.

We’ve come a long, long way from the days when we collectively received our news from a mass media. Instead, we now have what a scholar once predicted and dubbed “the daily me,” information (and disinformation) that feeds a personalized reality–Eli Pariser’s “filter bubble”–that isn’t necessarily shared with others.

On the internet, you don’t know much about the political ads you’re shown. You often don’t know who is creating them, since the disclaimers are so small, if they exist at all. You also don’t really know who else is seeing them. Sure, you can share a political ad — thus fulfilling the advertiser’s hopes — and then at least some other people you know will have witnessed the same ad. But you don’t really know if your neighbor has seen it, let alone someone else across the state or the country. In addition, digital advertising companies distribute ads based on how likely you are to interact with them. This most often means that they send you ads they think you are likeliest to engage with. They don’t determine what the nature of that engaging content might be — but they know (just as all advertisers do) that content works well if it makes you very emotional. An ad like that doesn’t make you contemplative or curious, it makes you elated, excited, sad or angry. It could make you so angry, in fact, that you’ll share it and make others angry — which in turn gives the ad free publicity, effectively making the advertiser’s purchase cheaper per viewer, since they pay for the initial outreach and not the shares.

What this can lead to is communities and, eventually, a nation infuriated by things others don’t know about. The information that makes us angriest becomes the information least likely to be questioned. We wind up stewing over things that, by design, few others can correct, engage with or learn from. A Jeffersonian public square where lots of viewpoints go to mingle, debate and compromise, this is not.
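The economics the researchers describe–advertisers pay for the initial outreach, not the shares–can be made concrete with a little arithmetic. In this sketch, every number is invented for illustration, and sharing is simplified to a single hop (in reality shares cascade); the point is only the direction of the incentive: the angrier the ad, the cheaper each viewer becomes.

```python
def effective_cost_per_viewer(spend, paid_impressions, share_rate, viewers_per_share):
    """Cost per viewer when organic shares add free reach on top of paid impressions."""
    organic_viewers = paid_impressions * share_rate * viewers_per_share
    total_viewers = paid_impressions + organic_viewers
    return spend / total_viewers

# $1,000 buys 100,000 paid impressions: 1 cent per viewer with no sharing.
calm_ad = effective_cost_per_viewer(1000, 100_000, 0.0, 0)

# An enraging ad that 5% of viewers share to ~20 friends doubles the
# audience for the same money: half a cent per viewer.
angry_ad = effective_cost_per_viewer(1000, 100_000, 0.05, 20)

print(calm_ad, angry_ad)  # 0.01 0.005
```

Under these made-up numbers, outrage literally halves the advertiser’s cost per viewer–which is why, as the authors note, provocation is rewarded and “morality and integrity count little in online advertising.”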

As the authors note, none of this means that Facebook and Twitter intentionally undermined Hillary Clinton. It’s much worse, because the technology that powers social media uses the personal data to which they become privy to divide the American population and then feed us “highly personalized messages designed to push our particular buttons so well that we share them and they go viral, thus keeping people on the site longer.”

Social media rewards provocation — again, without repercussion, since we usually only share content with our friends in a way that is largely invisible to the broader public. Morality and integrity count little in online advertising.

The real question here isn’t which campaign got the advantage. The real question is whether this micro-targeted free-for-all should be allowed in the political sphere at all in the way it is currently designed —with very little transparency about who is pulling these strings and how they are doing it.

We truly do inhabit a new world. I don’t know how brave it is.