Tag Archives: propaganda

Why Language Matters…

On the most basic level, language matters because the ability to use words accurately to convey one’s meaning is a critically important skill in modern society.

And let’s be honest: we assess the probable intelligence of the people we meet based largely on their use of language. That isn’t simply snobbery–fuzzy language more often than not signals fuzzy thinking.

An individual’s use of language is a reasonably reliable clue to that person’s conceptual agility.

Those of us who are unimpressed with Donald Trump’s repeated assertion that he is “like really, really smart” often point to his lack of language skills. Newsweek recently compared the vocabularies of the last 15 U.S. Presidents, and ranked Trump at the very bottom.

President Donald Trump—who boasted over the weekend that his success in life was a result of “being, like, really smart”—communicates at the lowest grade level of the last 15 presidents, according to a new analysis of the speech patterns of presidents going back to Herbert Hoover….

“By every metric and methodology tested, Donald Trump’s vocabulary and grammatical structure is significantly more simple, and less diverse, than any President since Herbert Hoover, when measuring “off-script” words, that is, words far less likely to have been written in advance for the speaker,” Factba.se CEO Bill Frischling wrote. “The gap between Trump and the next closest president … is larger than any other gap using Flesch-Kincaid. Statistically speaking, there is a significant gap.”
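Flesch-Kincaid, the metric cited in the analysis, is just a formula over average sentence length and average syllables per word. Here is a minimal sketch; the syllable counter is a deliberately naive heuristic (real readability tools use pronunciation dictionaries), so scores will differ slightly from tools like Factba.se’s.

```python
import re

def count_syllables(word):
    # Naive heuristic: count runs of consecutive vowels.
    # Real readability tools use pronunciation dictionaries or better rules.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    # Flesch-Kincaid grade level:
    #   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# Short, simple sentences score at roughly a third-grade level:
print(round(flesch_kincaid_grade("I am very smart. Believe me."), 2))  # prints 3.28
```

Notice how the formula rewards longer sentences and longer words: “off-script” speech full of short declaratives will always land near the bottom of the grade scale.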

Of course, it’s also true that genuinely bright people rarely find it necessary to tell people how smart they are…

Effective propaganda requires the manipulation of language, and that’s another reason to be alert to its use. Trump’s former consigliere, Steve Bannon, clearly understands that in order to change social attitudes, it is necessary to change reactions to certain words. As a recent, fascinating opinion piece in the New York Times recounts,

In a speech last weekend in France, Stephen Bannon, the former top adviser to President Trump, urged an audience of far-right National Front Party members to “let them call you racists, let them call you xenophobes.” He went on: “Let them call you nativists. Wear it as a badge of honor.”

The author notes that this is a departure from the usual “dog whistle” approach taken by racists and xenophobes–Trump’s constant references to immigrants as criminals, for example, or the traditional, negative euphemisms for Jews and blacks. Bannon wants to eliminate the pretense, and change our reaction to words that convey straightforward bigotry.

Bannon is urging the adoption of an irrational bias against racial minorities, immigrants and foreigners, one that does not require reasons, even bad ones, to support it. And he recommends presenting such irrationality as virtuous….

But taking Bannon’s advice also requires rejecting any recognizable practice of giving plausible reasons for holding a view or position. To proudly identify as a xenophobe is to identify as someone who is not interested in argument. It is to be irrationally fearful of foreigners, and proudly so. It means not masking one’s irrationality even from oneself.

Bannon’s rhetorical move of transforming vices based on irrational prejudice into virtues is not without historical precedent. Hitler devotes the second chapter of “Mein Kampf” to explaining how his time in Vienna as a young man transformed him into a “fanatical anti-Semite.” …. Such fanatical irrationality is, in Hitler’s rhetoric, virtuous.

Of course, rhetoric and policies are two different things. No recent far-right movement in Europe or the United States has enacted the sort of genocidal policies that the Nazis did, and no such comparison is intended. But history has shown that the sort of subversion of language that Bannon has engaged in is often deeply intertwined with what a government will do, and what its people will allow. Bannon’s own cheer to the National Front members — “The tide of history is with us and it will compel us to victory after victory after victory” — shows clearly enough that he does not mean his efforts to end in mere speech.

Performing such inversions is an attempt to change the ideologies and behaviors of large groups of people. It is done to legitimate extreme, inhumane treatment of minority populations (or perhaps, to render such treatment no longer in need of legitimation). In this country, we are familiar with it from the criminal justice system’s treatment of black Americans, in some of the “get tough on crime” rhetoric that fed racialized mass incarceration in Northern cities, or the open racism sometimes connected to Southern white identity or “heritage.” Its aim is to create a population seeking leaders who are utterly ruthless and cruel, intolerant, irrational and unyielding in the face of challenges to the cultural and political dominance of the majority racial or religious group. It normalizes fascism.

Remember “sticks and stones may break my bones, but words can never hurt me”? It was wrong.

Language matters.

Computational Propaganda, Part Two

After each new Trump travesty, my friends and family have taken to asking each other the same questions: “Who the hell could still support this buffoon? How stupid would someone have to be to drink this particular Kool-aid?”

A recent study conducted by Oxford University apparently answers that (not-so-rhetorical) question.

Low-quality, extremist, sensationalist and conspiratorial news published in the US was overwhelmingly consumed and shared by rightwing social network users, according to a new study from the University of Oxford.

The study, from the university’s “computational propaganda project”, looked at the most significant sources of “junk news” shared in the three months leading up to Donald Trump’s first State of the Union address this January, and tried to find out who was sharing them and why.

“On Twitter, a network of Trump supporters consumes the largest volume of junk news, and junk news is the largest proportion of news links they share,” the researchers concluded. On Facebook, the skew was even greater. There, “extreme hard right pages – distinct from Republican pages – share more junk news than all the other audiences put together.”

The researchers monitored 13,500 politically-active US Twitter users, and a separate group of 48,000 public Facebook pages, and looked at the external websites that they were sharing.

The findings speak to the level of polarisation common across the US political divide. “The two main political parties, Democrats and Republicans, prefer different sources of political news, with limited overlap,” the researchers write.

The study did not find a high level of Russian penetration of social media, but it did identify clear political preferences among those who consumed junk news.

But there was a clear skew in who shared links from the 91 sites the researchers had manually coded as “junk news” (based on breaching at least three of five quality standards including “professionalism”, “bias” and “credibility”). “The Trump Support group consumes the highest volume of junk news sources on Twitter, and spreads more junk news sources, than all the other groups put together. This pattern is repeated on Facebook, where the Hard Conservatives group consumed the highest proportion of junk news.”
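The researchers’ coding rule is simple enough to state as a predicate: a source counts as “junk news” when it breaches at least three of five quality standards. A minimal sketch of that rule; only three of the five standard names (“professionalism”, “bias”, “credibility”) appear in the excerpt above, so the function just counts breaches rather than enumerating all five.

```python
def is_junk_news(breached_standards):
    # The Oxford team manually coded 91 sites as "junk news" when a site
    # breached at least 3 of 5 quality standards. The excerpt names three
    # of them ("professionalism", "bias", "credibility"); this sketch
    # simply applies the threshold to whatever set of breaches is passed in.
    return len(set(breached_standards)) >= 3
```

For example, a site failing on professionalism, bias, and credibility would be coded as junk, while a site failing only on bias would not. The hard (and contestable) part of the methodology is, of course, the manual judgment of which standards a site breaches, not the threshold itself.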

There has always been a credulous segment of the American public; given our embarrassingly low levels of civic literacy, it shouldn’t surprise us that a percentage of voters unhappy with their position in the polity would “choose the news” that confirmed their biases. As a colleague of mine recently wrote (citations omitted),

The flourishing of scientific polling and the increased sophistication of social science research methods have provided scholars with an opportunity to put these concerns to the test, and the results have largely confirmed the worst fears of political philosophers. Foundational studies of voters and elections published in the mid-20th Century documented voters’ ignorance, wishful thinking, and reliance on simple cues like partisanship, and nearly 8 decades of subsequent research has largely confirmed those conclusions. The democratic polity is not now and has never been made up of highly knowledgeable, informed and engaged civic citizens.

And there are plenty of charlatans, would-be power-brokers and snake-oil salesmen ready to lead the willing down the garden path…..

Computational Propaganda

[Sorry to clutter your inboxes; I published this in error. Consider it an “extra.”]

I am now officially befuddled. Out of my depth. And very worried.

Politico has published the results of an investigation that the magazine conducted into the popularity (in social-media jargon, the “viral-ness”) of the hashtag “release the memo.” It found that the committee vote

marked the culmination of a targeted, 11-day information operation that was amplified by computational propaganda techniques and aimed to change both public perceptions and the behavior of American lawmakers….Computational propaganda—defined as “the use of information and communication technologies to manipulate perceptions, affect cognition, and influence behavior”—has been used, successfully, to manipulate the perceptions of the American public and the actions of elected officials.

I’ve been struggling just to understand what “bots” are. The New York Times’ recent lengthy look at these artificial “followers”–you can evidently buy followers to pump up your perceived popularity–helped to an extent, but left me thinking that these “fake” followers were mostly a form of dishonest puffery by celebrities and would-be celebrities.

Politico disabused me.

The publication’s analysis showed how the #releasethememo campaign had been fueled by computational propaganda. As the introduction says, “It is critical that we understand how this was done and what it means for the future of American democracy.”

I really encourage readers to click through and read the article in its entirety. If you are like me, the technical aspects require slow and careful reading. Here, however, are a few of the findings that particularly worry me–and should worry us all.

Whether it is Republican or Russian or “Macedonian teenagers”—it doesn’t really matter. It is computational propaganda—meaning artificially amplified and targeted for a specific purpose—and it dominated political discussions in the United States for days. The #releasethememo campaign came out of nowhere. Its movement from social media to fringe/far-right media to mainstream media was so swift that both the speed and the story itself became impossible to ignore. The frenzy of activity spurred lawmakers and the White House to release the Nunes memo, which critics say is a purposeful misrepresentation of classified intelligence meant to discredit the Russia probe and protect the president.

And this, ultimately, is what everyone has been missing in the past 14 months about the use of social media to spread disinformation. Information and psychological operations being conducted on social media—often mischaracterized by the dismissive label “fake news”—are not just about information, but about changing behavior. And they can be surprisingly effective.

An original tweet from a right-wing conspiracy buff with few followers was amplified by an account named KARYN.

The KARYN account is an interesting example of how bots lay a groundwork of information architecture within social media. It was registered in 2012, tweeting only a handful of times between July 2012 and November 2013 (mostly against President Barack Obama and in favor of the GOP). Then the account goes dormant until June 2016—the period that was identified by former FBI Director James Comey as the beginning of the most intense phase of Russian operations to interfere in the U.S. elections. The frequency of tweets builds from a few a week to a few a day. By October 11, there are dozens of posts a day, including YouTube videos, tweets to political officials and influencers and media personalities, and lots of replies to posts by the Trump team and related journalists. The content is almost entirely political, occasionally mentioning Florida, another battleground state, and sometimes posting what appear to be personal photos (which, if checked, come from many different phones and sources and appear “borrowed”). In October 2016, KARYN is tweeting a lot about Muslims/radical Islam attacking democracy and America; how Bill Clinton had lots of affairs; alleged financial wrongdoing on Clinton’s part; and, of course, WikiLeaks.
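The dormancy-then-burst pattern described above–years of near-silence, then a sharp ramp to dozens of posts a day–is one signal analysts use to flag likely bots. Here is a hypothetical, illustrative heuristic; this is not the method Politico or any researcher used, and the thresholds are made up for the sketch.

```python
def dormancy_then_burst(monthly_counts, dormant_months=12, burst_threshold=30):
    # Illustrative heuristic (thresholds are invented, not empirically tuned):
    # flag an account that posted at most once a month for a long stretch
    # and then abruptly jumped to heavy activity, like the KARYN account's
    # dormancy until mid-2016 followed by dozens of posts a day.
    for i in range(dormant_months, len(monthly_counts)):
        window = monthly_counts[i - dormant_months:i]
        if max(window) <= 1 and monthly_counts[i] >= burst_threshold:
            return True
    return False
```

A KARYN-like history (a year of zeros followed by hundreds of tweets in one month) trips the flag, while a steadily chatty human account does not. Real bot-detection systems combine many such signals–posting cadence, content similarity, follower networks–precisely because any single heuristic is easy to evade.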

There’s much more evidence that KARYN is a bot—a bot that follows a random Republican guy in Michigan with 70-some followers. Why?

It would be fair to say that if you were setting up accounts to track views representative of a Trump-supporter, @underthemoraine would be a pulse to keep a finger on—the virtual Michigan “man in the diner” or “taxi driver” that journalists are forever citing as proof of conversations with real, nonpolitical humans in swing states. KARYN follows hundreds of such accounts, plus conservative media, and a lot of other bots.

KARYN triggers other bots and political operatives, and they combine to create a “tweet storm” or viral message. Many of these accounts are “organizers and amplifiers”—accounts with “human conductors” that are partly automated and linked to networks that automatically amplify content.

The article is very long, and very detailed–and I hope many of you will read it in its entirety. For now, I will leave you with the concluding paragraphs:

So what are the lessons of #releasethememo? Regardless of how much of the campaign was American and how much was Russian, it’s clear there was a massive effort to game social media and put the Nunes memo squarely on the national agenda—and it worked to an astonishing degree. The bottom line is that the goals of the two overlapped, so the origin—human, machine or otherwise—doesn’t actually matter. What matters is that someone is trying to manipulate us, tech companies are proving hopelessly unable or unwilling to police the bad actors manipulating their platforms, and politicians are either clueless about what to do about computational propaganda or—in the case of #releasethememo—are using it to achieve their goals. Americans are on their own.

And, yes, that also reinforces the narrative the Russians have been pushing since 2015: You’re on your own; be angry, and burn things down. Would that a leader would step into this breach, and challenge the advancing victory of the bots and the cynical people behind them.


Why Trust Matters

In 2009, I wrote a book titled Distrust, American Style. In it, I looked at the issue of trust through the lens of social capital scholarship. Trust and reciprocity are essential to social capital–and especially to the creation of “bridging” social capital, the relationships that allow us to connect with and value people different from ourselves.

I didn’t address an issue that I now see as critical: the intentional production of distrust.

Today’s propagandists learned a valuable tactic from Big Tobacco. For many years, as health professionals insisted that smoking was harmful, Big Tobacco responded brilliantly. Rather than flatly disputing the validity of the claim (a response that would have invited people to take sides and decide whom they trusted: their doctors or the tobacco manufacturers), the industry trotted out its own well-paid “scientists” to claim that the research was still inconclusive, that “we just don’t know what medical science will ultimately conclude.”

In other words, they sowed confusion–while giving people who didn’t want to believe that smoking was harmful something to hang their hat on. If “we don’t really know…,” then why stop smoking? Just wait for a definitive answer.

It is a tactic that has since been adopted by several interest groups, most notably the fossil fuel industry. Recognizing that– as ice shelves melted and oceans rose– few would believe a flat denial that climate change is real and occurring, they focused their disinformation efforts on creating confusion about what was causing the globe to warm. Thus their insistence that the scientific “jury” was still out, that the changes visible to everyone might be part of natural historical cycles, and especially that there wasn’t really consensus among climate scientists. (Ninety-seven percent isn’t everyone!)

The goal was to sow doubt among all us non-scientists. Who and what should we believe?

Now, as information about Russia’s interference with the 2016 election is emerging, it is becoming apparent that Russian operatives, too, made effective use of that strategy. In addition to exacerbating American racial and religious divisions, Russian bots relentlessly cast doubt on the accuracy of traditional media reporting. Taking a cue from Sarah Palin and her ilk, they portrayed the “lamestream” media as a cesspool of liberal bias.

In fact, the GOP’s right wing has been employing this tactic for years–through Fox, Hannity, Limbaugh and a variety of others, the Republican party has engaged in a steady attack on the very notion of objective fact. That attack reached its apogee with Donald Trump’s insistence that any reporting he doesn’t like is “Fake News.”

Both the Republican and Democratic bases have embraced the belief that inconvenient facts are simply untrue, that reality is whatever they choose to believe. (Granted, this is far more prevalent on the Right, but there’s plenty of evidence that the fringe Left does the same thing.)

The rest of us are left in an uncomfortable gray area, increasingly unsure of the veracity of the items that fill our Facebook and Twitter feeds. It’s bad enough that years of Republican propaganda have convinced the GOP base that credible outlets like the New York Times and Washington Post have “libtard agendas,” but thanks to the explosion of new media outlets made possible by the Internet, even those of us who are trying to access accurate, objective reporting are inundated with “news” from unfamiliar sources, many of which are reliable and many of which are not. The result is insecurity–is this true? Has that report been verified? By whom? What should I believe? Who can I trust?

Zealots don’t worry about the accuracy of the information they act on, but rational people who distrust their facts tend to be paralyzed.

And that, of course, is the goal.


Weaponizing Speech

A couple of weeks ago, I came across a provocative article by Tim Wu, a media historian who teaches at Columbia University, titled “Did Twitter Kill the First Amendment?” He began with the question:

You need not be a media historian to notice that we live in a golden age of press harassment, domestic propaganda and coercive efforts to control political debate. The Trump White House repeatedly seeks to discredit the press, threatens to strip broadcasters of their licenses and calls for the firing of journalists and football players for speaking their minds. A foreign government tries to hack our elections, and journalists and public speakers are regularly attacked by vicious, online troll armies whose aim is to silence opponents.

In this age of “new” censorship and blunt manipulation of political speech, where is the First Amendment?

Where, indeed? As Wu notes, the First Amendment was written for a different set of problems in a very different world, and much of the jurisprudence it has spawned deals with issues far removed from the ones that bedevil us today.

As my students are all too often surprised to learn, the Bill of Rights protects us against government misbehavior–in the case of our right to free speech, the First Amendment prohibits government censorship. For the most part, in this age of Facebook and Twitter and other social media, the censors come from the private sector–or in some cases, from governments other than our own, through various internet platforms.

The Russian government was among the first to recognize that speech itself could be used as a tool of suppression and control. The agents of its “web brigade,” often called the “troll army,” disseminate pro-government news, generate false stories and coordinate swarm attacks on critics of the government. The Chinese government has perfected “reverse censorship,” whereby disfavored speech is drowned out by “floods” of distraction or pro-government sentiment. As the journalist Peter Pomerantsev writes, these techniques employ information “in weaponized terms, as a tool to confuse, blackmail, demoralize, subvert and paralyze.”

It’s really difficult for most Americans to get our heads around this new form of warfare. We understand many of the negative effects of our fragmented and polarized media environment, the ability to live in an information bubble, to “choose our news”–and we recognize the role social media plays in constructing and reinforcing that bubble. It’s harder to visualize how Russia’s infiltration of Facebook and Twitter might have influenced our election.

Wu wants law enforcement to do more to protect journalists from cyber-bullying and threats of violence. And he wants Congress to step in to regulate social media (lots of luck with that in this anti-regulatory age). For example, he says much too little is being done to protect American politics from foreign attack.

The Russian efforts to use Facebook, YouTube and other social media to influence American politics should compel Congress to act. Social media has as much impact as broadcasting on elections, yet unlike broadcasting it is unregulated and has proved easy to manipulate. At a minimum, new rules should bar social media companies from accepting money for political advertising by foreign governments or their agents. And more aggressive anti-bot laws are needed to fight impersonation of humans for propaganda purposes.

When Trump’s White House uses Twitter to encourage people to punish Trump’s critics — Wu cites the President’s demand that the N.F.L., on pain of tax penalties, censor players — “it is wielding state power to punish disfavored speech. There is precedent for such abuses to be challenged in court.”

It is hard to argue with Wu’s conclusion that

no defensible free-speech tradition accepts harassment and threats as speech, treats foreign propaganda campaigns as legitimate debate or thinks that social-media bots ought to enjoy constitutional protection. A robust and unfiltered debate is one thing; corruption of debate itself is another.

The challenge will be to craft legislation that addresses these unprecedented issues effectively–without inadvertently limiting the protections of the First Amendment.

We have some time to think about this, because the current occupants of both the White House and the Congress are highly unlikely to act. In the meantime, Twitter is the weapon and tweets are the “incoming.”