
Why Trust Matters

In 2009, I wrote a book titled Distrust, American Style. In it, I looked at the issue of trust through the lens of social capital scholarship. Trust and reciprocity are essential to social capital–and especially to the creation of “bridging” social capital, the relationships that allow us to connect with and value people different from ourselves.

I didn’t address an issue that I now see as critical: the intentional production of distrust.

Today’s propagandists learned a valuable tactic from Big Tobacco. For many years, as health professionals insisted that smoking was harmful, Big Tobacco responded brilliantly. Rather than flatly disputing the validity of the claim–a response that would have invited people to take sides and decide whom they trusted, their doctors or the tobacco manufacturers–the industry trotted out its own well-paid “scientists” to claim that the research was still inconclusive, that “we just don’t know what medical science will ultimately conclude.”

In other words, they sowed confusion–while giving people who didn’t want to believe that smoking was harmful something to hang their hat on. If “we don’t really know…,” then why stop smoking? Just wait for a definitive answer.

It is a tactic that has since been adopted by several interest groups, most notably the fossil fuel industry. Recognizing that–as ice shelves melted and oceans rose–few would believe a flat denial that climate change is real and occurring, they focused their disinformation efforts on creating confusion about what was causing the globe to warm. Thus their insistence that the scientific “jury” was still out, that the changes visible to everyone might be part of natural historical cycles, and especially that there wasn’t really consensus among climate scientists. (Ninety-seven percent isn’t everyone!)

The goal was to sow doubt among all us non-scientists. Who and what should we believe?

Now, as information about Russia’s interference with the 2016 election is emerging, it is becoming apparent that Russian operatives, too, made effective use of that strategy. In addition to exacerbating American racial and religious divisions, Russian bots relentlessly cast doubt on the accuracy of traditional media reporting. Taking a cue from Sarah Palin and her ilk, they portrayed the “lamestream” media as a cesspool of liberal bias.

In fact, the GOP’s right wing has been employing this tactic for years. Through Fox, Hannity, Limbaugh and a variety of others, the Republican Party has engaged in a steady attack on the very notion of objective fact. That attack reached its apogee with Donald Trump’s insistence that any reporting he doesn’t like is “Fake News.”

Both the Republican and Democratic bases have embraced the belief that inconvenient facts are simply untrue, that reality is whatever they choose to believe. (Granted, this is far more prevalent on the Right, but there’s plenty of evidence that the fringe Left does the same thing.)

The rest of us are left in an uncomfortable gray area, increasingly unsure of the veracity of the items that fill our Facebook and Twitter feeds. It’s bad enough that years of Republican propaganda have convinced the GOP base that credible outlets like the New York Times and Washington Post have “libtard agendas,” but thanks to the explosion of new media outlets made possible by the Internet, even those of us who are trying to access accurate, objective reporting are inundated with “news” from unfamiliar sources, many of which are reliable and many of which are not. The result is insecurity–is this true? Has that report been verified? By whom? What should I believe? Who can I trust?

Zealots don’t worry about the accuracy of the information they act on, but rational people who distrust their facts tend to be paralyzed.

And that, of course, is the goal.


Weaponizing Speech

A couple of weeks ago, I came across a provocative article by Tim Wu, a media historian who teaches at Columbia University, titled “Did Twitter Kill the First Amendment?” He began with the question:

You need not be a media historian to notice that we live in a golden age of press harassment, domestic propaganda and coercive efforts to control political debate. The Trump White House repeatedly seeks to discredit the press, threatens to strip broadcasters of their licenses and calls for the firing of journalists and football players for speaking their minds. A foreign government tries to hack our elections, and journalists and public speakers are regularly attacked by vicious, online troll armies whose aim is to silence opponents.

In this age of “new” censorship and blunt manipulation of political speech, where is the First Amendment?

Where, indeed? As Wu notes, the First Amendment was written for a different set of problems in a very different world, and much of the jurisprudence it has spawned deals with issues far removed from the ones that bedevil us today.

As my students are all too often surprised to learn, the Bill of Rights protects us against government misbehavior–in the case of our right to free speech, the First Amendment prohibits government censorship. For the most part, in this age of Facebook and Twitter and other social media, the censors come from the private sector–or in some cases, from governments other than our own, through various internet platforms.

The Russian government was among the first to recognize that speech itself could be used as a tool of suppression and control. The agents of its “web brigade,” often called the “troll army,” disseminate pro-government news, generate false stories and coordinate swarm attacks on critics of the government. The Chinese government has perfected “reverse censorship,” whereby disfavored speech is drowned out by “floods” of distraction or pro-government sentiment. As the journalist Peter Pomerantsev writes, these techniques employ information “in weaponized terms, as a tool to confuse, blackmail, demoralize, subvert and paralyze.”

It’s really difficult for most Americans to get our heads around this new form of warfare. We understand many of the negative effects of our fragmented and polarized media environment, the ability to live in an information bubble, to “choose our news”–and we recognize the role social media plays in constructing and reinforcing that bubble. It’s harder to visualize how Russia’s infiltration of Facebook and Twitter might have influenced our election.

Wu wants law enforcement to do more to protect journalists from cyber-bullying and threats of violence. And he wants Congress to step in to regulate social media (lots of luck with that in this anti-regulatory age). For example, he says much too little is being done to protect American politics from foreign attack.

The Russian efforts to use Facebook, YouTube and other social media to influence American politics should compel Congress to act. Social media has as much impact as broadcasting on elections, yet unlike broadcasting it is unregulated and has proved easy to manipulate. At a minimum, new rules should bar social media companies from accepting money for political advertising by foreign governments or their agents. And more aggressive anti-bot laws are needed to fight impersonation of humans for propaganda purposes.

When Trump’s White House uses Twitter to encourage people to punish Trump’s critics — Wu cites the President’s demand that the N.F.L., on pain of tax penalties, censor players — “it is wielding state power to punish disfavored speech. There is precedent for such abuses to be challenged in court.”

It is hard to argue with Wu’s conclusion that

no defensible free-speech tradition accepts harassment and threats as speech, treats foreign propaganda campaigns as legitimate debate or thinks that social-media bots ought to enjoy constitutional protection. A robust and unfiltered debate is one thing; corruption of debate itself is another.

The challenge will be to craft legislation that addresses these unprecedented issues effectively–without inadvertently limiting the protections of the First Amendment.

We have some time to think about this, because the current occupants of both the White House and the Congress are highly unlikely to act. In the meantime, Twitter is the weapon and tweets are the “incoming.”


Media, Left and Right

Well, I see that the Republican Governors’ Association has decided to enter the “fake news” sweepstakes. According to reports,

The Republican Governors Association has quietly launched an online publication that looks like a media outlet and is branded as such on social media. The Free Telegraph blares headlines about the virtues of GOP governors, while framing Democrats negatively. It asks readers to sign up for breaking news alerts. It launched in the summer bearing no acknowledgement that it was a product of an official party committee whose sole purpose is to get more Republicans elected.

The website was registered July 7 through Domains By Proxy, a company that allows the originators of a website to shield their identities. […] As of early Monday afternoon, The Free Telegraph’s Twitter account and Facebook page still had no obvious identifiers tying the site to RGA. The site described itself on Twitter as “bringing you the political news that matters outside of Washington.” The Facebook account labeled The Free Telegraph a “Media/News Company.”

Evidently, after the Associated Press made inquiries, the site added a very small, grey box at the bottom of the page, disclosing its origins.

The “mastermind” behind this effort is Wisconsin Governor Scott Walker; he may have been inspired (if that’s the word) by Mike Pence’s ill-fated attempt to establish a state-owned Indiana “news bureau” (aka propaganda site). Dubbed by critics “Pravda on the Prairie,” it was embarrassingly obvious and ignominiously withdrawn. Walker is evidently better at stealth.

The problem is, this sort of disinformation campaign works–especially with people who want to believe, who want both their own opinions and their own “facts.” As an article in the American Prospect put it,

As we learn more about how Russia used social media as part of its campaign to help elect Donald Trump, what stands out is how easy it was. Spend $100,000 on Facebook ads, create a bunch of Twitter bots, and before you know it you’ve whipped up a fog of disinformation that gives Trump just the boost he needs to get over the finish line. Even if it’s almost impossible to quantify how many votes it might have swayed, it was one of the many factors contributing to the atmosphere of chaos and confusion that helped Trump get elected.

As new as it might seem, this is just the latest manifestation of a broader problem that goes back a long way, one of the degradation of truth, a conservative electorate taught to disbelieve what’s real and accept whatever lunatic things their media figures tell them, and liberals who can’t figure out how to respond.

As the author points out, a liberal version of these mechanisms won’t work. The effect that liberal media have on their audiences is of a “profoundly different character than what conservative media achieve.”

There’s a doctrinal basis to conservative media that makes it fundamentally different from liberal media, that makes Rush Limbaugh most definitely not the mirror image of a liberal radio host and Sean Hannity not the mirror image of Rachel Maddow. It’s not merely about the conservatives’ and liberals’ respective adherence to truth or penchant for ugly demonization of their opponents, though they differ in that too. It’s that an argument about the larger media world is the foundation of conservative media. Conservative hosts and writers tell their audiences over and over again that nothing they read in the mainstream media can be accepted, that it’s all twisted by a liberal agenda, and therefore they can only believe what conservatives tell them. It’s the driving backbeat to every episode, every story, and every rant.

Liberals complain about media coverage of one story or another all the time. What they don’t do is tell their audiences that any news source that is not explicitly and exclusively devoted to their ideological agenda cannot be trusted. But conservatives do.

The bottom line is that very few of the people who fall within the liberal camp are “good soldiers” in the same way that the Fox News audience is. Liberals still occupy a pretty big tent, and even when they agree on a broad premise–healthcare is a right, for example–they differ significantly on the policies to achieve their goals. As recent research has conclusively shown, conservative and liberal minds work differently.

Which leaves us at the mercy of propaganda. When some people are saying, in effect, “lie to me to reassure me that my tribe is right”–what do we do?


Brave New World

Okay, I am now officially worried. Really worried.

A few days ago, The Guardian reported on a recent conference of internet hackers, held in Las Vegas. (Yes, even hackers evidently have conferences….)

Using “psychographic” profiles of individual voters generated from publicly stated interests really does work, according to new research presented at the Def Con hacking conference in Las Vegas, Nevada.

The controversial practice allows groups to hone their messages to match the personality types of their targets during political campaigning, and is being used by firms including Cambridge Analytica and AggregateIQ to better target voters with political advertising with so-called “dark ads”.

Most of us don’t consider ourselves targets for “dark ads,” aka propaganda. We like to believe that we are different–that we’re thoughtful consumers of information, people who can “smell a rat” or otherwise detect spin and disinformation. We shake our heads over reports like the one about the gullible 28-year-old who shot up a Washington pizza parlor because stories on social media and conservative websites had convinced him that Hillary Clinton was operating a Satanic child sex ring out of its (nonexistent) basement.

News flash: we are all more gullible than we like to believe. Confirmation bias is built into human DNA.

Psychographic profiling classifies people into personality types using data from social networks such as Facebook. Sumner’s research focused on replicating some of the key findings of psychographic research by crafting adverts specifically targeted at certain personality types. Using publicly available data to ensure that the adverts were seen by the right people at the right time, Sumner tested how effective such targeting can be.

The referenced study used information that Facebook already generates about those who use its platform, and created two groups: one composed of “high-authoritarian” conservatives, and a “low-authoritarian” group of liberals.

Knowing the psychographic profiles of the two groups is more useful than simply being able to accurately guess what positions they already hold; it can also be used to craft messages to specifically target those groups, to more effectively shift their opinions. Sumner created four such adverts, two aimed at increasing support for internet surveillance and two aimed at decreasing it, each targeted to a low or high authoritarian group.

For example, the highly authoritarian group’s anti-surveillance advert used the slogan “They fought for your freedom. Don’t give it away!”, over an image of the D-Day landings, while the low authoritarian group’s pro-surveillance message was “Crime doesn’t stop where the internet starts: say YES to state surveillance”.

Sure enough, the targeted adverts did significantly better. The high-authoritarian group was significantly more likely to share a promoted post aimed at them than a similar one aimed at their opposites, while the low authoritarian group ranked the advert aimed at them as considerably more persuasive than the advert that wasn’t.
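For readers who want to see just how mechanical this is, here is a minimal, hypothetical sketch in Python of the targeting step. The profile scores, threshold and the second slogan are invented for illustration–only the “They fought for your freedom” line echoes the study described above–and nothing here is drawn from any real advertising platform.

```python
# Hypothetical illustration of psychographic ad targeting.
# Profile scores, threshold, and the low-authoritarian slogan are invented.

from dataclasses import dataclass


@dataclass
class VoterProfile:
    user_id: str
    authoritarianism: float  # inferred score, 0.0 (low) to 1.0 (high)


# Two wordings of the same anti-surveillance message, each pitched
# at a different personality type (mirroring the study's design).
AD_VARIANTS = {
    "high_authoritarian": "They fought for your freedom. Don't give it away!",
    "low_authoritarian": "Privacy is a right. Say NO to mass surveillance.",
}


def pick_ad(profile: VoterProfile, threshold: float = 0.5) -> str:
    """Choose the ad variant matched to the voter's inferred profile."""
    key = ("high_authoritarian"
           if profile.authoritarianism >= threshold
           else "low_authoritarian")
    return AD_VARIANTS[key]


if __name__ == "__main__":
    voters = [
        VoterProfile("voter_a", authoritarianism=0.82),
        VoterProfile("voter_b", authoritarianism=0.21),
    ]
    for v in voters:
        print(v.user_id, "->", pick_ad(v))
```

The point of the sketch is simply that once a platform can infer (or be handed) a personality score for each of us, routing a different message to each personality type is a one-line conditional.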

Think about the implications of this. Political campaigns can now target different messages to different groups far more efficiently and effectively than they could when the only mechanisms available were direct mail campaigns or placement of television ads. As the article noted, this technology allows politicians to appeal to the worst side of voters in an almost undiscoverable manner.

The importance of motivating and turning out your base is a “given” in electoral politics, and these new tools are undoubtedly already in use–further eroding the democratic ideal in which votes are cast after citizens weigh information provided through public policy debates conducted by honorable candidates using verifiable facts.

Thanks to gerrymandering, most of us don’t have genuine choices for Congress or our state legislatures on election day. Now, thanks to technology, we won’t be able to tell the difference between facts and “alternative facts.”

About That Echo Chamber…

As this blog frequently notes, one of the thorniest problems bedeviling our unravelling democracy is the distortion of reality–intentional and unintentional– provided via the Internet. That distortion is immensely aided by our tendency to live in echo chambers populated by friends who think like we do.

Most of us trust links from friends – a vulnerability exploited by phishing sites and other forms of online manipulation. An increasing number of us “unfriend” contacts who post uncongenial opinions or facts inconsistent with our political prejudices.

This is a real problem.

On the one hand, citizens who occupy different realities cannot have productive conversations or negotiate practical solutions to common problems; on the other hand, censorship of electronic media, in an effort to separate wheat from chaff, is neither wise nor possible.

Can technology save us?

Most of us, whatever our political orientation, recognize the problem. As an IU Professor of Computer Science and Informatics puts it,

If you get your news from social media, as most Americans do, you are exposed to a daily dose of hoaxes, rumors, conspiracy theories and misleading news. When it’s all mixed in with reliable information from honest sources, the truth can be very hard to discern.

In fact, my research team’s analysis of data from Columbia University’s Emergent rumor tracker suggests that this misinformation is just as likely to go viral as reliable information.

As he notes, the Internet has spawned an entire industry of fake news and digital misinformation.

Clickbait sites manufacture hoaxes to make money from ads, while so-called hyperpartisan sites publish and spread rumors and conspiracy theories to influence public opinion….

This industry is bolstered by how easy it is to create social bots, fake accounts controlled by software that look like real people and therefore can have real influence. Research in my lab uncovered many examples of fake grassroots campaigns, also called political astroturfing.

In response, we developed the BotOrNot tool to detect social bots. It’s not perfect, but accurate enough to uncover persuasion campaigns in the Brexit and antivax movements. Using BotOrNot, our colleagues found that a large portion of online chatter about the 2016 elections was generated by bots.

The real question–as the author readily concedes–is how to combat technology that spreads propaganda, or “fake news.” As he says, the first step is to analyze how these sites are operating. Then we can hope that smart people adept in the use of these technologies can devise tools to combat the spread of false and misleading information.
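By way of illustration only, here is a toy bot-scoring heuristic in Python. It is emphatically not how BotOrNot works–that tool relies on machine learning over a large number of account features–but it shows, with invented features, weights and thresholds, what “analyzing how these accounts operate” means in practice: turning observable behavior into a score rather than a hunch.

```python
# Toy heuristic for flagging bot-like accounts.
# NOT the BotOrNot/Botometer method; all features and weights are invented.

from dataclasses import dataclass


@dataclass
class Account:
    handle: str
    tweets_per_day: float
    followers: int
    following: int
    profile_has_photo: bool
    account_age_days: int


def bot_score(acct: Account) -> float:
    """Return a rough 0-1 score; higher means more bot-like."""
    score = 0.0
    if acct.tweets_per_day > 100:                      # superhuman posting volume
        score += 0.4
    if acct.following > 10 * max(acct.followers, 1):   # follows far more than follow back
        score += 0.2
    if not acct.profile_has_photo:                     # default/empty profile
        score += 0.2
    if acct.account_age_days < 30:                     # newly created account
        score += 0.2
    return min(score, 1.0)


if __name__ == "__main__":
    suspect = Account("@example_bot", tweets_per_day=400, followers=12,
                      following=3000, profile_has_photo=False, account_age_days=10)
    print(f"{suspect.handle}: bot score {bot_score(suspect):.2f}")
```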

Long-term, however, “fixing” the problem of fake news will require fixing the humans who have a need to believe whatever it is that such “news” is peddling. That fix will necessarily begin with better civic education and news literacy, but it can’t end there.

Ultimately, we have a problem of political psychology… It would seem that we humans have invented tools that have outstripped our ability to properly use them.