Tag Archives: internet

And The Evidence Accumulates…

“Hate is a normal part of life. Get over it.”

Offensive as that sentiment about the “normalcy” of hate is, it’s probably correct. I prefer a different version of “getting over it,” however; the challenge of our time–made critical by Trump and Trumpism–is indeed “getting over it.” As in, refusing to normalize or condone it.

The quotation itself came about halfway through a recent Washington Post article documenting the rise of racist and anti-Semitic messages in the wake of Trump’s election.

Racist and anti-Semitic content has surged on shadowy social media platforms — spiking around President Trump’s Inauguration Day and the “Unite the Right Rally” in Charlottesville — spreading hate speech and extremist views to mainstream audiences, according to an analysis published this week.

The findings, from a newly formed group of scientists named the Network Contagion Research Institute who studied hundreds of millions of social media messages, bolster a growing body of evidence about how extremist speech online can be fueled by real-world events.

It’s actually pretty predictable that messages from “real world” events would be discussed and amplified on social media. What is far more disturbing is the iterative relationship between social media and the “real world” that the article reveals. The cycle begins with a real-world event–in this case, Trump’s election–that triggers a burst of online response–in this case, a celebration of bigotry. That online response begins in the dark corners of the Internet, but thanks to its connection to the “real world,” it doesn’t stay there. It infects more mainstream outlets.

One of the studies referenced in the article identified two such fringe forums, and found that

[a]lthough small relative to leading social media platforms, [they] exerted an outsize influence on overall conversation online by transmitting hateful content to such mainstream sites as Reddit and Twitter, the researchers said. Typically this content came in the form of readily shareable “memes” that cloaked hateful ideas in crass or humorous words and imagery. (Facebook, the largest social media platform, with more than 2 billion users, is harder to study because of the closed nature of its platform and was not included in the research.)

“There may be 100 racists in your town, but in the past they would have to find each other in the real world. Now they just go online,” said one of the researchers, Jeremy Blackburn, an assistant professor of computer science at the University of Alabama at Birmingham. “These things move these radicals, these outliers in society, closer, and it gives them bigger voices as well.”

Niche hate movements that were once relegated to what the article calls the “dark corners of the Web” are increasingly influencing the mainstream.

The QAnon conspiracy theory began circulating on the same platforms last fall before exploding into public view in August, after months of refining its central allegations, purportedly from a top-secret government agent, that President Trump is secretly battling a shadowy cabal of sex rings, death squads and deep-state elites.

Trump is central to the most recent explosion of online racism and anti-Semitism. Surges in the number and intensity of “alt-right” messaging occurred immediately after his inauguration and again after his “fine people on both sides” comments following Charlottesville. The alt-right celebrated–and continues to hail–the legitimacy they believe his election and rhetoric have conferred upon the white Christian supremacist worldview.

The article compares the spread of these tribal and racist sentiments to a virus for which there is not, as yet, an antidote.

The findings, researchers wrote, suggested a “worrying trend of real-world action mirroring online rhetoric” — and a possible feedback loop of online and offline hate.

That feedback loop requires both online and real-world support. We may not be able to do much about the rancid corners of the web, but we can vote to replace Trumpworld’s spineless enablers in the House and Senate.

Think of your midterm vote as an antibiotic.

A Different Kind Of Weapon

A story about the recent Santa Fe school shooting highlighted what worries me most of all about America’s future–not to mention humanity’s–and our ability to engage in fact-based, rational discussion and debate.

In the first hours after the Texas school shooting that left at least 10 dead Friday, online hoaxers moved quickly to spread a viral lie, creating fake Facebook accounts with the suspected shooter’s name and a doctored photo showing him wearing a “Hillary 2016” hat.

Several were swiftly flagged by users and deleted by the social network. But others rose rapidly in their place: Chris Sampson, a disinformation analyst for a counterterrorism think tank, said he could see new fakes as they were being created and filled out with false information, including images linking the suspect to the anti-fascist group Antifa.

The immediacy and reach of the disinformation about gun violence are nothing new, nor is this tactic limited to the gun debate–and that’s the problem.

Thanks to technology, we are marinating in propaganda and falsehood–weapons that are ultimately far more powerful than assault rifles.

There have always been efforts to mislead the gullible, to confirm the suspicions of cynics and the certainties of ideologues. No matter how diligently we try not to indulge in confirmation bias, most of us are susceptible to “facts” that have been slanted in a direction we’re predisposed to accept. But we have never seen anything like the onslaught of utter fabrication that has been made possible by our new communication media, and the result is beginning to emerge: Americans are increasingly distrustful of all information.

We don’t know who or what to believe, so we suspend belief altogether.

When people occupy incommensurate realities, they can’t communicate with each other. The one thing Donald Trump does understand–and unfortunately, it is the only thing he appears to understand–is that lies and “alternate” facts undermine citizens’ ability to make decisions based in reality. Thus his attacks on the “fake” news media and his assertions of “achievements” that exist only in the precincts of his grandiose imagining.

The effectiveness of this technique of cultivating uncertainty was prominently displayed during the so-called “tobacco wars,” when flacks for the tobacco industry realized that a frontal attack on medical reports linking smoking to cancer was doomed, but that efforts to muddy the waters–to suggest that the “jury was still out”–could be very effective. If the attack had targeted the reliability of science itself, the public would have discounted it; but if the message was “scientists still aren’t sure,” people who wanted to be fair–and those who wanted to keep smoking–would withhold judgment.

That same tactic has been used–very effectively–by fossil fuel interests to undermine settled science on the reality and causes of climate change.

The problem is that people of good will–and, of course, those who are not so well-intentioned–no longer know what to believe. What is factual, and what is self-serving bullshit? And how do we tell the difference?

Unless we can address this issue–unless we can reclaim the ability to determine what is fact and what is fiction, what is credible evidence and what is “disinformation”–humanity is in a world of hurt.

Humans: Clever, But Not Wise….

The election of Donald Trump (aka “Agent Orange”) is only one of many, many signs that we humans aren’t as smart as we think we are.

Consider our ability to invent technologies we then prove unable to use wisely.

Actually, being destroyed or enslaved by the machines we’ve created is a favorite theme of science fiction. Robots who turn on their makers, unanticipated consequences of laboratory experiments, the dehumanizing substitution of human-machine interaction for human contact–all are familiar scenarios of “futuristic” fantasy.

Being overwhelmed by our own inventions, however, is neither “futuristic” nor “fantastic.” Anyone who doesn’t believe that human society is being inexorably changed by social media and the Internet hasn’t been paying attention.

The Guardian recently ran a chilling column about those changes, and about our tendency to see new threats and challenges in terms of the past, rather than as harbingers of our future.

Both sides of the political divide seem to be awakening to the possibility that letting the tech industry do whatever it wants hasn’t produced the best of all possible worlds. “I have found a flaw,” Alan Greenspan famously said in 2008 of his free-market worldview, as the global financial system imploded. A similar discovery may be dawning on our political class when it comes to its hands-off approach to Silicon Valley.

But the new taste for techno-skepticism is unlikely to lead to meaningful reform, for several reasons. One is money. The five biggest tech firms spend twice as much as Wall Street on lobbying Washington. It seems reasonable to assume that this insulates them from anything too painful in a political system as corrupt as ours.

The FCC’s decision to repeal Net Neutrality despite the fact that 83% of the public wanted to retain the policy would certainly seem to validate the author’s assertion that our government responds to money, not public opinion.

The columnist, Ben Tarnoff, is especially concerned that the focus on Russia’s efforts to weaponize the Internet and influence the election is diverting our attention from far more serious issues. It is unlikely that Russian game-playing had much of an effect on the Presidential election (racism aka White Nationalism clearly played a far greater role), and while Congress fixates on Russia, far more significant threats go unnoticed.

As Tarnoff sees it, the focus on Russia isn’t just misplaced because that country’s social media influence wasn’t really all that effective. It’s misplaced because the Russians used the Internet platforms in precisely the way they’re designed to be used.

As Zeynep Tufekci has pointed out, the business model of social media makes it a perfect tool for spreading propaganda. The majority of that propaganda isn’t coming from foreigners, however – it’s coming from homegrown, “legitimate” actors who pump vast sums of cash into shaping opinion on behalf of a candidate or cause.

Social media is a powerful weapon in the plutocratization of our politics. Never before has it been so easy for propagandists to purchase our attention with such precision. The core issue is an old one in American politics: money talks too much, to quote an Occupy slogan. And online, it talks even louder.

Unfortunately, the fixation on Russian “cyberwarfare” isn’t likely to bring us any closer to taking away money’s megaphone. Instead, it will probably be used as a pretext to make us less free in other ways – namely by justifying more authoritarian incursions by the state into the digital sphere….

The tragedy of 9/11 has long been weaponized to justify mass surveillance and state repression. The myth of the “cyber 9/11” will almost certainly be used for the same ends.

Tarnoff reminds readers that–as usual–America’s wounds are largely self-inflicted.  We could and should take note of Russia’s efforts to subvert our election without ignoring the “deep domestic roots” of that catastrophe. As he reminds us,

Russia didn’t singlehandedly produce the crisis of legitimacy that helped put a deranged reality television star into the White House. Nor did it create the most sophisticated machinery in human history for selling our attention to the highest bidder.

It’s odd to blame Russian trolls for the destruction of American democracy when American democracy has proven more than capable of destroying itself. And rarely is it more self-destructive than when it believes it is protecting itself from its enemies.

We Americans are really, really good at whiz-bang technology. Creating a society that is just, fair and free? Not so much.

Why Trust Matters

In 2009, I wrote a book titled Distrust, American Style. In it, I looked at the issue of trust through the lens of social capital scholarship. Trust and reciprocity are essential to social capital–and especially to the creation of “bridging” social capital, the relationships that allow us to connect with and value people different from ourselves.

I didn’t address an issue that I now see as critical: the intentional production of distrust.

Today’s propagandists learned a valuable tactic from Big Tobacco. For many years, as health professionals insisted that smoking was harmful, Big Tobacco responded brilliantly. Rather than flatly disputing the validity of the claim–a response that would have invited people to take sides and decide whom they trusted, their doctors or tobacco manufacturers–they trotted out their own well-paid “scientists” to claim that the research was still inconclusive, that “we just don’t know what medical science will ultimately conclude.”

In other words, they sowed confusion–while giving people who didn’t want to believe that smoking was harmful something to hang their hat on. If “we don’t really know…,” then why stop smoking? Just wait for a definitive answer.

It is a tactic that has since been adopted by several interest groups, most notably the fossil fuel industry. Recognizing that–as ice shelves melted and oceans rose–few would believe a flat denial that climate change is real and occurring, they focused their disinformation efforts on creating confusion about what was causing the globe to warm. Thus their insistence that the scientific “jury” was still out, that the changes visible to everyone might be part of natural historical cycles, and especially that there wasn’t really consensus among climate scientists. (Ninety-seven percent isn’t everyone!)

The goal was to sow doubt among all us non-scientists. Who and what should we believe?

Now, as information about Russia’s interference with the 2016 election is emerging, it is becoming apparent that Russian operatives, too, made effective use of that strategy. In addition to exacerbating American racial and religious divisions, Russian bots relentlessly cast doubt on the accuracy of traditional media reporting. Taking a cue from Sarah Palin and her ilk, they portrayed the “lamestream” media as a cesspool of liberal bias.

In fact, the GOP’s right wing has been employing this tactic for years–through Fox, Hannity, Limbaugh and a variety of others, the Republican party has engaged in a steady attack on the very notion of objective fact. That attack reached its apogee with Donald Trump’s insistence that any reporting he doesn’t like is “Fake News.”

Both the Republican and Democratic bases have embraced the belief that inconvenient facts are simply untrue, that reality is whatever they choose to believe. (Granted, this is far more prevalent on the Right, but there’s plenty of evidence that the fringe Left does the same thing.)

The rest of us are left in an uncomfortable gray area, increasingly unsure of the veracity of the items that fill our Facebook and Twitter feeds. It’s bad enough that years of Republican propaganda have convinced the GOP base that credible outlets like the New York Times and Washington Post have “libtard agendas.” But thanks to the explosion of new media outlets made possible by the Internet, even those of us who are trying to access accurate, objective reporting are inundated with “news” from unfamiliar sources, some reliable and some not. The result is insecurity: Is this true? Has that report been verified? By whom? What should I believe? Whom can I trust?

Zealots don’t worry about the accuracy of the information they act on, but rational people who distrust their facts tend to be paralyzed.

And that, of course, is the goal.


About That Echo Chamber…

As this blog frequently notes, one of the thorniest problems bedeviling our unravelling democracy is the distortion of reality–intentional and unintentional–provided via the Internet. That distortion is immensely aided by our tendency to live in echo chambers populated by friends who think like we do.

Most of us trust links from friends – a vulnerability exploited by phishing sites and other forms of online manipulation. An increasing number of us “unfriend” contacts who post uncongenial opinions or facts inconsistent with our political prejudices.

This is a real problem.

On the one hand, citizens who occupy different realities cannot have productive conversations or negotiate practical solutions to common problems; on the other hand, censorship of electronic media, in an effort to separate wheat from chaff, is neither wise nor possible.

Can technology save us?

Most of us, whatever our political orientation, recognize the problem. As an IU Professor of Computer Science and Informatics puts it,

If you get your news from social media, as most Americans do, you are exposed to a daily dose of hoaxes, rumors, conspiracy theories and misleading news. When it’s all mixed in with reliable information from honest sources, the truth can be very hard to discern.

In fact, my research team’s analysis of data from Columbia University’s Emergent rumor tracker suggests that this misinformation is just as likely to go viral as reliable information.

As he notes, the Internet has spawned an entire industry of fake news and digital misinformation.

Clickbait sites manufacture hoaxes to make money from ads, while so-called hyperpartisan sites publish and spread rumors and conspiracy theories to influence public opinion….

This industry is bolstered by how easy it is to create social bots, fake accounts controlled by software that look like real people and therefore can have real influence. Research in my lab uncovered many examples of fake grassroots campaigns, also called political astroturfing.

In response, we developed the BotOrNot tool to detect social bots. It’s not perfect, but accurate enough to uncover persuasion campaigns in the Brexit and antivax movements. Using BotOrNot, our colleagues found that a large portion of online chatter about the 2016 elections was generated by bots.

The real question–as the author readily concedes–is how to combat technology that spreads propaganda, or “fake news.” As he says, the first step is to analyze how these sites operate. Then we can hope that smart people adept in the use of these technologies can devise tools to combat the spread of false and misleading information.

Long-term, however, “fixing” the problem of fake news will require fixing the humans who have a need to believe whatever it is that such “news” is peddling. That fix will necessarily begin with better civic education and news literacy, but it can’t end there.

Ultimately, we have a problem of political psychology… It would seem that we humans have invented tools that have outstripped our ability to properly use them.