As the reporting about Cambridge Analytica’s sophisticated propaganda campaign suggests, we humans are far more “manipulatable” than we like to think, and Huxley was wrong to predict that pacifying or misleading us would require drugs (remember Soma?).
The linked article by two Harvard University researchers suggests that the discovery of this political operation raises the stakes of our ongoing concerns about the impact of digital technology on democracy.
There was already a debate raging about how targeted digital ads and messages from campaigns, partisan propagandists and even Russian agents were sowing outrage and division in the U.S. electorate. Now it appears that Cambridge Analytica took it one step further, using highly sensitive personal data taken from Facebook users without their knowledge to manipulate them into supporting Donald Trump. This scandal raises major questions about how this could have happened, how it can be stopped, and whether the connection between data-driven ads and democracy is fundamentally toxic.
It also raises concerns about the new ability of political operatives, armed with the results of political psychology research, to identify and prey on voters’ vulnerabilities. Extensive personal data amassed through social media platforms, especially Facebook, can be used to manipulate voters and distort democratic debate. Cambridge Analytica exploited that ability on behalf of the Trump campaign.
We’ve come a long, long way from the days when we collectively received our news from mass media. Instead, we now have what Nicholas Negroponte once predicted and dubbed “the Daily Me”: information (and disinformation) that feeds a personalized reality, Eli Pariser’s “filter bubble,” that isn’t necessarily shared with others.
On the internet, you don’t know much about the political ads you’re shown. You often don’t know who created them, since the disclaimers are tiny, if they exist at all. Nor do you really know who else is seeing them. Sure, you can share a political ad — thus fulfilling the advertiser’s hopes — and then at least some people you know will have seen the same ad. But you don’t really know whether your neighbor has seen it, let alone someone across the state or the country. Digital advertising companies, moreover, distribute ads based on how likely you are to interact with them: they send you the ads they think you are likeliest to engage with. They don’t dictate what that engaging content looks like — but they know (as all advertisers do) that content works well when it makes you highly emotional. An ad like that doesn’t make you contemplative or curious; it makes you elated, excited, sad or angry. It can make you so angry, in fact, that you share it and make others angry — which gives the ad free publicity, effectively lowering the advertiser’s cost per viewer, since they pay for the initial outreach and not the shares.
What this can lead to is communities and, eventually, a nation infuriated by things others don’t know about. The information that makes us angriest becomes the information least likely to be questioned. We wind up stewing over things that, by design, few others can correct, engage with or learn from. A Jeffersonian public square where lots of viewpoints go to mingle, debate and compromise, this is not.
As the authors note, none of this means that Facebook and Twitter intentionally undermined Hillary Clinton. It’s much worse: the platforms use the personal data to which they become privy to divide the American population and then feed us “highly personalized messages designed to push our particular buttons so well that we share them and they go viral, thus keeping people on the site longer.”
Social media rewards provocation — again, without repercussion, since we usually share content only with our friends, in a way that is largely invisible to the broader public. Morality and integrity count for little in online advertising.
The real question here isn’t which campaign got the advantage. The real question is whether this micro-targeted free-for-all, as currently designed, should be allowed in the political sphere at all — with so little transparency about who is pulling these strings and how they are doing it.
We truly do inhabit a new world. I don’t know how brave it is.