Tag Archives: AI

Messing With Our Minds

As if the websites peddling conspiracy theories and political propaganda weren’t enough, we now have to contend with “Deepfakes.” Deepfakes, according to the Brookings Institution, are 

videos that have been constructed to make a person appear to say or do something that they never said or did. With artificial intelligence-based methods for creating deepfakes becoming increasingly sophisticated and accessible, deepfakes are raising a set of challenging policy, technology, and legal issues.

Deepfakes can be used in ways that are highly disturbing. Candidates in a political campaign can be targeted by manipulated videos in which they appear to say things that could harm their chances for election. Deepfakes are also being used to place people in pornographic videos that they in fact had no part in filming.

Because they are so realistic, deepfakes can scramble our understanding of truth in multiple ways. By exploiting our inclination to trust the reliability of evidence that we see with our own eyes, they can turn fiction into apparent fact. And, as we become more attuned to the existence of deepfakes, there is a corollary effect: they undermine our trust in all videos, including those that are genuine. Truth itself becomes elusive, because we can no longer be sure of what is real and what is not.

The linked article notes that researchers are trying to devise technologies to detect deepfakes, but until there are apps or other tools that can identify these very sophisticated forgeries, we are left with “legal remedies and increased awareness,” neither of which is very satisfactory.

We already inhabit an information environment that has done more damage to social cohesion than earlier efforts to divide and mislead. Thanks to the ubiquity of the Internet and social media (and the demise of media that can genuinely be considered “mass”), we are all free to indulge our confirmation biases, free to engage in what a colleague dubs “motivated reasoning.” It has become harder and harder to separate truth from fiction, or moderate spin from outright propaganda.

One result is that thoughtful people, people who want to be factually accurate and intellectually honest, are increasingly unsure of what they can believe.

What makes this new fakery especially dangerous is that, as the linked article notes, most of us do think that “seeing is believing.” We are far more apt to accept visual evidence than other forms of information. There are already plenty of conspiracy sites that offer altered photographic “evidence”: of the aliens who supposedly landed at Roswell, of purportedly criminal behavior by public figures, and the like. Now people intent on deception have the ability to make those alterations virtually impossible to detect.

Even if technology is developed that can detect fakery, will “motivated” reasoners rely on it?

Will people be more likely to believe a deepfake or a detection algorithm that flags the video as fabricated? And what should people believe when different detection algorithms—or different people—render conflicting verdicts regarding whether a video is genuine?

We are truly entering a new and unsettling “hall of mirrors” version of reality.