Messing With Our Minds

As if the websites peddling conspiracy theories and political propaganda weren’t enough, we now have to contend with “Deepfakes.” Deepfakes, according to the Brookings Institution, are 

videos that have been constructed to make a person appear to say or do something that they never said or did. With artificial intelligence-based methods for creating deepfakes becoming increasingly sophisticated and accessible, deepfakes are raising a set of challenging policy, technology, and legal issues.

Deepfakes can be used in ways that are highly disturbing. Candidates in a political campaign can be targeted by manipulated videos in which they appear to say things that could harm their chances for election. Deepfakes are also being used to place people in pornographic videos that they in fact had no part in filming.

Because they are so realistic, deepfakes can scramble our understanding of truth in multiple ways. By exploiting our inclination to trust the evidence we see with our own eyes, they can turn fiction into apparent fact. And as we become more attuned to the existence of deepfakes, there is a corollary effect: they undermine our trust in all videos, including those that are genuine. Truth itself becomes elusive, because we can no longer be sure of what is real and what is not.

The linked article notes that researchers are trying to devise technologies to detect deepfakes, but until there are apps or other tools that can identify these very sophisticated forgeries, we are left with “legal remedies and increased awareness,” neither of which is very satisfactory.

We already inhabit an information environment that has done more damage to social cohesion than previous efforts to divide and mislead. Thanks to the ubiquity of the Internet and social media (and the demise of media that can genuinely be considered “mass”), we are all free to indulge our confirmation biases–free to engage in what a colleague dubs “motivated reasoning.” It has become harder and harder to separate truth from fiction, moderate spin from outright propaganda.

One result is that thoughtful people–people who want to be factually accurate and intellectually honest–are increasingly unsure of what they can believe.

What makes this new fakery especially dangerous is that, as the linked article notes, most of us do think that “seeing is believing.” We are far more apt to accept visual evidence than other forms of information. There are already plenty of conspiracy sites that offer altered photographic “evidence”–of the aliens who landed at Roswell, of purportedly criminal behavior by public figures, etc. Now people intent on deception have the ability to make those alterations virtually impossible to detect.

Even if technology is developed that can detect fakery, will “motivated” reasoners rely on it?

Will people be more likely to believe a deepfake or a detection algorithm that flags the video as fabricated? And what should people believe when different detection algorithms—or different people—render conflicting verdicts regarding whether a video is genuine?

We are truly entering a new and unsettling “hall of mirrors” version of reality.

As If We Needed Another Looming Threat

If I didn’t have a platform bed, I’d just crawl under my bed and hide.

I’m frantic about the elections. I’m depressed about climate change and our government’s unwillingness to confront it. The last issue of The Atlantic had several lengthy stories about technologies that will disrupt our lives and could conceivably end them. (Did you know that the government is doing research on the “weaponizing” of our brains? That Alexa is becoming our best friend and confidant?)

And now there’s “Deepfakes.”

Senator Ben Sasse (you remember him–he talks a great game, but then folds like a Swiss Army knife and votes the GOP party line) has written a truly terrifying explanation of what’s on the horizon.

Flash forward two years and consider these hypotheticals. You’re seated at your desk, having taken your second sip of coffee and just beginning to contemplate the breakfast sandwich steaming in the bag in front of you. You click on your favorite news site, one you trust. “Unearthed Video Shows President Conspiring with Putin.” You can’t resist.

The video, in ultrahigh definition, shows then-presidential candidate Donald Trump and Vladimir Putin examining an electoral map of the United States. They are nodding and laughing as they appear to discuss efforts to swing the election to Trump. Jared Kushner and Ivanka Trump smile wanly in the background. The report notes that Trump’s movements on the day in question are difficult to pin down.

Alternate scenario: Same day, same coffee and sandwich. This time, the headline reports the discovery of an audio recording of Democratic presidential candidate Hillary Clinton and Attorney General Loretta E. Lynch brainstorming about how to derail the FBI investigation of Clinton’s use of a private server to handle classified emails. The recording’s date is unclear, but its quality is perfect; Clinton and Lynch can be heard discussing the attorney general’s airport tarmac meeting with former president Bill Clinton in Phoenix on June 27, 2016.

The recordings in these hypothetical scenarios are fake — but who are you going to believe? Who will your neighbors believe? The government? A news outlet you distrust?

Sasse writes that these Deepfakes, which he defines as fabricated video or audio recordings that appear authentic, are likely to send American politics into an even deeper tailspin, and he warns that Washington isn’t paying nearly enough attention to them. (Well, of course not. The moral midgets who run our government have power to amass, and a public to fleece–that doesn’t leave them time or energy to address the actual issues facing us.)

Consider: In December 2017, an amateur coder named “DeepFakes” was altering porn videos by digitally substituting the faces of female celebrities for the porn stars’. Not much of a hobby, but it was effective enough to prompt news coverage. Since then, the technology has improved and is readily available. The word deepfake has become a generic noun for the use of machine-learning algorithms and facial-mapping technology to digitally manipulate people’s voices, bodies and faces. And the technology is increasingly so realistic that the deepfakes are almost impossible to detect.
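For the technically curious, here is a deliberately crude Python sketch of the substitution step Sasse describes. Real deepfakes train a machine-learning model (an autoencoder or GAN) on thousands of images of the target so it can synthesize a photorealistic replacement face; this toy version just locates each face with an off-the-shelf OpenCV detector and pastes a static image over it. Take it as an illustration of the pipeline’s shape (find the face, fit the replacement, composite it back in), not of the actual technique, and note that the file names are placeholders.

```python
# Toy illustration of face substitution, the step at the core of a
# deepfake pipeline. A real deepfake replaces the "paste an image"
# step with output from a trained neural network.
import cv2

SOURCE_VIDEO = "source.mp4"    # hypothetical: video whose faces get replaced
REPLACEMENT_FACE = "face.png"  # hypothetical: image pasted over each face

# Off-the-shelf frontal-face detector bundled with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
face_img = cv2.imread(REPLACEMENT_FACE)

cap = cv2.VideoCapture(SOURCE_VIDEO)
writer = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # "Facial mapping," crudely: find each face's bounding box,
    # then resize the replacement face and composite it in.
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        frame[y:y + h, x:x + w] = cv2.resize(face_img, (w, h))
    if writer is None:
        frame_h, frame_w = frame.shape[:2]
        fps = cap.get(cv2.CAP_PROP_FPS) or 24.0
        writer = cv2.VideoWriter("swapped.mp4",
                                 cv2.VideoWriter_fourcc(*"mp4v"),
                                 fps, (frame_w, frame_h))
    writer.write(frame)
cap.release()
if writer is not None:
    writer.release()
```

The output of a sketch like this would fool no one; the unnerving part is that the learned version of that middle step, trained on enough footage of the target, produces results that increasingly fool everyone.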

Creepy, right? Now imagine what will happen when America’s enemies use this technology for less sleazy but more strategically sinister purposes.

I’m imagining. And you’ll forgive me if I find Sasse’s solution–Americans have to stop distrusting each other–pretty inadequate, if not downright fanciful. On the other hand, I certainly don’t have a better solution to offer.

Maybe if I lose weight I can squeeze under that platform bed…