Barton Keyes personifies the ideal insurance claims adjuster in the classic, and must-see, film noir “Double Indemnity.” Although he’s neither a journalist nor a real person, the character portrayed by the inimitable Edward G. Robinson is worth emulating. What truly sets him apart is the Little Man who dwells in the pit of his stomach. This Little Man is always on the lookout for the fake insurance claim with a skepticism born of life experience and a keen understanding of the statistics of actuarial tables.
In our polarized era, when Americans on the left and right scream “fake news,” and can’t even agree on what that means, we would all do well to cultivate our own Little Man, or for some of us, our Little Womyn.
As increasingly sophisticated online hoaxes leapfrog over genuine reporting to rise to the top of your internet search results, you might ask how we, as journalists, can reliably spot deliberate fakery. We discussed this challenge at my first speaking engagement after being sworn in as your national president.
Last fall, I addressed a group of 25 courageous international journalists visiting the United Nations. They included an Iraqi woman covering Basra, a Venezuelan woman on the political corruption beat, and a Ukrainian man reporting on Russian misinformation. Simply reciting platitudes about fake news to these journalists on the front lines of covering government corruption and disinformation struck me as inadequate, so I dived into research on how deliberate lies (let’s call a spade a spade) spread via the internet.
What I learned was chilling. New technology could soon make the shenanigans of the 2016 U.S. election look like child’s play. Increasingly sophisticated tools will allow hoaxers to create hyper-realistic forgeries of digital audio, video and still images that can falsely portray people saying or doing things that never happened.
Three members of the U.S. Congress, Democrat Adam Schiff of California along with Democrat Stephanie Murphy and Republican Carlos Curbelo, both of Florida, sent an open letter to the Director of National Intelligence in September requesting an assessment of the emerging threat posed by these so-called deep fakes. They wrote: “By blurring the line between fact and fiction, deep fake technology could undermine public trust in recorded images and videos as objective depictions of reality.” The letter goes on to warn that this could fuel disinformation campaigns in our elections and deepen divisions in our society, and that deep fake technology could soon be deployed by malicious foreign actors.
Although it still takes a high degree of skill to create such sophisticated fakes, plenty of crude, photoshopped hoaxes circulate today. Take, for example, the doctored image of CNN chief media correspondent Brian Stelter and his wife with a pipe bomb that appeared after a mail bomb arrived at CNN’s New York office in October. “I can’t believe I had to deny this,” Stelter wrote in his widely read newsletter after an Associated Press fact-checker contacted him about the image, which was popping up on far-right Reddit message boards and Facebook pages. The original, undoctored photo, easily found online, came from a real estate website feature about the couple’s Manhattan apartment.
Deliberately falsified images may not be the worst problem. Twitter recently deleted 10,000 accounts that its employees determined were aimed at voter suppression ahead of the Nov. 6 election. Social media, after all, is about distilling messages that speak to the widest possible audience. Some divisive posts even made their way into campaign stump speeches.
It’s difficult to determine whether a particular posting was powered by an automated account, also known as a “bot,” or by a teenager in a basement. After bots played a big role in the 2016 election, information researchers began developing tools to detect them. Two of these were created at Indiana University. “We made Hoaxy and Botometer free for anyone to use because people deserve to know what’s a bot and what’s not,” Filippo Menczer, an informatics and computer science professor at Indiana University, recently told Reuters.
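For the technically inclined, here is a rough idea of what querying such a bot-detection service can look like, as a minimal sketch using the Botometer team’s open-source Python client. The API keys and the sample handle below are placeholders, and the exact parameter names may differ between versions of the client.

    # Sketch: ask Botometer how "bot-like" a Twitter account appears.
    # All credentials and the handle are placeholders, not real values.
    import botometer

    rapidapi_key = "YOUR_BOTOMETER_API_KEY"          # placeholder credential
    twitter_app_auth = {
        "consumer_key": "YOUR_TWITTER_APP_KEY",      # placeholder credential
        "consumer_secret": "YOUR_TWITTER_APP_SECRET",
    }

    bom = botometer.Botometer(
        wait_on_ratelimit=True,                      # back off politely if Twitter rate-limits us
        rapidapi_key=rapidapi_key,
        **twitter_app_auth,
    )

    # Score a single account; the response includes bot-likelihood scores
    # worth eyeballing before trusting or sharing a viral post.
    result = bom.check_account("@example_handle")
    print(result)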
SPJ teamed up with Google to host free workshops around the country teaching journalists how to verify images and research facts using the search engine’s tools. I attended a workshop before the midterm elections and highly recommend that your chapter or newsroom consider hosting one.
Staying current with the latest tech tools is a must, but it will never replace the skeptic inside each of us. After all, you fact-check only what strikes you as fishy. So the next time you’re reading the news or a social media post, pause a moment to listen to your Little Man.
Alex Tarquinio is the 2018-2019 national SPJ president and an independent journalist.