First, bots pounce on fake news in the first few seconds after it’s published, and they retweet it broadly. That’s how they’re designed. And the initial spreaders of a fake news article are much more likely to be bots than humans. Think about the starburst pattern in the Twitter cascade of fake news shown in Figure 2.2. Many of these starbursts are created by bots. What happens next validates the effectiveness of this strategy, because humans do most of the retweeting. The early tweeting activity by bots triggers a disproportionate amount of human engagement, creating cascades of fake news triggered by bots but propagated by humans through the Hype Machine’s network.
Second, bots mention influential humans incessantly. If they can get an influential human to retweet fake news, it simultaneously amplifies and legitimizes it. Menczer and his colleagues point to an example in their data in which a single bot mentioned @realDonaldTrump nineteen times, linking to the false news claim that millions of votes were cast by illegal immigrants in the 2016 presidential election. The strategy works when influential people are fooled into sharing the content. Donald Trump, for example, has on a number of occasions shared content from known bots, legitimizing their content and spreading their misinformation widely in the Twitter network. It was Trump who adopted the false claim that millions of illegal immigrants voted in the 2016 presidential election as an official talking point.
But bots can’t spread fake news without people. In our ten-year study with Twitter, we found that it was humans, more than bots, that helped make false rumors spread faster and more broadly than the truth. In their study from 2016 to 2017, Menczer and his colleagues also found that humans, not bots, were the most critical spreaders of fake news in the Twitter network. In the end, humans and machines play symbiotic roles in the spread of falsity: bots manipulate humans to share fake news, and humans spread it on through the Hype Machine. Misleading humans is the ultimate goal of any misinformation campaign. It’s humans who vote, protest, boycott products, and decide whether to vaccinate their kids. These deeply human decisions are the very object of fake news manipulation. Bots are just a vehicle to achieve an end. But if humans are the objects of fake news campaigns, and if they are so critical to their spread, why are we so attracted to fake news? And why do we share it?
The Novelty Hypothesis
One explanation is what Soroush Vosoughi, Deb Roy, and I called the novelty hypothesis. Novelty attracts human attention because it is surprising and emotionally arousing. It updates our understanding of the world. It encourages sharing because it confers social status on the sharer, who is seen as someone who is “in the know” or who has access to “inside information.” Knowing that, we tested whether false news was more novel than the truth in the ten years of Twitter data we studied. We also examined whether Twitter users were more likely to retweet information that seemed to be more novel.
To assess novelty, we looked at users who shared true and false rumors and compared the content of rumor tweets to the content of all the tweets those users were exposed to in the sixty days prior to their decision to retweet a rumor. Our findings were consistent across multiple measures of novelty: false news was indeed more novel than the truth, and people were more likely to share novel information. This makes sense in the context of the “attention economy” (which I will discuss in detail in Chapter 9): when social media memes compete for our scarce attention, novelty is what captures that attention and motivates our consumption and sharing behaviors online.
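For readers who want to see how such a comparison might work mechanically, here is a minimal sketch in Python. It assumes each tweet has already been summarized as a topic distribution (for instance, by a topic model); the function names and the toy five-topic example are illustrative assumptions, not the study’s actual code or data.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two topic distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def bhattacharyya_distance(p, q, eps=1e-12):
    """Bhattacharyya distance; larger values mean less overlap (more novelty)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(-np.log(np.sum(np.sqrt(p * q))))

def novelty_scores(rumor_topics, exposure_history):
    """Compare a rumor tweet's topic distribution to the average topic
    distribution of everything the user saw in the prior sixty days."""
    background = np.mean(np.asarray(exposure_history, dtype=float), axis=0)
    return {
        "kl": kl_divergence(rumor_topics, background),
        "bhattacharyya": bhattacharyya_distance(rumor_topics, background),
    }

# Toy example with a five-topic model: the rumor concentrates on a topic the
# user has rarely been exposed to, so both novelty scores come out high.
rumor = [0.05, 0.05, 0.80, 0.05, 0.05]
history = [[0.40, 0.30, 0.05, 0.15, 0.10],
           [0.35, 0.35, 0.02, 0.18, 0.10]]
print(novelty_scores(rumor, history))
```

Under measures like these, a rumor concentrated on topics a user has rarely seen in the prior sixty days scores as highly novel relative to that user’s information diet.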
Although false rumors were more novel than true rumors in our study, users may not have perceived them as such. So to further test our novelty hypothesis, we assessed users’ perceptions of true and false rumors by comparing the emotions they expressed in their replies to these rumors. We found that false rumors inspired more surprise and disgust, corroborating the novelty hypothesis, while the truth inspired more sadness, anticipation, joy, and trust. These emotions shed light on what inspires people to share false news beyond its novelty. To understand the mechanisms underlying the spread of fake news, we have to also consider humans’ susceptibility to it.
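To give a flavor of how the emotions in replies can be tallied, here is a minimal lexicon-based sketch. The tiny word-to-emotion mapping is a stand-in assumption for illustration only; a real analysis would rely on an established word-emotion lexicon covering thousands of terms.

```python
from collections import Counter
import re

# Tiny illustrative lexicon (an assumption for this sketch); a real analysis
# would use an established word-emotion lexicon with thousands of entries.
EMOTION_LEXICON = {
    "shocking": "surprise", "unbelievable": "surprise", "wow": "surprise",
    "gross": "disgust", "disgusting": "disgust",
    "sad": "sadness", "heartbreaking": "sadness",
    "hope": "anticipation", "glad": "joy", "reliable": "trust",
}

def emotion_profile(replies):
    """Count emotion-bearing words across the replies to a rumor tweet."""
    counts = Counter()
    for reply in replies:
        for word in re.findall(r"[a-z']+", reply.lower()):
            if word in EMOTION_LEXICON:
                counts[EMOTION_LEXICON[word]] += 1
    return counts

print(emotion_profile(["Wow, this is unbelievable!", "Disgusting if true."]))
# Counter({'surprise': 2, 'disgust': 1})
```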
Our Susceptibility to Fake News
The science of human susceptibility to false beliefs is more developed than the science of fake news but is, unfortunately, no more settled. There’s currently a debate between “classical reasoning” and “motivated reasoning.” Classical reasoning contends that when we think analytically, we are better able to tell what’s real from what’s fake. Motivated reasoning, on the other hand, contends that when we are faced with corrective information about a false belief, the more analytically minded of us “dig in” and increase our commitment to that false belief, especially if we are more partisan or more committed to it to begin with.
My friend and colleague David Rand at MIT teamed up with Gordon Pennycook to study what types of people were better able to recognize fake news. They measured how cognitively reflective people were using a cognitive reflection task (CRT) and then asked them whether they believed a series of true and false news stories. A cognitive reflection task tests how reflective someone is by giving them a simple puzzle, like this one: “A bat and ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?” The problem elicits a fast, intuitive response—ten cents—that, upon reflection, is wrong: if the ball cost ten cents, the bat would have to cost $1.10 and they would total $1.20. Asking people to consider these types of puzzles tests their reflectiveness. And Rand and Pennycook found that people who were more reflective were better able to tell truth from falsity and to recognize overly partisan coverage of true events, supporting classical reasoning.
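Written out, the arithmetic behind the puzzle is a single line of algebra (with b as the price of the ball in dollars):

```latex
b + (b + 1.00) = 1.10 \;\Rightarrow\; 2b = 0.10 \;\Rightarrow\; b = 0.05
```

The ball costs five cents and the bat costs $1.05; the intuitive ten-cent answer would make the total $1.20.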
But repetition causes belief. If you beat us over the head with fake news, we’re more likely to believe it. It’s called the “illusory truth effect”—we tend to believe false information more after repeated exposure to it. People also tend to believe what they already think. (That’s confirmation bias.) So the more we hear something and the more it aligns with what we know, the more likely we are to believe it. Similar thinking has led some cognitive and political scientists to hypothesize that because of confirmation bias, corrective information can backfire—that trying to convince someone that their falsely held belief is wrong actually causes them to dig in to those false beliefs even more. But so far, the evidence for this “backfire effect” appears weak. For example, in three survey experiments, Andrew Guess and Alexander Coppock found “no evidence of backlash, even under theoretically favorable conditions.”
So reflection helps us distinguish truth from falsity, repetition causes belief, and corrective information doesn’t seem to backfire even though a confirmation bias generally leads us to believe what we already know. These findings give us leads for fighting fake news (which I will return to in Chapter 12, when I discuss how we must adapt).
The Economic Motive to Create Fake News
The political motive for creating fake news is abundantly clear from Russia’s foreign interference in Ukrainian and American politics. But the economic motive should not be underestimated. And nowhere has the economic motive to create fake news been more obvious than in Veles, Macedonia.
Veles is a sleepy mountain town with 55,000 residents, two TV channels, and a few lovely churches. It boasts a handful of notable historical figures and events, from Ottoman grand viziers to battles between the Serbian and Ottoman empires in the late fourteenth century. But perhaps Veles’s most significant mark on world history will turn out to be this: during the 2016 U.S. presidential election, its unemployed teenagers discovered how the Hype Machine could make them rich by spreading fake news online.
The teenagers of Veles developed and promoted hundreds of websites that spread fake news to voters in the United States through social media advertising networks. Companies like Google show ads to people browsing the web and pay website creators based on how many high-quality eyeballs they attract. The teenagers of Veles discovered that they could make a lot of money by creating websites and promoting their content through social media networks. The more people read and shared their articles, the more money they made.
They found that fake news attracted more readers and, as we found in our own research, that it was 70 percent more likely to be shared online. They created fake accounts to amplify the signal, and once the trending algorithms got hold of them, the fake news stories received a broadcasting boost, exposing them to even more people, in new areas of the network. What ensued was a deluge of fake news that washed over the American public just as they were heading to the polls. Money flowed in one direction and falsehood flowed in the other, leaving Veles flush with new BMWs, and the United States inundated with false news months before the 2016 presidential election. The town of Veles is only one such example. In 2019 fake news websites generated over $200 million a year in ad revenue. Fake news is big business, and our approaches to solving the problem (which I will address in Chapter 12) must recognize that economic reality.
The End of Reality
Unfortunately, everything I have described so far—from stock market crashes to coronavirus misinformation to measles outbreaks to election interference—is the good news. That’s because the age of fake news is about to get a whole lot worse. We are on the verge of a new era of synthetic media that some fear will usher us into an “end of reality.” This characterization may seem dramatic, but there is no doubt that technological innovation in the fabrication of falsity is advancing at a breakneck pace. The development of “deepfakes” is generating exceedingly convincing synthetic audio and video that is even more likely to fool us than textual fake news. Deepfake technology uses deep learning, a form of machine learning based on multilayered neural networks, to create hyperrealistic fake video and audio. If seeing is believing, then the next generation of falsity threatens to convince us more than any fake media we have seen so far.
In 2018 movie director (and expert impersonator) Jordan Peele teamed up with BuzzFeed to create a deepfake video of Barack Obama calling Donald Trump a “complete and total dipshit.” It was convincing but obviously fake. Peele added a tongue-in-cheek nod to the obvious falsity of his deepfake when he made Obama say, “Now, I would never say these things … at least not in a public address.” But what happens when the videos are not made to be obviously fake, but instead made to convincingly deceive?
Deepfake technology is based on a specific type of deep learning called generative adversarial networks, or GANs, which were first developed by Ian Goodfellow while he was a graduate student at the University of Montreal. One night while drinking beer with fellow graduate students at a local watering hole, Goodfellow was confronted with a machine-learning problem that had confounded his friends: training a computer to create photos by itself. Conventional methods were failing miserably. But that night, while enjoying a few pints, Goodfellow had an epiphany: he wondered whether they could solve the problem by pitting two neural networks against each other. It was the origin of GANs—a technology that Yann LeCun, former head of Facebook AI Research, dubbed “the coolest idea in deep learning in the last 20 years.” It’s also the technology that made Barack Obama appear to call Donald Trump a “dipshit.”
GANs pit two networks against each other: a “generator,” whose job is to generate synthetic media, and a “discriminator,” whose job is to determine whether the content is real or fake. The generator learns from the discriminator’s decisions and optimizes its output to create more and more convincing video and audio. In fact, the generator’s whole job is to maximize the likelihood that it will fool the discriminator into thinking the synthetic video or audio is real. Imagine a machine, set in a hyperloop, trying to get better and better at fooling us. That’s the future of reality distortion in a world of exponentially improving GAN technology.
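That tug-of-war can be written as a single minimax objective. This is the standard formulation from Goodfellow’s original GAN paper rather than anything specific to this chapter: the discriminator D tries to push the value up by correctly labeling real data x and generated samples G(z), while the generator G tries to push it down by producing samples that D mistakes for real.

```latex
\min_G \max_D \;
\mathbb{E}_{x \sim p_{\text{data}}}\bigl[\log D(x)\bigr] +
\mathbb{E}_{z \sim p_z}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
```

Here p_data is the distribution of real media and p_z is the random noise the generator draws on; as the two networks iterate, the generator’s samples become harder and harder to distinguish from the real thing.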
GANs can be used for good as well—for example, to generate convincing synthetic data in high-energy physics experiments or to accelerate drug discovery. But the potential geopolitical and economic harm they can create is troubling. Ambassador Daniel Benjamin, former coordinator for counterterrorism at the U.S. State Department, and Steven Simon, former National Security Council senior director for counterterrorism in the Clinton and Obama administrations, paint a grim picture: “One can easily imagine the havoc caused by falsified video that depicts foreign Iranian officials collaborating with terrorists to target the United States. Or by something as simple as invented news reports about Iranian or North Korean military plans for preemptive strikes on any number of targets. … It might end up causing a war, or just as consequentially, impeding a national response to a genuine threat.”
Deepfaked audio is already being used to defraud companies of millions of dollars. In the summer of 2019, Symantec CTO Hugh Thompson revealed that his company had seen deepfaked audio attacks against several of its clients. The attackers first trained a GAN on hours of public audio recordings of a CEO’s voice: news interviews, public speeches, earnings calls, and congressional testimony. Using these audio files, the attackers built a system to automatically mimic the CEO’s voice. They would call, for example, the CFO of the company and pretend to be the CEO requesting an immediate wire transfer of millions of dollars into a bank account they controlled. The system didn’t just deliver a prerecorded message; it converted the attacker’s voice into the CEO’s voice in real time, so the attacker could engage in a realistic conversation and answer the CFO’s questions. The synthesized audio of the CEO’s voice was so convincing that, coupled with a good story about why the money needed to be transferred right away—they were about to lose a big deal, or they had to beat an impending deadline at the end of the fiscal quarter—the CFO would comply with the CEO’s request and execute the transfer. Each of these attacks, Thompson said, cost the target companies millions of dollars.
As Jordan Peele made Obama say in his BuzzFeed deepfake: “It may sound basic, but how we move forward, in the Age of Information, is going to be the difference between whether we survive or whether we become some kind of fucked up dystopia.” To understand whether dystopia is our destiny, we have to understand how the Hype Machine works. To do that, we’ll need to go back to first principles, starting with a deep dive under the hood of the Hype Machine, followed by an examination of social media’s effect on our brains.
* Disinformation is deliberate falsehood spread to deceive, while misinformation is falsehood spread regardless of its intent. Disinformation is a subset of misinformation.
* Chris Bail and his team found no evidence that interactions with IRA Twitter accounts in late 2017 impacted political attitudes or behavior. But several limitations prevented them from determining “whether IRA accounts influenced the 2016 presidential election.” The study was conducted a year after the election, after the IRA had ramped down their information operation and Twitter had suspended two-thirds of their accounts. The sample included no independents and only frequent Twitter users, was not representative of U.S. voters, and did not consider voting behavior. The study did find suggestive evidence that IRA interactions changed three things: opposing party ratings among “low news interest” respondents, the number of political accounts followed among “high news interest” respondents, and both opposing party ratings and the number of political accounts followed among Democrats.