Epidemic

The Weaponization of ‘News’: How Algorithms Are Shaping Behavior, from Voting to Vaccines

  • “As a health community, we need to do a better job of getting creative narratives out there that are accurate, that are credible, that are appealing,” says Heidi Larson, Director of the Vaccine Confidence Project and anthropology professor at the London School of Hygiene and Tropical Medicine.
  • “We have a culture of impunity now — not just for misinformation, but for the social-media companies,” says Imran Ahmed, CEO of the Center for Countering Digital Hate. 
  • Given their essential role in distributing public-health information, social-media platforms need to be more transparent about reporting — and blocking — racist algorithms and “bad actors.”

More than half of Americans get their news from social media, according to the Pew Research Center. Yet the delivery of that news — as well as its provenance — is far from straightforward. Social-media news is propelled by algorithms, which determine what stories gain exposure — and who sees them. These algorithms filter the news we see, set priorities for how and where we see it, and select additional stories based on our viewing patterns. In the midst of elections and a pandemic, they can powerfully influence decisions on voting and vaccines.

Yet the algorithms don’t assess the validity of what comes up — and they can be rife with bias, which becomes amplified as millions of users read, respond to, and redirect the news that strikes them. Along with the algorithms’ curation of our news, we unwittingly curate our own news exposure, as well as the reading habits of others, through what we engage with, comment on, and promote — whether eagerly or angrily. 

“Those algorithms are charged with looking [at and] identifying patterns in our engagement behaviors,” says Mútale Nkonde, founder of AI for the People, a nonprofit that researches how artificial intelligence can be used for social good. “So what do you like? What do you retweet? What do you comment on? And then serving you more news.”
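The loop Nkonde describes can be sketched in a few lines of code. The example below is a deliberately simplified, hypothetical illustration — the post fields, topics, and scoring are assumptions for demonstration, not any platform’s actual ranking system — but it captures the core point: the feed is ordered by past engagement, and nothing in it checks whether a post is true.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str

# What this user has liked, retweeted, and commented on recently
# (illustrative counts only).
engagement_history = {"election": 12, "vaccines": 3, "sports": 1}

def score(post: Post) -> int:
    """Rank a candidate post by how much the user has engaged with
    its topic before: more past engagement means higher placement."""
    return engagement_history.get(post.topic, 0)

candidates = [
    Post("a", "election"),
    Post("b", "sports"),
    Post("c", "vaccines"),
]

# Serve the most-engaged-with topics first. Note that accuracy never
# enters the calculation -- only engagement does.
feed = sorted(candidates, key=score, reverse=True)
for post in feed:
    print(post.post_id, post.topic)
```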

In Episode 74 of EPIDEMIC, Dr. Céline Gounder discusses the metastasizing threat of algorithms, disinformation, and bad actors across social-media platforms with Mútale Nkonde, Corin Faife, Heidi Larson, and Imran Ahmed.

The weaponization of algorithms

For those seeking to shape opinion — and elections — algorithms can be weaponized, delivering disinformation to specific demographics with the aim of shaping people’s beliefs. Nkonde calls it “disinformation as a form of warfare,” and says that African Americans and other people of color may be at higher risk of exposure to misinformation. In part, she says, this may be due to greater skepticism of traditional news sources: Nkonde describes “a lack of trust of the news media because of historic racial bias and wrongful … framing of Black people and the Black experience.” 

But disinformation campaigns have also systematically targeted Black Americans — notably, during the 2016 election, when disinformation soared in an effort to suppress Black voting in swing states. In the Special Counsel investigation into Russian interference in the 2016 election, Black Americans were found to be the group most targeted by disinformation. Russian agents led campaigns that often began with an accepted truth — for example, the existence of systemic racism with economic ramifications. From that accepted truth, the campaigns would springboard into “a massive lie,” says Nkonde — such as the claim that unless reparations were promised on Day One of a new administration, Black Americans should not vote. 

These campaigns also encouraged Black Americans to vote “down ballot” — meaning a voter would leave the ballot blank or write a hashtag at the bottom. Such ballots often end up discarded, due to a lack of clarity on the voter’s choice. In Detroit — a swing city in a swing state — about 70,000 people voted “down ballot” in 2016. The state swung from Obama in 2012 to Trump in 2016, with Trump winning by just over 10,000 votes — Michigan’s narrowest-ever margin for a presidential election.

After the election, the campaign briefly went silent, but the hashtag soon resurfaced, this time attached to a rumor that African Americans were immune to COVID. Accounts using these hashtags appeared to belong to Black Americans — yet the messages sided with white supremacists and conspiracy theorists, says Nkonde, leading her to question the authenticity of the profiles. As it became clear that African Americans were, in fact, disproportionately affected by COVID, the hashtag changed its message — and denied the pandemic altogether.

The onslaught of disinformation — the array of messages, the sense that they come from all directions, and the uncertainty about which might be true — can create what Nkonde calls “information fog.” “You log onto your Twitter account and … you don’t know what’s real,” she says, comparing the experience to waking up and walking into a fog, “where you just can’t really see anything and … you somehow have to … make your way in the world anyway.” Nkonde describes one Twitter account that tweeted 147,000 times in a single day — more than one tweet per second. Some Twitter accounts have their own websites, she notes, and reference their “news” stories in a disorienting self-referential spiral that can be difficult to parse. 

While the volume of tweets can sound like a disinformation firehose, the campaigns are often quite precise in their targeting. Those gaming the algorithms request that specific demographics see their posts, and the platforms’ systems analyze users’ profiles to determine who fits a given demographic.

The result can be marketing and messaging with racism at its core. A 2016 ProPublica report showed that advertisers on Facebook could block certain racial groups from seeing housing advertisements — a violation of the Fair Housing Act. While Facebook tweaked its system in response to a 2019 lawsuit by the Department of Housing and Urban Development, advertisers can still discriminate with ease: The targeting system now uses ethnic or cultural interests rather than identities — with very similar results. 

Earlier this year, a study published by data reporter Corin Faife at The Markup examined who saw sponsored posts from public-health agencies on Facebook — and found startling disparities. “Only 3% of the Black people in our panel saw any of these,” says Faife, “compared to 6.6% of white panelists and 9.5% of Asian panelists.” 

“There isn’t really a legal standard where we say that public-health information has to reach people equally,” says Faife. Although the report’s sample size was small, Faife says the need for change is clear: “We really need more transparency — especially if Facebook is going to play such a big [role] in distributing public-health information.” 

Curtailing the spread of disinformation 

If equitable distribution of information is one goal in making algorithms more fair, the suppression of disinformation is another. And in order to squelch its spread, those combating disinformation must first understand where it starts, how it moves, and what makes it so compelling. 

Heidi Larson, director of the Vaccine Confidence Project and anthropology professor at the London School of Hygiene and Tropical Medicine, tracks sources of disinformation online along with a team of researchers. “It’s a bit like the weather room,” she says. “We watch where things move and how fast they move. We capture … a word or phrase and a piece of the news or the rumor, and you can see where it goes around the world and … who picks it up and who amplifies it.” Many of the campaigns start in the U.S., and are tailored to the concerns — and vulnerabilities — of different regions, says Larson: “Those seeds, those beginnings of a rumor … get embellished with local culture, local politics, local anxieties, local histories.”

Regions with information voids, or areas where trust in government and public health runs low, can be particularly fertile ground for such rumors. In the absence of abundant credible information, says Larson, rumors and disinformation flourish. Without substantial communications budgets, it can be hard for government agencies — including the Centers for Disease Control and Prevention — to counter these enticing, fast-moving stories with evidence-based narratives of their own. “Stories are really powerful means of communication … and this is where the gap between science and public, I think, is the biggest,” says Larson. “As a health community, we need to do a better job of getting creative narratives out there that are accurate, that are credible, that are appealing in ways that some of these … alternative voices are.”

Take the case of hydroxychloroquine, a drug used to fight malaria: Early in the pandemic, an American doctor claimed that the antimalarial was an effective treatment for COVID. President Trump endorsed it, and rumors of hydroxychloroquine’s efficacy against COVID quickly took hold, spreading with particular intensity in Nigeria, where the treatment was touted as a cure. In response to social media’s role in propagating such disinformation, the U.S. House Subcommittee on Communications and Technology held a hearing on March 25, 2021, attended by some of the biggest social-media companies, including Facebook, Twitter, and Google. At the end of the hearing, the CEOs all said they would take action against the disinformation campaigns. But disinformation is both rampant and lucrative, and work against it has been spotty and slow.

For social-media platforms, the very algorithms that amplify disinformation can also amplify profits. Imran Ahmed, CEO of the Center for Countering Digital Hate, studies how users’ personal data propels the movement of disinformation online. He offers the example of an algorithm written solely to find the most engaging content — whether true or false, innocuous or harmful — in order to maximize users’ time on a social-media platform (and maximize the number of ads they view). A company might make around 5 cents per ad — but across a billion users, that could add up to $50 million per day, and nearly $20 billion per year. With billions at stake, companies are often reluctant to change their algorithms.
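A rough back-of-envelope version of Ahmed’s example is below. The figures are the illustrative ones cited in the episode, and the assumption that each user sees just one additional ad per day is ours, not Ahmed’s — the point is only the order of magnitude.

```python
# Illustrative figures from Ahmed's example; one extra ad per user per
# day is an assumption made here for the sake of the arithmetic.
revenue_per_ad = 0.05                 # ~5 cents per ad impression
daily_active_users = 1_000_000_000    # a billion users

daily_revenue = revenue_per_ad * daily_active_users
yearly_revenue = daily_revenue * 365

print(f"${daily_revenue:,.0f} per day")    # $50,000,000 per day
print(f"${yearly_revenue:,.0f} per year")  # ~$18,250,000,000 per year
```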

A new report released by the Center for Countering Digital Hate shows how pervasive COVID falsehoods have been — with more than half of Instagram posts recommended to study participants containing misinformation about COVID, and one fifth of recommended posts promoting inaccuracies about vaccines. Yet most of this disinformation comes from a surprisingly small number of accounts: Ahmed’s organization identified just 12 accounts as being responsible for more than 60 percent of anti-vaccine content, and christened them the “Disinformation Dozen.”

One of the more famous distributors of COVID disinformation has been Robert F. Kennedy Jr., who has claimed that Black children should not get COVID vaccines because they have stronger reactions to them. “He’s spoken scientific nonsense,” says Ahmed, “and the end result of his misinformation, if it was believed by the African-American community, would be death.”

Since the release of Ahmed’s report, 12 state attorneys general and U.S. lawmakers from both parties have asked social-media companies to deplatform the “Disinformation Dozen.” But most of the accounts remain up. In an April Senate hearing, Facebook executive Monika Bickert falsely testified that Facebook had deplatformed 10 of the “Disinformation Dozen.”

“We have a culture of impunity now — not just for misinformation, but for the social-media companies,” says Ahmed. He points to Section 230 of the Communications Decency Act, which shields such companies from legal liability for content posted by their users. “They have really hidden behind that. They’ve gamed it in an aggressive way, saying, ‘You know what? We will obfuscate, will delay, will lie where necessary. We’ll just get through the next day so we can continue to reap as many profits as possible.’”

But Ahmed is optimistic that “the time of impunity will come to an end.” And he believes that one of our most effective tools for reining in the spread of disinformation is simply deplatforming bad actors. “It damages their ability to fundraise,” says Ahmed. “It damages their ability to recruit. And it lessens the amount of information they get about the particular likes and dislikes of their audience. So we know deplatforming is incredibly effective in disrupting the malignant activity of these actors.”

The users on those platforms seem to know it, too. Lawsuits over deplatforming are piling up, and late last month, Florida passed a bill that would prohibit social-media companies from deplatforming political candidates. 

With social media the primary source of news for so many Americans, with algorithms amplifying messages, and with users’ own engagement propelling stories to ever-broader viewership, our reading and viewing habits are not only our own. Ahmed points to the way our interaction with information shapes that information for others, too — and to our shared responsibility for curtailing disinformation. “We are highly interdependent on the information that others consume,” says Ahmed, “so that we can build a coherent worldview that allows us to prevent things like violent extremism … [that] damage the whole of society.”

“It’s going to take all of us,” says Ahmed. “Government, civil society, individuals, but most of all, the tech companies. They are addicted to the profits that come from misinformation and hate.”