
Coronavirus misinformation makes neutrality a distant memory for tech companies

Facing the prospect that hoaxes or misinformation could worsen a global pandemic, tech platforms are taking control of the information ecosystem like never before.
Image: A passenger wearing a surgical mask uses his iPhone while riding an uptown subway in New York City on March 18, 2020. (Robert Nickelsberg / Getty Images)

Open up Instagram these days and you might be bombarded with calls to "Stay Home."

On YouTube, you may see a link to a government website about the coronavirus.

Or go to Twitter and try to find the phrase "social distancing is not effective." It might be there, but probably not for long — because Twitter has banned the phrase as harmful.

A few years ago, these kinds of warnings and filters would have been hard to imagine. Most major consumer technology platforms embraced the idea that they were neutral players, leaving the flow of information up to users.

Now, facing the prospect that hoaxes or misinformation could worsen a global pandemic, tech platforms are taking control of the information ecosystem like never before. It's a shift that may finally dispose of the idea that Big Tech provides a "neutral platform" where the most-liked idea wins, even if it's a conspiracy theory.

"What you're seeing is the platforms' being forced into a public health stand more than they've ever been before," said Ethan Zuckerman, director of the Center for Civic Media at the Massachusetts Institute of Technology.


"It seems like the platforms have decided to take a clear stand, where they see COVID-19 as a significant enough public health problem that they're comfortable putting their thumb on the scale even if it runs the risk of some of their users claiming it's an unfair restriction on free speech," Zuckerman said.

From the start, major consumer internet companies had some rules. Most platforms didn't allow pornography or gore. As militants in the Middle East moved online, many companies — most notably Google and Facebook — worked to identify and remove content that tried to spread propaganda or recruit people. Still, those efforts weren't fully successful. YouTube, for example, wrongfully removed video documentation of human rights abuses as part of its campaign to delete terrorist-related content.

Beyond those narrow exceptions, tech companies resisted calls to influence what their users saw.

"We are a tech company, not a media company," Facebook CEO Mark Zuckerberg said in 2016, insisting he wanted to build tools, not moderate content. In 2018, he said Facebook shouldn't remove posts that deny the existence of the Holocaust so users have room to make unintentional mistakes.

That began to change in recent years, as academics, politicians, civil rights groups and even former employees scrutinized companies and pushed for change amid reports of lackluster content moderation. While tech companies hadn't been making specific editorial decisions, the systems that determined what people saw — often based on complex algorithms that tried to maximize engagement — became the focus of intense criticism.

Tech companies reacted. Major platforms stepped up enforcement around hate speech and abuse. Many changed how their systems worked. YouTube pledged last year to no longer recommend conspiracy videos, while Twitter added a feature for users to follow certain topics selected by the company. Amazon removed more than a dozen books that unscientifically claimed that a homemade bleach, chlorine dioxide, could cure conditions from malaria to childhood autism.

Facebook now has an elaborate rulebook on what stays up and what comes down, the result of countless internal meetings and feedback from lawmakers, interest groups and users. It has also been working on creating a body, independent at least in theory, that would rule on content removal questions almost like a supreme court.

That new willingness to moderate has culminated in an industrywide effort to crack down on misinformation and push people toward authoritative information at a particularly crucial time.

"Neutrality — there's no such thing as that, because taking a neutral stance on an issue of public health consequence isn't neutral," said Whitney Phillips, a professor of communication at Syracuse University who researches online harassment and disinformation.

"Choosing to be neutral is a position," she said. "It's to say, ‘I am not getting involved because I do not believe it is worth getting involved.' It is internally inconsistent. It is illogical. It doesn't work as an idea.

"So these tech platforms can claim neutrality all they want, but they have never been neutral from the very outset," she added.

Image: A Long Island Rail Road employee disinfects a train car with an eco-friendly cleaner at the LIRR station in Hicksville, New York, on March 19, 2020. (Steve Pfost / Newsday RM via Getty Images)

Many major tech platforms have gone beyond moderating coronavirus content to actively push messages from health professionals.

Instagram Chief Executive Adam Mosseri said Tuesday that the Facebook-owned service was being especially careful with recommendations, trying to vet accounts and posts before suggesting them to users. And he said it would take a hard line on bad medical advice.

Part of Instagram's push on the coronavirus has been to create a "sticker" with the phrase "Stay Home." People who post to their Instagram stories with the sticker may have their posts picked up and distributed widely, encouraging others to take up social distancing.

"Any misinformation related to COVID-19 that creates risk of real-world harm — we've seen things like ‘drink bleach if you're feeling any of the symptoms' as, like, a dangerous thing to say, a dangerous piece of information about coronavirus — we will take off Instagram, whether or not it's from a politician, no matter who it's from," Mosseri said during a live chat.

Facebook has a coronavirus "information center" with tips on subjects like hand-washing, and a search on the app for the name of the virus turns up posts from sources such as Johns Hopkins University, the World Health Organization and the American Medical Association.

Zuckerberg himself has posted frequently on the subject, even hosting a senior U.S. health official, Dr. Anthony Fauci, for a live video discussion.

Twitter has laid out sweeping policy changes related to the coronavirus. Last week, the company said it was broadening its definition of harm to cover 11 categories of tweets that it will ask users to remove, ranging from denials of health authorities' recommendations to descriptions of ineffective treatments.


"We recognize that people are increasingly coming to Twitter to find credible and authoritative information about the pandemic and we take that responsibility very seriously," Twitter said in a statement Tuesday.

Twitter's moves drew applause from comedian Sacha Baron Cohen, a harsh critic of tech companies, who thanked Twitter "for putting facts and science ahead of misinformation and profit."

These efforts, however, have their limits.

Misinformation about the coronavirus is still spreading far and wide, including through messaging apps such as WhatsApp and iMessage. Snopes, the fact-checking organization, said Monday that it was overwhelmed by coronavirus misinformation.

"In moments where there's a lot of uncertainty, we will gravitate towards any information that has a unique or novel take on the problem. People share it because they aren't seeing it in other places," said Joan Donovan, research director of Harvard University's Shorenstein Center on Media, Politics and Public Policy.

Donovan, who researches disinformation campaigns, said the pandemic shows the value of labels on social media posts. Reddit, which has a thriving message board on the coronavirus, allows posts on the science behind the virus to be tagged as peer-reviewed or not.

"If our posts are not being curated and tagged with very important markers of legitimacy, then we don't know what to trust, and we start looking for other signals," she said.

Tech companies are also likely to pull back from some of these measures after the outbreak is over.

Politicians may not object when Instagram is working to suppress hoaxes about the coronavirus, but they have voiced serious concerns about moderation of many other topics.

"When they have pressure in the media from people saying, ‘You're controlling my speech,' my experience is that Facebook is very, very sensitive to that pressure, especially from the right," said Zuckerman of MIT.

Zuckerberg, for one, doesn't seem to think he's setting a precedent for how Facebook will act in future cases of misinformation.

"When you're dealing with a pandemic, a lot of the stuff we're seeing just crossed the threshold," the Facebook CEO told The New York Times in an interview about the coronavirus. "So it's easier to set policies that are a little more black and white and take a much harder line."