
In the years since the onset of the Covid-19 pandemic, many health care professionals have turned to Twitter as a way to share news and advice about public health. But Elon Musk’s takeover of Twitter, which closed last week, is raising concerns that the self-described “free speech absolutist” could change the social-media platform in ways that promote, rather than curb, the spread of mis- and disinformation.

Back in April, World Health Organization official Mike Ryan warned about the dangers of misinformation in the wake of Musk’s Twitter deal. More recently, Food and Drug Administration Commissioner Robert Califf condemned a surge of “divisive and hateful language” on Twitter but vowed to remain on the platform in an effort to protect public health, while cardiologist Eric Topol also said it was important for medical professionals to remain on Twitter to share facts and counter disinformation.


While Musk’s history of downplaying and making false statements about the pandemic doesn’t necessarily inspire confidence about his plans, misinformation researchers say it’s hard to predict how the Tesla and SpaceX CEO will ultimately approach fact-checking, content moderation, and other concerns. Here’s what we do know so far.

How Elon Musk’s changes to Twitter could impact misinformation

Shortly after closing the deal to buy Twitter on Oct. 27, Musk fired four of Twitter’s top executives: CEO Parag Agrawal, chief financial officer Ned Segal, general counsel Sean Edgett, and legal chief Vijaya Gadde. Gadde had pioneered many of Twitter’s trust and safety initiatives, such as placing tweets that violate the platform’s rules behind an “interstitial” offering a warning to readers, and was central to Twitter’s 2021 decision to permanently ban former President Donald Trump from the platform.

Musk had previously said that he planned to reverse the Trump ban. But on Oct. 28, the day after completing his purchase, Musk tweeted, “Twitter will be forming a content moderation council with widely diverse viewpoints. No major content decisions or account reinstatements will happen before that council convenes.”


While content moderation decisions aren’t changing quite yet, news broke over the weekend that Musk may start charging Twitter users to purchase or maintain verified accounts. This pay-to-play approach would change the iconic blue checkmark that appears on verified users’ profiles — currently sought after because it signals that the account holder is a noteworthy figure of some kind and therefore accountable for what they say — to a credential that is “not meaningful,” according to Vineet Arora, dean for medical education at the University of Chicago’s Pritzker School of Medicine. University of Washington biology professor Carl T. Bergstrom argued on Twitter that charging for verification would “dismantle the notion of expertise and create a free-for-all in which @FauciLied24’s voice is considered as valuable—or more so—than that of @MLipsitch,” a Harvard infectious disease epidemiologist.

On Monday evening, Bloomberg also reported that Twitter’s Trust and Safety team, which is in charge of enforcing Twitter’s content moderation policies, has had its access to internal tools dramatically curtailed in the aftermath of the company’s acquisition. Employees told Bloomberg that Musk had already pointed out some rules he wants the team to review, including the misinformation policy that covers false Covid-19 statements and the hateful conduct policy — specifically, a section specifying users can be penalized for “targeted misgendering or deadnaming of transgender individuals.”

How has Twitter dealt with misinformation in the past?

Musk has framed his decision to purchase Twitter as motivated by a desire to protect and expand free speech on the platform. But disinformation is at odds with free speech, creating “a landscape where free discourse can’t thrive,” said Jillian York, director for international freedom of expression at the Electronic Frontier Foundation. York agrees that censorship can be a problem on social-media platforms. “But that’s what Twitter’s done so well, is that they’ve introduced features that aren’t simply notice-and-take-down features.”

The misinformation experts STAT consulted said that Twitter has done a better job than most other social-media platforms in developing approaches to combating misinformation. In addition to warning labels for disinformation, the features York referred to include “fact checks” that add context to potentially misleading tweets and a prompt that asks users if they want to read an article before retweeting it. De-promoting offensive content but not asking the user to delete it, a practice controversially known as “shadowbanning,” is another way that Twitter has tried to balance free speech protections with concerns over hate speech and other sensitive issues. Musk voiced his support for this solution in the past, saying in an interview with the Financial Times that tweets that are “destructive to the world” could be “made invisible or have very limited traction.”

“I’m a longtime Twitter user,” said Arora, who joined Twitter in 2008. “I remember when it was impossible to report something for misinformation. So the fact that you can now report for dis- or misinformation is an advance. Is it enough? No, but they put their toe in the water and I have seen content labeled as misinformation, which is helpful.”

Twitter has quite detailed guidelines for how it deals with Covid-19 misinformation specifically, including applying labels to misleading information, lowering the visibility of such tweets, and requiring users to delete tweets that make demonstrably false claims, such as the idea that the pandemic is a hoax. However, these policies seem to be applied unevenly. One University of Oxford study showed that 59% of Covid-19 claims on Twitter that fact-checkers had determined to be false still did not carry misinformation labels. Conversely, in 2022, several scientists and medical professionals sharing accurate information about Covid-19 had their tweets mistakenly flagged and their accounts temporarily suspended.

That said, the amount of misinformation shared on Twitter in general is quite low compared to other platforms, said Neil Johnson, who studies the flow of online misinformation as a professor and head of the Dynamic Online Networks Lab at George Washington University.

Johnson said that fixing misinformation on one platform isn’t ultimately the solution, because it just pushes misinformation-sharing to other platforms where such behavior is more tolerated. His latest research shows that platforms like Twitter and Facebook instead become jumping-off points for links to misinformation-heavy platforms like Gab, Telegram, and VKontakte, an Eastern European site that’s similar to Facebook.

What will Musk do next?

Exactly how Musk plans to change Twitter is unclear, as he has reversed course on various statements already. The Washington Post reported that Musk said on Oct. 20 he was going to cut over 75% of Twitter’s 7,500 staff members. The day before the acquisition closed, Musk walked back that statement in a Twitter staff meeting, according to Bloomberg.

Musk has also said he is pro-free-speech and “against censorship that goes far beyond the law.” York, however, points out that Twitter has in the past resisted takedown demands from other countries that don’t comply with international human rights standards. “[Musk] doesn’t know the laws of every country, and I don’t think he’s really thinking about, for example, an LGBTQ user in Saudi Arabia, whose speech might be illegal there.”

It’s also unclear how Musk is going to set up the content moderation council he mentioned in his Oct. 28 tweet. Twitter already has a Trust and Safety Council made up of organizations from around the world that advise the company, but Twitter specifies that membership “doesn’t imply endorsement of any decisions [Twitter makes].”

Musk’s council might be more operationally similar to Facebook’s Oversight Board, whose adjudications on Facebook’s content decisions are “binding, meaning Facebook will have to implement them, unless doing so could violate the law,” according to its site. The board is made up of a variety of experts in areas like law, human rights, and technology.

In an interview with head of TED Chris Anderson earlier this year, Musk said that Twitter is the “de facto town square.” There’s a problem with that analogy, according to Vincent Hendricks, a professor of formal philosophy at the Center for Information and Bubble Studies at the University of Copenhagen.

“A pillar of the entire idea of the public spaces that no one person is able to decide what goes on in this place, nor there should be any peculiar interest that could decide who would have a voice, and who would not have a voice,” said Hendricks. Under Musk’s ownership, “we have a situation [where] public space is in private hands.” Kant and other Enlightenment thinkers, he said, would be rolling in their graves.

How health professionals are preparing for Twitter’s new era

While some Twitter users’ initial reaction to the Musk takeover was to jump ship to other platforms, health care professionals say it’s better to stay put.

“If everybody who is trying to put out evidence-based, science-based information leaves that platform before we see what exactly is going to happen, then we leave a void of factual information dissemination. We actually leave an opportunity for the spread of even more disinformation and misinformation,” said Shikha Jain, an associate professor of medicine and the director of communication strategies in medicine at the University of Illinois Chicago.

Jain, Arora, and several other Illinois physicians formed a group called IMPACT at the beginning of the pandemic in an effort to fight misinformation together. Group members jointly create clear health messages and communicate them at the same time across different platforms. The goal is to eliminate the confusion that can arise when many medical professionals share slightly different versions of the same information, which can be hard for social-media users to parse.

Arora and Jain said that IMPACT’s strategy is working, though it’s not always obvious from social media engagement.

“We worry so much about the naysayers; who’s going to come back and give us a troll comment. We forget about the bystanders — people that are watching this happen,” said Arora. Both she and Jain have received private messages thanking them for providing information others could show uncles or brothers or friends or for inspiring them to take a next step to protect their health.

Jain praised Twitter for prioritizing the verification of health care workers over the course of the pandemic, an effort that has not been without disparities, especially when it came to verifying Black female physicians. As a potential model for future ways that Twitter could improve public-health efforts, she pointed to YouTube, which last week announced that licensed health care professionals would be eligible for features meant to designate them as trustworthy. These features include panels telling viewers that the content they’re viewing is from an authoritative source, and inclusion in collections of health-related videos that YouTube calls “health content shelves.”

Even content from medical professionals can require moderation when it comes to people who are “abusing their licensing credentials” to “willfully spread false information,” Arora said. But another way to fight misinformation on Twitter, should it worsen, is to give health care professionals more training in science communication so the platform offers even more accurate information on health and medicine.

“Physicians back in the ’50s and ’60s, part of our job was to educate our community, which was our churches, our neighbors, our patients, our families,” said Jain. “Now, as physicians, our job is to educate our community — but our community is more global, and Twitter allows us to do that.”

Correction: An earlier version of this story misstated Chris Anderson’s title.

