
UofSC faculty work to counteract the emotional power of misinformation

Improving media literacy can lead to more discerning news consumption



“Don't believe the news.”

“I don’t trust the media.”

“I’m not going to look up anything. I have my opinion which is just as valid as yours.”

“How is fact checking possible within 48 hours of an incident? We believe what we believe.”

These are actual comments posted on social media in the days following the Jan. 6 insurrection in which supporters of President Donald Trump breached the U.S. Capitol. They illustrate the gargantuan task of combatting mis- and disinformation that is circulated, consumed and believed by the public.

One of the challenges is that people connect and react emotionally to questionable information that aligns with their opinions.

“As educators we can give people all of the skills, tips and techniques in the world, but it's going to come down to how they feel about the person or the source. That is going to have a really huge influence on whether or not they believe that information,” says professor Nicole Cooke, the Augusta Baker Chair in the School of Information Science.

Cooke and her colleagues in the College of Information and Communications have conducted research to improve media literacy, teaching people how to evaluate the quality of sources and how to recognize the hallmarks of misinformation.

 

Digging deeper: Critical news consumption

A huge part of the threat and power of “fake news” and other mis- and disinformation is that people tend to believe what they want to believe, Cooke says. That emotional barrier is very hard to overcome, so the challenge becomes showing people how to be more critical news consumers.

“We've seen a lot of partisanship and politicization of sources,” she says. “I might believe The New York Times is a credible source, and someone else may say it’s biased and won’t read it. I don't necessarily care where you get your information, but I do want you to have tips and techniques to transcend the source.”

While there are plugins and tools that can be used to help identify “fake news,” Cooke says she wants to help news consumers build that capacity for themselves. For example, she recommends a technique called triangulation: Have you seen the information in at least three different places? She also suggests maintaining a healthy dose of skepticism. Pay attention if your instincts are telling you something looks or sounds questionable. Check out the credentials of the person writing or sharing. Don’t get caught up in the moment; dig a little deeper to verify information before sharing it.
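For readers who want to see the triangulation rule made concrete, here is a minimal sketch in Python. It is purely illustrative, not a tool Cooke’s team provides; gathering the headlines from each outlet is left to the reader, and the three-source threshold simply mirrors her rule of thumb.

```python
# Illustrative sketch of the triangulation rule described above:
# treat a claim as provisionally credible only after it appears in
# at least three independent outlets.

def triangulate(claim: str, outlets: dict[str, list[str]], threshold: int = 3) -> bool:
    """Return True if `claim` shows up in headlines from at least
    `threshold` different outlets (how you collect the headlines
    is up to you)."""
    confirming = [
        name for name, headlines in outlets.items()
        if any(claim.lower() in h.lower() for h in headlines)
    ]
    return len(confirming) >= threshold

# Example: only two of three outlets carry the claim, so it is not
# yet triangulated.
sources = {
    "Outlet A": ["City approves new stadium deal"],
    "Outlet B": ["New stadium deal approved by council"],
    "Outlet C": ["Local weather: rain expected this weekend"],
}
print(triangulate("stadium deal", sources))  # False
```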

Cooke also recommends getting outside our filter bubbles and echo chambers. One way to recognize them is to keep a news consumption log for 24 hours. Note the sources you rely on, the times you’re reading or watching and the headlines that draw you in. As you see patterns (for example, that you get news only from social media or from podcasts), you might want to start reading a newspaper or watching a nightly newscast.

“The purpose is not necessarily to judge a source as ‘good’ or ‘bad’; it's just a tool for taking a deeper dive into our habits to become more cognizant of our news environment,” Cooke says.
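For the spreadsheet-inclined, that 24-hour log is easy to structure. The sketch below is one way to do it; the fields simply mirror what Cooke suggests noting, and the file name and CSV format are our own invention.

```python
# A minimal structure for the 24-hour news consumption log described
# above. The CSV layout and field names are just one possible format.
import csv
import os
from datetime import datetime

LOG_PATH = "news_log.csv"
FIELDS = ["timestamp", "source", "medium", "headline"]

def log_entry(source: str, medium: str, headline: str) -> None:
    """Append one consumption event, writing a header row on first use."""
    is_new = not os.path.exists(LOG_PATH)
    with open(LOG_PATH, "a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(FIELDS)
        writer.writerow([datetime.now().isoformat(timespec="minutes"),
                         source, medium, headline])

log_entry("Twitter", "social media", "Thread on the new city budget")
# After 24 hours, scan the 'medium' column: if every row says
# "social media," that is exactly the pattern Cooke suggests balancing.
```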

 

When seeing is not believing

Technology in the form of deepfakes presents a different and perhaps more challenging disinformation dilemma for not only the public but for journalists as well.

Deepfakes are videos synthesized or manipulated with artificial intelligence, often convincing enough to pass for authentic footage.

“We’ve heard the phrase ‘seeing is believing.’ Deepfakes turn that upside down because we could see things that aren't actually real,” says Andrea Hickerson, director of the School of Journalism and Mass Communications. “As deepfakes continue to get more sophisticated, the challenge in detecting them grows.”

Hickerson is working with Matt Wright and John Sohrawardi, researchers at the Rochester Institute of Technology in New York, to build cloud-based software that helps journalists ferret out deepfakes. They are concentrating on journalists because journalists are seen as important arbiters of truth and credibility and because their work reaches a wide audience.

DeFake lets journalists paste a video link into a web-based tool and receive a score indicating the likelihood that the video has been faked. The research team has been working in newsrooms to see how journalists incorporate the software into their news-gathering.
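That paste-a-link, get-a-score workflow maps onto a simple request-and-response pattern. The sketch below shows roughly what a newsroom script against such a service might look like; the endpoint URL, request fields and response format are invented for illustration and are not the actual DeFake interface.

```python
# Hypothetical client for a deepfake-scoring service with a
# DeFake-style workflow: submit a video link, get back a score.
# The endpoint and JSON fields are placeholders, NOT the real API.
import requests

DETECTOR_URL = "https://example.org/api/v1/score"  # placeholder endpoint

def score_video(video_url: str) -> float:
    """Return the service's estimated probability that the video is fake."""
    resp = requests.post(DETECTOR_URL, json={"video_url": video_url}, timeout=60)
    resp.raise_for_status()
    return resp.json()["fake_probability"]  # assumed response field

if __name__ == "__main__":
    score = score_video("https://example.com/clip-of-the-mayor.mp4")
    if score > 0.8:
        print(f"High likelihood of manipulation ({score:.0%}); verify before publishing.")
    else:
        print(f"Score {score:.0%}; standard sourcing checks still apply.")
```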

“Some local reporters may think deepfakes are an issue that doesn’t apply to them,” Hickerson says. “But what if it’s the mayor or a local hospital or financial executive who is being misrepresented? Deepfakes can appear in any context, and there are a lot of local implications. Everyone should be on guard.”

While deepfakes are often imperceptible to the human eye, Hickerson says the same questions used to assess the veracity of any information can still be useful in deciding whether to believe what you’re seeing: Where is this video coming from? Who is sharing it? Why are they sharing it? What are the implications of it? Is there an alternative explanation for what the person is saying? What are reputable reporters saying about it?

“Deepfakes are created, obviously, to influence public opinion and perception. False information can diminish our decision-making capacity, manipulate emotions, beliefs, opinions and maybe even actions,” Hickerson says. “That’s not good for democracy or community.”

 

Bots: Benign or malevolent

As with deepfakes, the strategy behind bots is to influence public opinion. This can be done with good intentions — sharing links to reputable news sources or health information, for example — or with a more malevolent agenda such as trying to sow division or influence an election.

Bots are developed with computer programming to produce content and emulate human behavior. They may use complex algorithms that analyze the content of a post and tailor a response, or they may use an algorithm that simply looks for a specific word in a post and then generates a standard reply. Bots also may automatically retweet all the posts from a celebrity or politician.
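The simplest pattern described here, matching a keyword and firing off a standard reply, takes only a few lines of code, which is one reason such bots are so plentiful. The toy sketch below illustrates the idea with a benign, link-sharing bot; the trigger words and replies are invented, and a real bot would connect to a platform API rather than reading plain strings.

```python
# Toy illustration of the simplest bot pattern described above:
# scan each incoming post for a trigger word and emit a canned reply.
# Trigger words and replies are invented for demonstration.

CANNED_REPLIES = {
    "vaccine": "For vaccine facts, see the CDC: https://www.cdc.gov/vaccines",
    "covid": "Official COVID-19 guidance: https://www.who.int",
}

def bot_reply(post: str) -> str | None:
    """Return a canned reply if the post contains a trigger word."""
    text = post.lower()
    for trigger, reply in CANNED_REPLIES.items():
        if trigger in text:
            return reply
    return None  # no trigger word, so the bot stays silent

for post in ["Just booked my vaccine appointment!", "Nice weather today"]:
    print(post, "->", bot_reply(post))
```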

Human trolls who do nothing from morning to night but share misinformation and disinformation also fall into the bot category, says Amir Karami, a professor in the School of Information Science.

Karami has done research on bot activity following the 2018 mass shooting at a high school in Parkland, Florida, and more recently has analyzed bot-produced tweets related to the COVID-19 vaccine, the opioid crisis, abortion, LGBTQ issues, trust in science and government, and the effect of dis- and misinformation on mental health. The more harmful of these tweets use strategies such as baiting and spreading conspiracy theories to elicit emotional responses.

“People are aware of bots, but they don’t understand them or know how to identify them,” Karami says. “If you don’t understand what a bot is, you can’t understand the impact of sharing their disinformation.”

Tipoffs that an account is a bot include a large gap between the number of followers and the number followed; no profile image, or a suspicious one such as an animation (to check whether a real photo is authentic, run a reverse image search); and unusual activity, such as hundreds of tweets per day or accounts that only retweet. For public figures’ accounts, check for the blue verified badge. Karami also recommends Botometer, an online tool that estimates the probability that an account is automated.
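Read together, these tipoffs amount to a rough scoring heuristic. The sketch below turns them into code purely for illustration; the numeric thresholds are our own guesses, not validated research values, and a serious check should lean on a tool like Botometer instead.

```python
# Illustrative heuristic built from the tipoffs listed above.
# Thresholds are invented for demonstration, not research-validated.
from dataclasses import dataclass

@dataclass
class Account:
    followers: int
    following: int
    has_profile_image: bool
    tweets_per_day: float
    retweet_ratio: float  # fraction of activity that is retweets

def bot_suspicion_score(a: Account) -> int:
    """Count how many of the red flags above the account trips (0-4)."""
    flags = 0
    if a.following > 10 * max(a.followers, 1):  # lopsided follow ratio
        flags += 1
    if not a.has_profile_image:                 # missing or fake avatar
        flags += 1
    if a.tweets_per_day > 100:                  # implausible volume
        flags += 1
    if a.retweet_ratio > 0.95:                  # retweets only
        flags += 1
    return flags

suspect = Account(followers=12, following=4800, has_profile_image=False,
                  tweets_per_day=250, retweet_ratio=0.99)
print(bot_suspicion_score(suspect))  # 4 of 4 flags: worth a Botometer check
```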

 

But everybody’s talking about it

Sometimes the sheer volume of social media posts from real people can create misinformation.

Brooke McKeever, associate dean for research in the College of Information and Communications, and Robert McKeever, a professor in the School of Journalism and Mass Communications, have studied communication and dis- and misinformation in relation to vaccinations. Their research shows that mothers who do not support or have reservations about childhood vaccinations are more likely to communicate about the issue both on social media and in person.

“This outsized social media presence could give the impression of a false consensus, and people could start to believe that anti-vaccine sentiment or vaccine hesitancy is the norm when it’s really not,” Brooke McKeever says.

In addition, research the McKeevers conducted with collaborators found that widespread social media posts about the myth that vaccines are linked to autism drove some mainstream media coverage, lending further credence to the misinformation.

To counteract these misleading impressions, McKeever says it is important for people who believe vaccines are safe and effective to speak out, to acknowledge concerns and answer questions by sharing trusted and legitimate sources, and to do so civilly — don’t pounce.

“A lot of us have considered childhood vaccinations standard and never found the need to say, ‘I just got my kids vaccinated,’” McKeever says. “For those who are strongly anti-vaccine or adhere to conspiracy theories, you're probably not going to change their mind, but there's a whole swath of people who are somewhere in the middle.”

Given the unprecedented speed of the COVID-19 vaccine’s development, sharing accurate information is imperative for people who are nervous and have questions, McKeever says. But perhaps even more influential will be for citizens to see leaders, friends and neighbors in their own communities getting vaccinated.

“The more that people get correct information from trusted local sources and experts — even in local Facebook groups dedicated to fact-sharing — it will become a layering effect,” she says, echoing her colleagues’ emphasis on encouraging the public to become more discerning in their media consumption.

 

Combatting misinformation with education

The College of Information and Communications has a role in supporting democracy and should lead the discussion about mis- and disinformation on campus and across the state, faculty members say.

“We have a really important obligation to serve the public of our state by taking on the role of public editor or ombudsperson for news and information that reaches citizens in South Carolina,” says Hickerson.

Believing misinformation and disinformation can hinder how people make decisions and can lead to wrong — or even harmful — conclusions.

“We want to provide guidance so people can use information to their advantage and not be led astray,” Cooke says. “You can still make the bad decision if you want to, but at least you would have all of the possibilities at your disposal.”

