Wrong information isn’t just confusing — it can be dangerous. The rise in information sources can mean it’s harder for the average person to decide where to get reliable information. And then there are intentional campaigns to mislead, within the U.S. and outside of it.
Carnegie Mellon University professor Kathleen Carley leads the Center for Informed Democracy and Social Cybersecurity. Her team helps ferret out disinformation widely prevalent on Twitter, Instagram, Facebook, and websites. Naturally, their work ramped up in the spring after the pandemic took hold in the U.S.
“It’s felt like a new crisis every day. So it was like every day, someone calls practically and says, ‘Hey, can you follow this? Can you tell me what’s happening here?’ It’s like, it’s just, it’s just crazy,” Carley said.
The team tracked claims related to voter fraud — the findings were not yet available at press time — and, most heavily, the coronavirus. Earlier in the year they tracked the online campaign to reopen America, which targeted Michigan, North Carolina, and Pennsylvania.
What they found suggests that bots were used to try to make the push to reopen appear bigger. Bots are operated on social media networks to automatically generate messages, promote ideas, and act as followers to bolster a viewpoint or movement.
“They tend to have very consistent messaging. Whereas the people who (didn’t support) reopening and down that side, they were not very well coordinated. They were all arguing for totally different reasons. They did not use common hashtags. It was much more diffused,” Carley said. “So it was very much a coordinated campaign in which bots were used to augment and push the message and spread it and make it seem even bigger than it probably was.”
What is happening to people, that you can provide information that is verifiably true but they will cling to other information that is verifiably not true?
This is just kind of built into our cognition and the way our brains work. We have our own internal value system and goals. And if information seems to confirm those values and beliefs, we’re more likely to believe it. And if it seems to disconfirm it, we’re less likely to believe it. When we’re emotionally stressed, when we’re operating more emotionally than rationally, we’re just going to take the easy route, which is to keep going with things that confirm our current beliefs.
How can we think about having a society where people know, for lack of a better term, how to be media literate, how to be critical thinkers?
I don’t think there’s a simple answer, but there’s a variety of things that will nudge things in the right direction.
One is starting to teach critical thinking skills and logic diagramming, even to very young students, like in junior high and high school. To get people to question, even in grade school. I think another part of it simply is reducing the level of highly emotional language in a lot of posts. The things that Twitter was trying, like reducing hate speech.
It’s not going to solve it, but it’s helpful because it kind of reduces the level of emotionality and lets people calm down. The third thing is getting people to find common ground where you can agree on something and build out from that. Build common ground and then build a shared view of things.
There’s always been a level of hyperbole among elected officials, but what has it done to the public to have political leaders promoting one reality when you have another reality, for example, the state election results?
It’s a sign of, and it’s exacerbating, polarization. The second thing that it’s doing is it’s reducing trust in science in general. And it’s also giving an increased freedom for individuals in their own life to speak in terms of hyperbole, to say, “I can say outrageous things and not be held accountable for it.”
It used to be that people who spoke in conspiratorial tones were considered outcasts, or at least looked at askance, but now it feels like it’s quite mainstream to do so. Why is that?
This kind of outrageous talk kind of goes in cycles, 50-, 60-, 70-year cycles. And polarization goes in cycles. Also, social media is allowing people to find others who share their similar beliefs and form clubs without having to be physically near each other.
And that’s coupled with the way our brains naturally work. Our brains naturally go into this kind of us-them mentality. We naturally try to simplify the world into categories. And then other things, like bots, can be used to make groups appear larger than they really are.
What role do social media companies have right now? What does the corporate responsibility piece look like in terms of trying not to help propagate misleading information?
I think one of their responsibilities is to coordinate with each other, because if I’m a social media company and I ban a particular group, but all the other ones let it back on, and that group is saying “drink bleach,” you know, the message still gets out there. So I think coordination is something they need to think about.
And I don’t think that they should necessarily get in the habit of being the ones who decide and police on all things. But if, through working with policy makers, they came up with a common standard — just like we have standards for electricity, we have standards for plumbing — you can have a standard for content.

Like, you will not say things that will cause death, for example. If there are standards on the extremes, that would be good.
And then finally, I think that they need to be more open about their algorithms, make them more transparent so that people understand what the algorithms are doing. They shouldn’t just use these algorithms and experiment on them, and therefore experiment on the public. They should be open about what their algorithms are.
Since the spring, has there been more disinformation about COVID-19 or has it kind of become better, do you think?
There’s been more disinformation now about the vaccine: what it will or won’t do, when it will or won’t be delivered, who is getting it, as well as whether the vaccine itself causes death or sterility.
The Internet is fairly young. Is having standards and guidelines, as other industries do, maybe the direction we’re headed?
Here’s something to consider: Imagine that we had a new group like the IRS, but instead of auditing finances, it was the Internal Information Service, and its job would be to audit social media companies to make sure that their algorithms are doing what they claim they do, and to make sure that nefarious information wasn’t getting into their operations.
What else do you think is useful for people to know as we live in an age where disinformation is prevalent?
It’s very important that companies start thinking about investing in having their own social cybersecurity teams who will do two things. One, they will monitor their corporate brand to make sure it’s not getting polluted by disinformation, from attacks from the outside, but also to monitor what they’re sending out to make sure that it is truthful and written in clear, easy-to-understand language.
Also, it’s important for people to understand that with any kind of crisis situation all the facts are not known at the beginning. So it’s not that people are lying, it’s that they really don’t know.
This interview was edited for clarity and brevity.