Human Voices, Episode 2: Bot Networks and the Spread of Disinformation, ft. Doctor Kathleen Carley
Hosted By Bastian Purrer
October 21, 2020
Welcome back to Human Voices, humanID’s podcast on cybersecurity, privacy, and news in tech! This episode features Doctor Kathleen Carley of the Institute for Software Research within the School of Computer Science at Carnegie Mellon University, where she directs the Center for Computational Analysis of Social and Organizational Systems (CASOS). Recently, Carley lent her expertise to NPR on the spread of disinformation and divisive material relating to COVID-19 in the article “Researchers: Nearly Half Of Accounts Tweeting About Coronavirus Are Likely Bots”.
You can listen along to Human Voices right here or on Spotify, Apple Podcasts, Anchor, or ListenNotes! The teaser below has been edited for clarity and brevity by Ariana Garcia.
humanID: Can you give a brief breakdown of what your group studies?
Kathleen Carley: We use what’s called network analysis, studying who talks to whom and how ideas connect, and we combine it with simulation or agent-based modeling, machine learning, text mining, and text analysis techniques to model complex social problems. We’ve looked at terrorism and US elections in the past. Now we’re looking at COVID-19 through social media. Depending on how you look at it from a machine learning perspective, and what level of precision you’re willing to accept, we find that between 25% and 49% of the accounts tweeting about it are bots.
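To give a concrete sense of the “who talks to whom” idea behind network analysis, here is a minimal sketch using the networkx library. It is not CASOS’s tooling, and the accounts and interactions below are hypothetical; it simply builds a directed interaction graph and asks who is talked about most.

```python
# Illustrative sketch only, not CASOS's actual tooling. The "who talks to
# whom" network: a directed graph where an edge u -> v means u mentioned or
# retweeted v. The interaction records are made up for the example.
import networkx as nx

interactions = [
    ("alice", "newsbot"), ("bob", "newsbot"), ("carol", "newsbot"),
    ("newsbot", "who_updates"), ("dave", "alice"), ("carol", "bob"),
]

g = nx.DiGraph()
g.add_edges_from(interactions)

# In-degree centrality is a crude stand-in for "who is being talked about most"
centrality = nx.in_degree_centrality(g)
for account, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{account}: {score:.2f}")
```

Real analyses layer agent-based simulation, machine learning, and text mining on top of graphs like this; the sketch only shows the network backbone.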
humanID: You and your lab have also done some work creating a tool to model the spread of disease. Can you draw any parallels between the way that disease spreads and the way that misinformation spreads?
Carley: We know that information and disease do not spread alike at all. People who try to use disease models for information miss some of the important things about information, and vice versa. It has to do with the flow properties of networks. When information spreads verbally, you can tell one person or many people at once. In that way, it’s sort of like a disease. But I retain that information; I can continue to tell others, and there’s no blackout period when I’m not infectious, meaning I never have to stop spreading that information. With disease, I’m only able to infect others for a certain time period, and only through the right vector.
Information can spread faster than diseases in today’s world, since we have social media and online ways of interacting. Diseases aren’t transmitted just by reading an internet post.
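To make the contrast concrete, here is a toy simulation; it is not her group’s model, and the contact network, transmission probability, and infectious window are invented. It compares a disease-style process, where a spreader stops being infectious after a couple of steps, with an information-style process, where whoever holds the information keeps passing it on.

```python
# Toy comparison, not Carley's model: disease-style spread (finite infectious
# window) versus information-style spread (no recovery), on the same made-up
# contact network with an invented transmission probability.
import random

random.seed(1)
N = 200                                                        # people
neighbors = {i: random.sample(range(N), 6) for i in range(N)}  # toy contacts

def spread(infectious_period):
    """infectious_period=None means 'information': spreaders never stop."""
    state = {i: 0 for i in range(N)}   # 0 = never exposed, >0 = steps since exposure
    state[0] = 1                       # patient zero / first person told
    for _ in range(30):                # 30 rounds of contact
        for person, t in list(state.items()):
            if t == 0:
                continue
            still_active = infectious_period is None or t <= infectious_period
            if still_active:
                for contact in neighbors[person]:
                    if state[contact] == 0 and random.random() < 0.05:
                        state[contact] = 1
            state[person] = t + 1
    return sum(1 for t in state.values() if t > 0)

print("reached by disease (2-step infectious window):", spread(2))
print("reached by information (no recovery):", spread(None))
```

With the same network and the same per-contact transmission probability, the process with no recovery reaches far more people, which is the flow-property difference Carley points to.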
humanID: What ideas might you have for limiting the spread of fake news and disinformation?
Carley: One of the findings from our model is that if you have a multi-platform environment, like we do, and you simply remove a piece of disinformation from one site, that piece will re-emerge elsewhere. The information is not confined to a single tweet: it’s referenced by other Twitter posts, talked about by the people sharing it, looped onto Facebook and then Reddit, and linked out to a YouTube video. Because of that movement, if Twitter decides to ban a particular post, the same piece of information can get back onto Twitter because it’s coming from a different site. One of the most promising strategies to stop the spread of disinformation would simply be cross-platform coordination. The other thing I want to say is that expecting social media platforms to find and remove all disinformation is wishful thinking, because disinformation takes many forms. It’s not just a matter of fact checking.
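Her re-emergence point can also be sketched with a toy model; the platforms, reseed probability, and number of rounds below are made up and are not her group’s simulation. A piece of content that stays live anywhere can reseed a platform that removed it, while coordinated removal keeps it down everywhere.

```python
# Toy sketch of cross-platform re-emergence, with invented numbers.
import random

random.seed(7)
PLATFORMS = ["twitter", "facebook", "reddit", "youtube"]

def simulate(removed_on, reseed_prob=0.6, rounds=10):
    """Track whether one piece of disinformation is live on each platform."""
    live = {p: True for p in PLATFORMS}
    for p in removed_on:
        live[p] = False                  # moderation takes it down
    for _ in range(rounds):
        for p in PLATFORMS:
            if live[p]:
                continue
            # Cross-links from any platform where it is still up can reseed it
            if any(live[q] for q in PLATFORMS if q != p) and random.random() < reseed_prob:
                live[p] = True
    return live

print("remove on Twitter only:", simulate(removed_on=["twitter"]))
print("coordinated removal:", simulate(removed_on=PLATFORMS))
```

In the single-platform case the removed post almost always comes back within a few rounds; when every platform removes it, there is nothing left to reseed from.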
humanID: How might we be able to bring down that bot population you mentioned? How do you think we could prevent the bots from reappearing?
Carley: First off, you have to ask: do you really want to bring them down? Keep in mind that most bots are not bad. They’re not necessarily the ones spreading disinformation. There are bots out there that retweet notices from the CDC or the World Health Organization. Just because it’s a bot doesn’t mean it’s bad. The goal is not to get rid of all bots. Secondly, there are people behind the bots that are spreading disinformation. If you delete the bots, that doesn’t stop the people from coming up with the next scheme; all you’ve done is delay the problem. Removing bots categorically is not necessarily a good or effective idea. We need to think more about the fundamental root causes of what’s going on.
humanID: Do you think there is any way to preemptively distinguish groups or user accounts that are attempting to spread mis- or disinformation from groups that are trying to spread helpful information?
Carley: Yes, I think that is possible. It’s going to take a more concerted and collaborative effort across the platforms and the development of new technologies that are good at identifying coordinated efforts.
I think we have to enact more laws and policies that are enforceable, with very clear definitions of what we mean by ‘disinformation’, and so on. We can’t allow a policymaker to shut down opponents simply by claiming that their speech is disinformation. That would be absolutely horrible for Americans and for democratic values.
humanID: Is there anything you are working on right now that you would want our listeners to be aware of, in terms of your research?
Carley: Right now we’re working on what we call ‘troll and cyborg detectors’. Cyborgs are accounts that are part human and part bot; that’s one of the ways this problem is mutating and evolving. Trolls are individuals who operate under a fake persona to disrupt groups by using hate speech and identity bashing. We’re working on those things now because we’ve seen these bots playing around with information in the context of COVID-19. It’s a bad trend.
humanID: I think the goal of humanID, other than protecting privacy, is holding people more accountable for their actions online.