Disinformation and misinformation: why the distinction matters

By Jacob Hanna

November 18, 2020

If you’ve been on the internet for any amount of time, you have likely seen something, be it a news article, a social media post, or anything else, that seems sensationalized. Let’s imagine a self-described news agency that publishes an article about a presidential candidate making a policy promise, but intentionally alters what the candidate said to such a degree that the article no longer resembles the candidate’s initial statement; we’d call that fake news. Let’s also say that a family member, unaware that the article is fake, shares it on Facebook, and it goes viral. These are examples of disinformation and misinformation, respectively. While the ability to distinguish between the two has always been important, their power to damage social cohesion and even cause billions of dollars in economic losses makes it especially so.

In this post, we will:
  • distinguish between disinformation and misinformation
  • explain the social and economic harms of both problems
  • explore how firms are tackling these issues
  • demonstrate how humanID attempts to solve the disinformation/misinformation problem
 

The difference between disinformation and misinformation

We may sometimes hear the words “disinformation” and “misinformation” used interchangeably, which is unfortunate, because the interchangeable use muddies the role of intent. Let’s have a look at the prefixes of each word. The prefix dis- in disinformation is defined by Merriam-Webster as “do the opposite of; deprive of; to exclude or expel from”. By contrast, the prefix mis- in misinformation is defined as “badly, wrongly; unfavorably; in a suspicious manner”. If we are to understand disinformation and misinformation, we must understand them through the lens of intent: disinformation requires intent, while misinformation might not.

Let’s go back to our example of the presidential candidate, and assume that I’m the author of that article. I might have an ax to grind against this candidate: I might not like their policies, I might not like them as a person, or my company’s business model might depend on sensationalizing news that is otherwise fairly mundane. So, to keep my job, I might publish an article claiming, in hysterics, that this presidential candidate did something absolutely atrocious. I could claim that they had influence in a drug trafficking ring, or cheated on their spouse, or whatever salacious thing I can come up with that sounds plausible. This is disinformation because it is written with intent: I know that it might not be based in fact, in whole or in part, but I put it out there anyway.

Now, let’s flip the script and say that I am the family member who comes across this article on my social media feed after it’s been proliferating there for a while. I may not have enough information to know that the disinformation within the article is just that: disinformation. From the article, it certainly sounds as though this candidate truly did this bad thing, and the fact that it’s been shared so widely makes it seem more legitimate than it might actually be. It’s even more likely than not (six times out of ten, in fact) that I didn’t read beyond the headline, which may sound legitimate enough. So, I share this article on my page. This is not necessarily disinformation, because the intent is unclear; I may have genuinely believed that the contents of the article were true. We would instead refer to this phenomenon as misinformation, because we can’t determine whether I shared it to intentionally mislead others.

The costs of the problems

I specifically bring up the example of a presidential scandal because it isn’t just a hypothetical; it’s a reality that we’ve all seen before and will see again. One of the most recent (and most pernicious) examples of this kind of disinformation is the Pizzagate conspiracy theory, built on the false claim that Hillary Clinton’s 2016 presidential campaign, and/or Clinton herself, was involved in operating a child trafficking ring out of the basement of a Washington, D.C. pizzeria. The claim spread through politically far-right circles across the internet, such as the /pol/ discussion board on 4chan and the r/The_Donald subreddit. What followed was a campaign of harassment against the pizzeria’s employees and owners, and even an incident in which a man, spurred on specifically by the conspiracy theory, fired an assault rifle inside the establishment (injuring no one).

[GIF: True Detective season 1, the protagonist crushes an aluminum can and says “Time is a flat circle.” Source: https://i.gifer.com/74vs.mp4]

One can imagine the human cost and trauma of surviving such harassment, let alone of being one of the employees on site during the shooting. But disinformation and misinformation also carry an economic cost. A report by the cybersecurity firm CHEQ and the University of Baltimore found that dis- and misinformation in the form of fake news cause $78 billion in economic losses annually, half of which comes from losses in stock market value.

There is also the fact that dis- and misinformation undermine our ability to determine what is fact and what is fiction. The more dis-/misinformation gets disseminated by our friends and peers, the harder it becomes to detach ourselves from those connections and examine the validity of, say, the article a family member shares. Psychologist and Nobel laureate Daniel Kahneman put it as such: “A reliable way to make people believe in falsehoods is frequent repetition, because familiarity is not easily distinguished from truth. Authoritarian institutions and marketers have always known this fact.” It seems, though, that people who share misinformation, intentionally or not, are doing that dirty work for the institutions.

Real solutions, or tinkering at the margins?

There have been efforts by social media companies to combat the spread of disinformation and misinformation, particularly during the 2020 U.S. election cycle and the COVID-19 pandemic. Twitter, for example, has taken steps to label tweets containing misleading or false information about the coronavirus as such, and to provide accurate information from public health authorities under tweets that mention the virus. Sound familiar? It should: one of the people whose tweets have been slapped with such warnings is the 45th President of the United States, Donald Trump. Many of his tweets, not just on COVID-19 but also regarding unfounded allegations of voter fraud in the 2020 presidential election, have been labelled as misleading.
[Image: A quote retweet from President Trump’s Twitter account marked with a label stating that it contains disinformation about the presidential election. Source: Twitter]
While these solutions may sound good on paper, they do not address a critical way that disinformation and fake news spread: bot networks, large numbers of automated accounts that push disinformation in a coordinated fashion. Researchers at Indiana University have found that twenty to thirty percent of the disinformation content about COVID-19 on Twitter was spread by these bot networks. Couple that with the phenomenon of buying followers on social media, and you have a problem that isn’t going to go away just by labelling a tweet.
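What does “coordinated” amplification actually look like? Here is a minimal, illustrative sketch in Python of one weak signal such research looks for: many distinct accounts posting near-identical text within a tight time window. The data layout (a `posts` list of dicts with `account`, `text`, and `timestamp` fields) is a hypothetical assumption for illustration; this is not how Twitter or the Indiana University researchers actually detect bots, and real systems combine far richer behavioral and network signals.

```python
from collections import defaultdict
from datetime import timedelta

POST_WINDOW = timedelta(minutes=10)  # how tightly clustered the posts must be
MIN_ACCOUNTS = 5                     # distinct accounts needed to flag a cluster

def find_coordinated_clusters(posts):
    """Group posts by normalized text and flag suspiciously synchronized groups.

    posts: list of dicts with 'account' (str), 'text' (str), and
    'timestamp' (datetime) keys -- a hypothetical schema for illustration.
    """
    by_text = defaultdict(list)
    for post in posts:
        # Normalize whitespace and case so trivial edits don't split a cluster.
        key = " ".join(post["text"].lower().split())
        by_text[key].append(post)

    clusters = []
    for text, group in by_text.items():
        group.sort(key=lambda p: p["timestamp"])
        accounts = {p["account"] for p in group}
        span = group[-1]["timestamp"] - group[0]["timestamp"]
        # Many distinct accounts posting identical text almost simultaneously
        # is one (weak) signal of coordinated, possibly automated, behavior.
        if len(accounts) >= MIN_ACCOUNTS and span <= POST_WINDOW:
            clusters.append({"text": text, "accounts": sorted(accounts)})
    return clusters
```

Flagging on a single signal like this would of course produce false positives (think of a viral copy-paste meme), which is part of why detection and labelling alone struggle to keep up.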

How humanID fights back against disinformation

This is where humanID comes in. humanID is a non-profit, open-source online identity project with the fight against disinformation and bot networks baked in. By allowing only one account per device and integrating a country-code-level filter, the humanID login protects against bad actors and bot networks simultaneously. It also has the added benefit of protecting against data breaches by anonymizing user data both when it receives a login and when the login is passed on to the app using it; humanID stores no user data on its end.
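To make the idea concrete, here is a conceptual sketch of what “one irreversible, anonymous identifier per device, with no raw user data stored” could look like. This is not humanID’s actual code or API; names such as `SERVER_SECRET`, `anonymous_id`, and `register_device`, and the use of a phone number as the device credential, are all hypothetical.

```python
import hashlib
import hmac
import os
from typing import Optional

# Conceptual sketch of "one anonymous account per device" -- NOT humanID's
# actual implementation. All names and parameters here are hypothetical.

SERVER_SECRET = os.urandom(32)  # held server-side; never stored with user data
_registered = set()             # holds only irreversible hashes, no raw data

def anonymous_id(phone_number: str, app_id: str) -> str:
    """Derive a per-app identifier that can't be reversed to the phone number."""
    message = f"{phone_number}:{app_id}".encode()
    return hmac.new(SERVER_SECRET, message, hashlib.sha256).hexdigest()

def register_device(phone_number: str, app_id: str) -> Optional[str]:
    """Allow exactly one account per (device, app) pair; return its anonymous ID."""
    uid = anonymous_id(phone_number, app_id)
    if uid in _registered:
        return None  # a second account from the same device is rejected
    _registered.add(uid)
    return uid
```

Because only the digest is ever kept, a breach of the identifier store reveals nothing about the underlying phone numbers, which mirrors the anonymization benefit described above. humanID is committed to the fight against dis- and misinformation. Check out how to integrate it into your platform today!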