Human Voices, Episode 7: Tech & Geopolitics: Why We Should Worry

Hosted By Bastian Purrer

February 10, 2021

Welcome back to Human Voices, humanID’s podcast talking cybersecurity, privacy, and news in tech! This week’s focus is the short- and long-term impact of big tech on domestic and world politics, and how the COVID-19 pandemic and the disinformation surrounding it are being used by authoritarian regimes to gain power. Justin Sherman is a non-resident fellow at the Atlantic Council’s Cyber Statecraft Initiative, where he focuses on the geopolitical impact of the internet, and a researcher at the Tech, Law, & Security Program at American University Washington College of Law.

You can listen along to Human Voices right here or on Spotify, Apple Podcasts, Anchor, or ListenNotes!

The teaser below has been edited for clarity and brevity.

humanID: And just this morning, or I think yesterday, Facebook announced that in the upcoming elections, which I think we’re all pretty nervous about, they will ban new ads in the seven days before the election. What do you make of this?

Sherman: Yeah, I’m rolling my eyes here. You know, Facebook has a habit of sort of doing these very bad things and then saying, “oh, sorry, we’re going to make this little change” that maybe sounds nice on paper to some people, but in fact isn’t doing much. I think this is one example. Facebook has, for a while now, actively decided to run political ads that they know are false, where other platforms have not done that. And so for Facebook, so close to the election, when lots of ads are already up, when lots of ads have been up for a while, to suddenly come out now and say, “oh, well, now we’re gonna make some tiny change to how our platform works” — and I say a tiny change, because there are a lot of other things that are not changing — I think, you know, we have to realize that that’s not really going to do much in the scheme of things relative to what they very much could have done a long time ago.

humanID: Yeah, it strikes me that four years after Facebook basically admitted that it played a role in the last election, we’re still talking about this, and there doesn’t seem to be much progress. What, over the last four years, over such a long timeframe, do you think Facebook should have done to better prepare us for this election?

Sherman: Well, I think Facebook could have done lots of things. Facebook could have been far more transparent with its data, to let researchers and folks on the Hill and things like this look into content moderation practices, content on the platform. Facebook, as I said, could have been fact-checking political ads and, you know, not running ads that are blatant disinformation. Facebook could be far more proactive against hate speech, against white supremacist content, which is rampant on the platform. There are lots of different things. But again, I think it speaks to a continued decision, again and again, by this company, to not act, to look away, to leave things up and let things stay on the platform, like you said, after it’s been so clear just how much its abuse can impact actual election outcomes.

humanID: Yeah, and I generally agree with that statement: basically, Facebook is really good at PR, at picking the right moment to distract attention. But having worked in Indonesia, which is a vibrant democracy with a very contested election cycle, I see Facebook’s point when they say that it’s really hard for them, at that scale and speed, to fact-check ads and effectively monitor and decide what is hate speech and what is not. Given that there are hundreds of other languages and political environments around the world that are not English, and they will not generate the headlines and international press that the Trump election did, well, what would you say about that? And what could Facebook maybe do for, like, a scalable and global approach?

Sherman: Yeah, no, that’s totally right. And that’s a good point, because we shouldn’t get black and white, and I don’t mean to be black and white in saying there’s a lot more they could do, right? It is complicated to do these things at scale, like you said, when you get into other countries and other languages. And I think that’s something we don’t talk about enough in the US, right? We get so fixated on the problems that, say, Facebook has in the US because the list is so long, you could fill a book. But we forget, like you said, so many challenges in other countries, whether it’s Myanmar and the horrific genocide that occurred there, and the role Facebook posts played in inciting hate, or, to your point, elections in contested democracies around the world where there’s lots of mis- and disinformation, or even something like the recent case in India, where just this morning, I think it was, they announced that they’re suspending the account of a politician for hate speech. So yeah, right, these are global platforms, the moderation decisions are complex. And so I think, to your point, that’s where we have to be really analytical and say, “okay, where is this actually difficult? Where are there nuances?” Like with doing this at scale, like with the problems of using an algorithm to try to catch hate speech or something like that, versus, “where’s this just PR? And where are they spinning and saying, ‘oh, we can’t do anything,’ when in fact they actually could?”

humanID: But so generally you do argue for Facebook deciding on the specifics, whether a certain post is hate speech or not, and you argue they should be stricter. When it comes to speech, getting legislation involved is extremely difficult. How do you think a functioning US government could help improve the situation?

Sherman: So, I don’t think a government coming in and telling the platform what to post is the right answer. I think that is straight out of, you know, what the Kremlin likes to do, for example. And, you know, maybe this is the transition into the report and the article I did. But in the US, right, for example, we have Section 230 of the Communications Decency Act, which means that platforms like Twitter, like Facebook, like Google Search, like Yelp — we can go on — can suspend users, delete posts, modify posts, flag posts, whenever they want, basically, without being liable for harmful content left up on that platform. With some exceptions, right? You can’t have child pornography, for example, for obvious reasons. But all to say, right, there is moderation that already occurs. So that’s the first thing. Facebook in particular, but also some of the other platforms, like to say, “we’re not arbiters of truth.” And that’s sort of an attempt to not have responsibility for the stuff that is there, when they do make decisions every day about what to leave up, what to take down, and how to curate the feeds. So, they do make moderation decisions. That’s the first thing.

And the second thing, to the point of Section 230, is that in the US we’ve historically preferred the platforms to make these decisions themselves, because then it’s not someone who’s in office telling them what to do. And we can imagine the real risks of what that would look like under this president, with the executive order Trump signed on social media, which was basically an authoritarian-looking attempt to, you know, censor speech on Twitter because people say mean things about him, for example. And so we’ve seen the value of having platforms make the decisions independently. The question is, how transparent is it? And how accountable are they to the people, not just in the US, but around the world, who are actually impacted by those decisions?