Tyrants on Twitter

Episode 11 cover for Borderlines Podcast

How can Western democracies defend themselves against the weaponization of social media by authoritarian states? Episode #11 of Borderlines welcomes Santa Clara Law Professor David Sloss, author of Tyrants on Twitter, a new book examining Russia and China’s manipulation of digital platforms like Facebook, Twitter, YouTube, and Instagram to wage information warfare. His analysis includes innovative proposals for transnational cooperation to counter this modern threat while still protecting privacy and free speech rights. Listeners will take away fresh ideas about combatting foreign influence operations in the U.S. and Europe, and regulating the internet in the age of disinformation.


Episode Transcript

Katerina Linos (00:03):

Russia and China are manipulating digital platforms like Facebook, Twitter, YouTube, and Instagram to wage information warfare. How can Western democracies defend themselves against these threats? I’m Katerina Linos, your host, Tragen Professor of Law at UC Berkeley, and with me today is Santa Clara Law professor, David Sloss. He’s written a new book, Tyrants on Twitter, which not only describes the weaponization of social media by our adversaries, but also contains innovative proposals for transnational cooperation to counter this modern threat. David, in addition to being a free speech expert, an international law expert, and an expert on misinformation, also has extensive experience with Russia and the USSR and can tell us a lot about the current crises and efforts. Let me start with a basic question, which is who are the tyrants on Twitter? What is the problem? What are China and Russia in particular doing? And then we can talk about your proposal.

David Sloss (01:13):

The book really is divided into two parts. Part one is assessing the problem and part two is coming up with a proposed solution. And the problem I’m focusing on is information warfare, and I think the best way to describe it is that information warfare is sort of a modern form of foreign influence operations. So states have been engaging in foreign influence operations for decades if not centuries, but the new information environment and particularly the advent of digital information technology makes foreign influence operations a lot more powerful in some ways. And so I’m defining information warfare as basically the use of modern digital tools to conduct foreign influence operations. And the book goes into a fair amount of detail about both Russia and China separately. But before I talk about them separately, I want to put this in a bigger strategic context, if you will. One of the key trends in international affairs over the last 10 or 15 years has been rising autocratization and democratic decay or democratic decline, right?

(02:26):

The percentage of democratic states in the world has been going down, the percentage of authoritarian states in the world has been going up. When we think about information warfare, it’s really asymmetric between democracies and autocracies for two primary reasons. One is democracies have open information environments, autocracies don’t, so it’s a whole lot easier for them to penetrate our information environment than it is in reverse. The second is we at least purport to have or try to have free and fair elections, whereas they don’t really in an autocratic state. So if you think about election interference in particular, there are a lot more opportunities for autocracies to engage in election interference to subvert democracy than there are the other way around, and it’s this asymmetry that I think is particularly problematic and that we need to think about how to address.

Katerina Linos (03:24):

Let me ask you about a term you use in the book. So you talk about useful idiots. With the January 6th reports coming out, with the midterms coming out, you characterize Donald Trump as the example of a useful idiot. Should we be focused on Russia and China or should we be focused first and foremost on domestic actors?

David Sloss (03:48):

I think we need to focus on both, but they’re different. So let me just say the term useful idiot is a translation of a Russian term. It’s not my translation and I can’t tell you the Russian term, but this has actually been a part of Russian strategy for a long time. So Russia’s been engaged in what are often termed active measures, going back at least to Lenin and Russian use of active measures or their sort of strategy of active measures has always tried to take advantage of what they call useful idiots, at least in translation, which is people within the country they’re trying to influence who will essentially help them carry out their agenda. I do think Donald Trump served very much as a useful idiot for Russia. Trump’s goals aligned very much with Russian goals. I doubt that that was because Russia was sort of actively using him as an agent.

(04:43):

I don’t think they recruited him as an agent, but I think that just because his agenda aligned to a large extent with their agenda, he served as a useful idiot. So you asked about, “Well, should we be focusing on domestic versus foreign?” I think both. The book focuses very much on the information warfare conducted by foreign actors. I actually completed the manuscript for the book in November of 2020, right around the time of the election. Stanford University Press, the publisher — for whatever they publish, they send it out for external review before it goes final. While it was out for external reviews, January 6th happened. And so I got a lot of feedback from external reviewers about, “Well, what about January 6th? What about domestic disinformation?” I think domestic disinformation is a huge problem, but it requires a very different kind of solution for a couple of reasons.

(05:40):

One is that domestic sources of disinformation get First Amendment protection in a way that foreign agents do not. As a doctrinal matter, the First Amendment does not protect the free speech rights of Russian and Chinese nationals, at least if they’re operating from Russia or China. The First Amendment does protect our right to hear what they have to say, but still, from a First Amendment standpoint, we have a lot more flexibility to go after foreign agents than we do to go after domestic sources of disinformation. The other thing is that I think the foreigners, what they’re doing to us, they’re doing to Western Europe and to democracies around the world. So when we’re dealing with foreign sources, there are real opportunities there for international collaboration among democracies. Whereas some of the domestic stuff is really a uniquely American problem that I think we have to deal with ourselves, that doesn’t lend itself as well to international cooperation.

Katerina Linos (06:39):

David, I’m completely convinced that the threat is real, that we need to do a lot both about China and Russia and about domestic, useful idiots. I also found the last chapter of the book in which you discuss the First Amendment issues very compelling. But before we get there, could you, for our listeners, talk about the core of the proposal? I think it’s amazing that you have a proposal rather than merely describing a problem. So what’s in that proposal?

David Sloss (07:07):

I propose a new international grouping, which I call an Alliance for Democracy. And in light of some of your recent work, I’ve been thinking a little more about is this a new international organization or is it a network of states? But leaving aside that distinction, the Alliance for Democracy would be a grouping of 35 or 40 states that are essentially solid, liberal democracies who would collaborate to deal with this problem of initially Chinese and Russian information warfare, although the group of bad actors could be extended to include Iran, North Korea, and some others potentially. And the proposal really divides states and the world into three groups. There are the states that are members of the Alliance for Democracy, like I said, about 35 to 40 liberal democratic states. There’s China and Russia, who are the primary bad actors, and then there’s the rest of the world. And there are different speech rights for different groups, basically that citizens or nationals of the Alliance member states have unrestricted free speech on social media, period.

(08:12):

For China and Russia, I distinguish between people who are state agents and people who are not state agents. My proposal — and some people think this is radical and we can come back and talk about that more — is, I propose banning Chinese and Russian state agents from social media platforms, so don’t let them on at all. Then everybody else from China and Russia who’s not a state agent falls into the same group as citizens and nationals of non-member states, right? So take Venezuela as an example. Venezuela would not be considered a liberal democracy, but they would not be subject to the ban the way China and Russia are. So people from Venezuela get essentially unrestricted free speech on social media with one significant caveat. That is, if they want to comment on elections in states that are members of the Alliance for Democracy, those comments would be subject to disclaimers or warnings.

(09:06):

The social media companies would attach a warning label to election related speech from somebody from Venezuela who’s commenting on an election in Canada, sort of warning Canadians, “This is coming from a person who’s not from a democratic state.” That raises a question though. How do you determine the nationality of the people who are speaking on social media? And to do that, I propose, and this is the other part of my proposal that I think is controversial, a registration system where anyone who claims to be a citizen or national of an Alliance member state would have to be checked out by some government entity to confirm, “Yes, this is a real person.”

(09:48):

And the reason for that really is that what Russia did during the 2016 US election that was most effective, they created a lot of what I call fictitious user accounts where they essentially pretended to be, let’s say, an African American, and then they’re commenting on stuff where they’re trying to appeal to an African American audience, or they pretend to be a Second Amendment advocate, and they’re commenting on stuff where they’re trying to appeal to that group, and they’re doing this under aliases, right? So in order to basically get at covert Chinese and Russian agents, we need to make sure that somebody who’s setting up an account on social media is not a covert Chinese or Russian agent. And so effectively the way to do that is if somebody setting up an account says, “I’m a US citizen,” the US government would say, “Yes, indeed, there’s a US citizen with this name and this identifying information. This is a real person.” If they claim to be a Canadian citizen, same thing with the Canadian government. That’s the basic idea.

Katerina Linos (10:51):

I’m totally with you, that we need to have different rules for China and Russia and for Borderlines’ listeners, perhaps the episode with Tom Ginsburg on why the future might involve different rules for authoritarian states and democratic states is a good episode. I’m not a hundred percent sure about the registration system. In particular, I see why it would be effective for lobbyists, I see why it would be effective for arms dealers and sellers of microchips. I’m wondering for a social media account, aren’t there easy ways to log on that don’t involve so much governmental involvement, easy ways if one platform followed the rules, for other platforms to appear and become dominant that didn’t require this cumbersome registration system? Would this be practical and effective?

David Sloss (11:43):

The problem is that for somebody who lacks technical capacity like me, it’s very easy for Facebook or Google to say, “Oh, I can tell this person’s logging on from the US.” But for somebody who’s got greater technical sophistication than I do, they can easily hide their location so it’s much more difficult for the companies to identify the location of the account, which is one reason for the registration system. I don’t think the registration system is unduly cumbersome. And the one thing that I like to point out here is that a lot of people worry about, “Oh, this is going to have an adverse impact on user privacy,” I actually don’t think that’s true. And in some ways, it could actually help with user privacy because what we’ve got going on right now, there’s lots and lots of behind-the-scenes cooperation between the social media companies and governments, in particular the US government.

(12:40):

So we think the US government doesn’t know what we’re up to on social media, but actually you’ve got a whole team in Facebook that they call their Information Warfare Team. That Information Warfare Team is staffed with people who used to work in the Department of Justice, the Department of Homeland Security, these kinds of things and they’re having lots and lots of back channel communications with people in the government to help them deal with some of these problems. So the registration system would actually create rules for this, regulate that communication between the social media companies and the government, and in some ways limit that back channel communication and essentially force that communication more out into the open in a transparent way. So in that respect, it might actually help with user privacy. The problem is we just have very, very little information about what’s happening right now in that back channel communication and how much information the companies are turning over to the government about us, and how much back and forth there is between the companies and the government about us.

Katerina Linos (13:46):

So it was totally fascinating for me to hear exactly the same thoughts from students of mine who’ve worked for Google in different countries. And the students who’d worked for Google in Latin America said, “No, we have no identifying information.” And students who’d worked for Google here in California said, “Actually, we have everything. And I as a Google employee had so much access to all of your data, so don’t think that the government won’t get it from us.” The students there said the big concern was when the government says, “Give us all of your data within a particular perimeter,” and the rules aren’t clear. Could you talk a little bit about what is actually politically feasible in this sphere? My sense is that American regulators acknowledge the problem but are not quick to pass new rules, whereas European regulators are happier to try to comprehensively regulate privacy, plus the internet, plus the internet giants in particular.

David Sloss (14:49):

It’s certainly true that Europe has been a lot more aggressive in regulating the internet generally and social media particularly, whereas Congress has just been sort of at a standstill on this. So there are a lot of political hurdles that would have to be overcome in order to implement a kind of proposal that I’m pushing here. I don’t think it’s really possible without US leadership, and I think what it really takes is somebody within the executive branch in the United States who’s given this portfolio and told, “Take it and run with it.” Early in the Biden administration, I was doing a little bit of lobbying to try and get the Biden administration to create a new ambassador-at-large position within the State Department for information warfare. That proposal was met with, let’s say, not a lot of enthusiasm for a variety of reasons. Again, not because they don’t think it’s a problem, but for other kinds of reasons.

(15:48):

I don’t think you’re going to get initiative from Congress on this in the US. I think it’s going to have to take leadership from the executive to make it happen. And the Europeans frankly are, with some good reasons, skeptical of the US in this area. The Europeans are like, “Well, why should we enter into agreements with the US when you could have a change of administration and you’ll just tear up whatever agreement we do?” So the Europeans are great about doing their own regulations of the internet and social media, and they’re doing a lot in that area. But in terms of taking the lead to create what I’m calling this new Alliance for Democracy, I don’t think there’s anyone in Europe who really has diplomatic clout to take the lead and make that happen. It’s going to have to come from the US and it’s going to have to come from basically a senior official within the executive branch who’s given the responsibility of, “Go make something happen here.”

Katerina Linos (16:41):

What about unilateral efforts by European regulators? I am amazed, every year we get these European officials who have some proposal that to me seems very far-fetched. So they come in and say like, “We want to do affirmative action in bidding. We want to regulate artificial intelligence. I’m a senior commission official. I’m in California for a year,” and then a year later, their proposal is passed. And now the European Union decided, “Oh, we need to send a senior-level official to deal with the tech companies.” And they have these two amazing new rules, the Digital Services Act and the Digital Markets Act. It must be that all the tech companies are working with them behind the scenes just to figure out what these onerous obligations are like. Why not try lobbying these regulation-happy Europeans who say, “We have authority over anyone who sells in the European market, so we’re using that as a basis to fill in the gaps that California and federal regulators are not filling”?

David Sloss (17:51):

The focus of the Europeans has been very much on privacy, transparency, what they call competition law, what we call antitrust law. They do have a small EU office on disinformation that is primarily in the business of trying to identify disinformation when it gets out there, flag it for people so that they can see that it’s disinformation. I really see this as a broader geopolitical problem. I don’t think the Europeans are looking at it in that way. I haven’t seen a whole lot coming out of Europe that says they’re framing it as a geopolitical problem. It’s interesting that in light of the invasion of Ukraine, maybe they would be a lot more open to seeing things in that way. For a long time, what you had with Europe was they wanted to be friendly with Russia, they wanted to be friendly with China. That friendliness towards Russia is gone and the skepticism towards China is growing. There may be opportunities that I’m not sufficiently plugged into for the European regulators to have a good sense of what’s possible there.

Katerina Linos (19:00):

I’ll say a couple more things about Europe and then move on to Russia and China. So my sense is that they’re framing everything as antitrust problems because that’s what they’re allowed to do. What they’re actually doing goes way outside of antitrust. For example, they want Google to pay more taxes and Apple to pay more taxes. They’re framing that as an antitrust problem and putting billions in escrow. They want to regulate Google, Facebook, Apple, and Microsoft comprehensively. They’re calling that a gatekeeper rule and they’d have these really high thresholds and they’re saying, “Well, we don’t want Apple to restrict the iPhone to Apple apps and Apple payments. This is an antitrust problem.” But it’s really comprehensive regulation.

(19:43):

Then they have this other new set of rules that will kick in for smaller players, and you’re exempt if you have fewer than 50 employees, but it really hits a lot of actors who are going to be in the European market. You can avoid the European market altogether. But as Anu Bradford, who was also on a prior episode, said, “That’s not really possible.” Her “Brussels Effect” theory suggests that many of the medium and large players will want to be compliant with extraordinary European rules.

David Sloss (20:17):

Actually, this is similar to some of the stuff that Elizabeth Warren has been pushing here in the US. I had a really interesting conversation with one of her staffers a while ago who was basically saying to me, “Well, we think we can get at the disinformation problem through antitrust law, use antitrust as a tool to help regulate disinformation.” I’m a little skeptical of that, but I don’t want to write it off. I mean, I think we got to be creative about thinking of ways to do this because this really is a significant problem and I don’t want to claim that I have all the answers. There are a couple of different proposals out there actually, but I think it’s a good thing for people to think about coming at it from different angles and maybe we can use antitrust law to get at it. I’m not sure.

Katerina Linos (21:02):

I think the other tool that the Europeans have that we don’t is the hate speech regulation, that they can just take down all of the content we’re most concerned about really rapidly on that basis. And I know you’re an expert on free speech and hate speech. Is that a useful tool?

David Sloss (21:21):

If you think about Europeans regulating domestically within the European Union and the US regulating domestically within the US, the fact that for them prohibitions on hate speech are perfectly fine, while for us prohibitions on hate speech violate the First Amendment, means it’s actually easier for them to regulate domestically than it is for us, if you compare existing US First Amendment doctrine with existing European doctrine on free speech. This came up in a class I was teaching yesterday. I have a student in the class from Germany, and she was saying, “Well, we don’t protect free speech as free speech. We protect freedom of opinion,” and that’s different. So they have a lot more flexibility to think about ways to regulate speech, whereas under current First Amendment doctrine, I think Congress is a little more hamstrung than European regulators are on that.

Katerina Linos (22:15):

Let’s talk about the First Amendment. Why are speaker-based restrictions consistent with the First Amendment? How is this registration system legal domestically?

David Sloss (22:29):

I spend a lot of time on this in the book and I start by laying out these two different visions of the First Amendment, which I refer to as the Madisonian view and the Libertarian view. There’s no question that the Supreme Court recently has been trending more in the libertarian direction in its First Amendment jurisprudence. I tend towards more of what I call a Madisonian view, and I think that the Madisonian view is probably, as the name suggests, more consistent with the original understanding. So the proposal that I’ve come up with here, I think from a Madisonian standpoint, is not terribly problematic because what are two of the key goals of the First Amendment? Two of the key goals of the First Amendment are, one, to ensure that truth prevails over lies in the marketplace of ideas. And number two, to basically strengthen a healthy democracy. These are two of the key things that I’m actually trying to accomplish with this proposal, to make sure that truth prevails over lies in the marketplace of ideas and to strengthen or reinforce our democracy, not just here in the United States, but in other democracies as well.

(23:38):

So from that standpoint and from a Madisonian standpoint, the basic proposals are consistent with the goals of the First Amendment. Now, what Libertarians emphasize more, and this is also a part of First Amendment theory and doctrine, is a basic distrust of government. This is deeply ingrained in American culture, there’s no question about this. We have a distrust of government. So if you approach the First Amendment from that standpoint, it looks more suspicious, particularly the registration requirement I think looks suspicious. And one of the problems here, and I’m just going to be very frank about this, is that what you don’t want to do is chill too much speech.

(24:18):

There’s a real risk here that — I should say one other thing about how the registration requirement is set up for the US and other western democracies, which is essentially every social media user has a choice. I can set up a private account, in which case I’m exempt from the registration requirement, but I’m limited in terms of the size of the audience I can reach, or I can set up a public account, in which case I can reach an unlimited audience, but I have to register and I have to have the government check that out and say, “Yes, David Sloss is a real person. There is a real American citizen named David Sloss,” etc. So there’s a real concern here that a lot of people, driven by that distrust of government, will say, “Fine, I’m just going to set up a private account, not a public account. That way I avoid the registration requirement.” And then we end up with a very widespread chilling of speech, which I think would not be a good thing. That I don’t want. That’s not the goal here, but that’s a risk with this.

(25:17):

I do think part of the issue here is, “How do you sell it to the public? How do you sell it to social media users? And can you give people enough assurances that by registering, they’re not actually exposing themselves to government surveillance?” And I think if you can do that, if you can roll it out in a way in which people are satisfied that, “No, this is not a sort of backdoor for government surveillance,” then you don’t end up chilling a lot of speech. You don’t have a whole lot of people shifting from public accounts to private accounts. And then I think it should be generally, broadly consistent with the First Amendment. Libertarians will certainly be more skeptical than Madisonians when it comes to this proposal.

Katerina Linos (25:57):

I must say that it’s not only libertarians, but minority communities and others who’ve been surveilled by the government who might be very hesitant.

David Sloss (26:05):

Yes, I acknowledge that point.

Katerina Linos (26:07):

Fair enough. And I think it was really critical that you clarified the option of private versus public accounts. I think that’s something that makes the proposal much, much stronger than it otherwise would be. How much does domestic versus foreign matter in the internet era? If the Europeans persuade Twitter or Facebook or a smaller company, that, “If you want to operate in Europe, you need to sign up for this code of conduct which says no hate speech the way we define it. And if we, the Europeans, declare there’s a crisis, you need to shut everything down within 24 hours,” which is actually what’s in the new rules they’ve passed — is that not a mechanism that allows the Europeans to circumvent First Amendment problems for the US?

David Sloss (26:57):

Certainly, it’s true that regulations passed in Europe end up regulating globally insofar as they apply to the big internet companies. This is what they did with the GDPR. They said, “Look, it’s easier for us to do this globally than it is to have one set of in-house rules for Europe and a different set of in-house rules for the US.” So GDPR comes along and the companies are basically doing GDPR everywhere because that’s easier for them to do, and I assume it’s going to be the same for the Digital Services Act, that they’re going to take the same approach to that, which makes a lot of sense for them. If you think about disinformation, you can think about either content-based restrictions or speaker-based restrictions. Part of my argument in the book is that content-based restrictions for disinformation don’t really work very well because you have what a lot of people refer to as the Whack-a-Troll problem.

(27:50):

One thing goes out there, you slap it down, another thing comes out, and they can basically keep putting out more and more stuff faster than the companies or the governments can slap it down. So this is partly why I think you need speaker-based restrictions to deal with the problem of disinformation and information warfare. The Europeans have been very reluctant to go that route of speaker-based restrictions. Now, in response to the war in Ukraine, they did shut off Russian state media companies. They said, “We’re not letting RT out there anymore because of Ukraine.” So there may be an opening here to lobby Europe on speaker-based restrictions, but they have been pretty reluctant to go that route. And I really think that for Russian and Chinese disinformation in particular, you need a speaker-based approach rather than a content-based approach because otherwise you’ve got the Whack-a-Troll problem that is, I think, insurmountable.

Katerina Linos (28:46):

Let me point listeners to an essay by David Kaye on the RT restrictions in the American Journal of International Law where he criticizes the European restrictions from a human rights perspective. I’m comfortable with that restriction as a normative and as a legal matter. What I’m worried about and where I wanted to push you is the practicality. So I can see how you can shut off RT. I don’t see how you can shut off every individual fake Russian account that has a presence on a platform like Facebook or another platform, which you discuss a lot, TikTok.

David Sloss (29:20):

You can’t shut them all off. That’s an impossible goal. But here I go back to my former background as an arms control negotiator, and I really approached thinking about this problem in part from, “Okay, I’ve worked on designing arms control verification regimes,” and when I was doing that, the goal was never a verification regime that’s perfect, right? You can’t achieve that. What you want is a verification regime that’s good enough that you essentially raise the cost for the other side, you make it sufficiently costly and difficult for the other side that that essentially rectifies this asymmetry that I was talking about earlier. That’s fundamentally what the registration system is designed to do. The registration system won’t get rid of every fake account out there from China and Russia. They’ll still be able to put out fake accounts, but it’s going to be a lot more costly and a lot more difficult.

Katerina Linos (30:13):

Thank you so much for that, suggesting that the perfect should not be the enemy of the good, that we need to improve the system. I think that’s a really good place to end the interview and to strongly recommend to people who want to hear more that they buy the book, Tyrants on Twitter: Protecting Democracies from Information Warfare. Or if like me, you love listening to the book, you can also get it on Audible and in other formats. Thank you so much.

(30:42):

I hope you enjoyed this episode of Borderlines and got some new ideas on how we can regulate the internet in the age of disinformation. If you want to read more, check out the episode’s show notes, where you can get links to buy a physical copy of the book or an audio version. And join us for the next episode, when Kal Raustiala, UCLA law professor, will discuss his biography of Ralph Bunche, a Nobel Laureate, groundbreaking scholar, and civil rights advocate.