
By Gwyneth K. Shaw
From deepfake images to viral memes hyping wild conspiracy theories, the 2020 internet is rife with misleading — and often outright false — information. As the presidential election came and went, the problem grew like a hurricane over warm waters. Yet as the biggest platforms, including Facebook, Twitter, and YouTube, struggled to curb the lies and threatening content, fresh material popped up to replace it.
The problem has prompted calls from both ends of the political spectrum for tighter control of these wildly popular platforms. The critical question is how the government could do that without running afoul of existing laws, as well as the First Amendment’s free speech protections.
Professor Pamela Samuelson, a faculty co-director of the Berkeley Center for Law & Technology, is a pioneer in digital copyright law, intellectual property, cyberlaw, and information policy. A Berkeley Law and UC Berkeley School of Information faculty member since 1996, she’s on the board of directors for the Electronic Frontier Foundation and the advisory boards for the Electronic Privacy Information Center, the Center for Democracy & Technology, Public Knowledge, and the Berkeley Center for New Media.
Samuelson taught a new fall course, Regulating Internet Platforms, which she will teach again in spring 2021. Below, she takes stock of what’s happened and where President-elect Joe Biden’s administration might take this issue.
Q: The spread of false or misleading information, particularly on social media, was a major issue during the 2020 campaign and many of the larger platforms took steps to impede that spread. How successful were those efforts?
Samuelson: Some types of disinformation (the term of art for deliberately misleading or false information) are easier to detect and root out than others. Twitter and Facebook were the most proactive, sometimes taking down disinformation about the election or about COVID-19, sometimes flagging it as contested, and sometimes referring people to another site where more accurate, vetted information could be found.

One of the things that makes content moderation so difficult is that people who are determined to spread this kind of information get clever in how they express their views so as to “fool” the automated content recognition technologies, which look for the presence of certain words. One interesting technique Twitter used was to prevent people from retweeting content directly, since retweets are one way disinformation goes viral. If you wanted to retweet a message, you had to quote-retweet it, which involved making a comment on it. That forced the retweeter to reflect on what the message said and why they thought it was worth sharing.
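For illustration only, here is a minimal sketch, in Python, of the kind of naive keyword matching described above; the watch list and messages are hypothetical, and real platform systems are far more sophisticated. It shows why a slightly altered spelling can slip past a filter that only looks for exact words.

```python
# Hypothetical sketch of keyword-based content flagging (not any platform's real system).
# Illustrates why deliberately altered spellings can evade a naive exact-phrase match.

FLAGGED_PHRASES = {"election fraud", "miracle cure"}  # hypothetical watch list


def is_flagged(message: str) -> bool:
    """Return True if the message contains any watched phrase verbatim."""
    text = message.lower()
    return any(phrase in text for phrase in FLAGGED_PHRASES)


print(is_flagged("Shocking new proof of election fraud!"))  # True: exact phrase present
print(is_flagged("Shocking new proof of electi0n fr@ud!"))  # False: altered spelling evades the match
```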
Q: What is Section 230 of the Communications Decency Act and why is it important?
Samuelson: In 1996, as part of the overhaul of the Communications Act, which mainly concerned regulation of the telecom infrastructure, a separate bill got tacked on which says that no one who provides interactive computer services will be held as the speaker or publisher of content provided by others. (Speakers of libel can be held liable; so can publishers, as spreaders of the libel.) This means that if a user posts a defamatory message on a web 2.0 site, that user can be sued by the victim, but the platform cannot — or, more accurately, the platform will be able to get out of a case charging it with some wrongdoing through a motion to dismiss.
As I say in a forthcoming paper on some of the proposals, which will be published in the March issue of Communications of the ACM, a computing professionals journal, there’s no bill yet in Congress that would repeal Section 230 outright. However, numerous bills would give it a significant haircut.
Q: What are some of those policy proposals to change the section?
Samuelson: Members of Congress have taken several different approaches to amending Section 230, including widening the categories of harmful conduct for which the immunity is unavailable. Right now, Section 230 does not apply to user-posted content that violates federal criminal law, infringes intellectual property rights, or facilitates sex trafficking. One proposal would add to this list violations of federal civil laws.
Other bills would make immunity dependent on compliance with certain conditions, or would require companies to spell out their content moderation policies with particularity in their terms of service and limit immunity to actions consistent with those terms. Still others would allow users whose content was taken down in “bad faith” to bring a lawsuit challenging the removal and be awarded $5,000 if the challenge succeeds.
Some bills would impose due process requirements on platforms concerning removal of user-posted content. Other bills seek to regulate platform algorithms in the hope of stopping the spread of extremist content or in the hope of eliminating biases. So there are a lot of possibilities, including action from the Federal Communications Commission, separate from what Congress might consider.
Q: Are there other ways to alter the landscape?
Samuelson: Neither legislation nor rule-making may be necessary to significantly curtail 230 as a shield from liability. U.S. Supreme Court Justice Clarence Thomas has recently suggested a reinterpretation of 230 that would support imposing liability on Internet platforms as “distributors” of harmful content. Applying Smith v. California, a key precedent on distributor liability, to platforms under 230 could result in them being considered “distributors” of unlawful content once on notice of such content. Section 230, after all, shields these services from liability as “speakers” and “publishers,” but is silent about possible “distributor” liability.
Endorsing this interpretation would be similar to adopting the notice-and-takedown rules that apply when platforms host user-uploaded files that infringe copyrights. These have long been problematic because false or mistaken notices are common and platforms often quickly remove the content, even if it is lawful, to avoid liability.
Q: How do broader First Amendment concerns complicate the effort to curtail the spread of lies online?
Samuelson: There are many things that the government can’t do regarding information content that platforms can. If the government tried to force a platform to take down disinformation about COVID, for example, the host of that information could raise the First Amendment as a defense. But the platform as a private entity can take down pretty much whatever it wants for pretty much any reason. There have been efforts to claim that major platforms are “public forums” so that they are constrained like the government is, but courts haven’t found this persuasive.
The platforms get to decide what they think the truth is.
Q: What are some possible ways to resolve that tension in favor of the truth?
Samuelson: There is a growing literature about platforms as private governance entities and about how, if at all, their choices can be constrained by law and public policy. The EU is way ahead of us on these measures, partly because they have a different conception of what freedom of expression means than we do in the U.S. (Hate speech, for example, is mostly First Amendment-protected speech in the U.S., but not in the EU.) Several of the proposals pending in Congress right now would, if adopted, be struck down as violative of the First Amendment.
More disclosure about what a platform’s content moderation policies are and about what it takes down and why, along with processes that allow aggrieved users to appeal takedown decisions, are measures that are probably constitutional and would improve the too-much-harmful-content situation we are experiencing now.
Q: Do you expect different moves from a Biden administration than we might have seen with a second Trump term?
Samuelson: Biden, like Trump, has said he thinks 230 should be repealed. This is not a well-thought-through proposal on his part. I expect him to make a legislative proposal at some point in his first two years. One problem with getting legislative consensus is that the conservatives are upset that the platforms take down too much (like hate speech, disinformation, Holocaust denial), while the liberals and progressives want the platforms to take down more harmful content.
I don’t think Biden would send the Federal Communications Commission a proposal to regulate platforms, so that is one difference from what Trump proposed.
Q: This was a hot topic before you decided to teach this course, but now it’s even more timely. What were some highlights of your discussions with the students and what do you plan to explore in the spring course?
Samuelson: The best sessions were those in which we discussed hate speech, terrorist content, and disinformation. We spent time trying to define those terms, then worked through borderline examples in breakout sessions; afterward the students shared each group’s points of view, and we talked through what options the platforms had. In the last class, before the students gave short talks about their paper topics, I invited two guest speakers who have been involved in challenges to the Trump executive order.
I expect sessions like these again in the spring. Maybe we’ll spend the last class discussing what Biden should do.