Section 230 turns 25 today, and it’s never been more important
On February 8th, 1996, President Bill Clinton signed the Communications Decency Act, a sweeping regulatory framework for the internet. Almost all of the CDA was found unconstitutional, but what remained allowed the nascent web to flourish. It’s called Section 230 — and for the past quarter-century, it’s shaped the internet for better and worse.
Section 230 protects any owner or user of an “interactive computer service” — typically an app or website — from liability for content that someone else posted. Over the past few years, it’s drawn the ire of conservative politicians who want to punish “Big Tech” for banning users, as well as lawmakers and activists who say it lets web services knowingly allow harassment, nonconsensual sexual imagery, and other illegal material. Congress has introduced several major reform proposals, and at least one will likely advance in the near future.
The debate over Section 230, though, is a lot more complicated than one law or a few giant social networks. In the coming weeks, we’ll be hosting a live event with some of the people that Congress’ fight over Big Tech has left out — trying to reset the conversation to be less about Facebook and YouTube and more about the internet at large. But for now, on the 25th anniversary of Section 230 becoming law, I want to look at a few of the big questions that any proposal will have to grapple with.
Section 230 versus the First Amendment
One of the pithiest tech policy commentaries is a 2019 correction in The New York Times. On August 6th, the Times published a long article about “why hate speech on the internet is a never-ending problem,” explaining in a subheader that it was “because [Section 230] protects it.” The next day, the Times soberly noted that hate speech is actually shielded by a different law: the First Amendment. It’s a fact that gets frequently, bizarrely glossed over in Section 230 debates — but is unavoidable in any debate about dangerous content online.
Section 230 shields sites from liability for illegal content, but a lot of bad speech simply isn’t illegal under current laws. It’s legal to claim COVID-19 is a hoax or the election was stolen, unless that claim involves something like selling a scam cure or libeling a voting machine company. It’s legal to leave a racist Facebook comment or wish cancer on somebody in an email, unless you’ve been persistently harassing them. And it’s legal to ban users from a website for doing any of that.
This doesn’t make online abuse or disinformation less potentially harmful, especially when it’s happening at a huge scale. But courts have interpreted the First Amendment as a broad shield against the government banning or punishing speech. Section 230 can’t overrule those protections to make companies remove offensive but legal content.
Some critics explicitly acknowledge this fact. That includes legal scholar Mary Anne Franks, who has called for a narrower interpretation of the First Amendment, as well as Tim Wu, who has argued that current speech law doesn’t account for how modern censorship works online. Many politicians haven’t, though, including President Joe Biden, who wants to repeal Section 230 to stop Facebook disinformation, and his deputy chief of staff Bruce Reed, who appears to think repeal will make YouTube axe creepy Peppa Pig parodies.
Changing Section 230 would affect (for better or worse) how sites handle potential libel, child sexual abuse material, or illegal gun sales. But for hateful content, disinformation, and tough-to-define offenses like inciting violence, the question isn’t when people can sue Facebook. It’s whether the American conception of free speech itself needs to change. And that’s a much riskier, more complicated proposition.
Big Tech versus Dark Corners
To repurpose a couple of massively overused buzzwords, bad content online is concentrated in a couple of places: “Big Tech” and “the dark corners of the internet.”
Big web services — particularly giants like Facebook and YouTube, but also midsized platforms like Grindr and Tripadvisor — have a specific set of problems. While most of their content is innocuous, they’re too big to moderate perfectly, handing a megaphone to the fraction of users who are abusive or dangerous. They’re often highly automated and include recommendation and sorting features that shape what people see online, sometimes in unpredictably negative ways. And being banned from the biggest networks can seriously impact your ability to connect with friends, family members, and businesses. At their worst, the platforms are confusingly moderated and unresponsive to complaints about harassment or illegal material.
Small, toxic sites pose different issues. These might be neo-Nazi forums that encourage racist abuse, niche communities that solicit nonconsensual pornography, or blogs devoted to abusive and libelous rumors. They’re openly hostile to content moderation and host small but devotedly vicious communities. Most people will never visit these sites, but their actions bleed across the web through defamatory search results, organized harassment campaigns, or the swapping of “revenge porn” and child sexual abuse material. Sometimes Big Tech and the Dark Corners collide — as with the violent conspiracy movement QAnon, which launched on 4chan but gained mainstream reach through Facebook. (Again, as discussed above, not all this content is illegal.)
Lawmakers have so far framed Section 230 as almost exclusively a “Big Tech” issue, introducing bills clearly aimed at fixing specific problems with Facebook, YouTube, and Twitter. Section 230 supporters have reasonably argued that this could hurt smaller, well-meaning businesses. But the framing is also bad for people who support changing the law, because it ignores an important piece of the online ecosystem.
“Lock it up” versus “burn it down”
If Congress successfully changes Section 230 so that websites are held liable for harmful content posted by users, there are two very broad possible outcomes for the internet. In one, our current online giants get more power and tighter restrictions; in the other, Silicon Valley’s basic business model no longer works.
The first is a world even more dominated by a few big, familiar sites and apps. Amazon, Facebook, and Google will have the resources to hire giant moderation teams and run complicated automated takedown systems or to fight lawsuits over their content. If Facebook or another company comes up with a functional system, it could offer it as a service to smaller sites — which could either adopt that (likely strict) framework or simply shut down their forums and comment sections.
That’s a double-edged sword. Twitter might be more likely to act on borderline threats or harassment, for instance, but movements like #MeToo would be nearly impossible if abusers could silence accusations with a simple complaint form. (Pirated content gets limited Section 230 protections already, and the result has been an unwieldy and exploitable automated takedown system.) Sites that thrive on unverified gossip could be shut down, but so could legitimate venues for consumer complaints.
Despite plenty of often deserved criticism, people still have overwhelmingly favorable opinions of major social media companies. And a more buttoned-up online landscape would almost certainly mean a lot more content getting taken down by accident — almost inevitably including a lot of silly but joyful stuff like sea shanties or the Ratatouille musical.
In the end, though, it would look roughly like the internet we know today. But there’s a second, much weirder option, even if Silicon Valley’s lobbying power makes it unlikely. Legal scholar Jeff Kosseff has called Section 230 “the 26 words that created the internet,” and it’s an apt description. Section 230 created a web where sites (both commercial and nonprofit) can rapidly scale on user-generated content, draw millions or billions of users, and flesh out a moderation policy and enforcement team as they grow. If mega-services are deemed essentially “unsafe at any speed,” that changes some basic assumptions about how you launch and grow a website.
Whether you think that’s good or bad, those are the stakes that changing Section 230 deals in — and Congress needs to spend more time talking about what that means.