Future Tense

The Lawsuit Against America Online That Set Up Today’s Internet Battles

The plaintiff, who sued because of tasteless comments about the Oklahoma City bombing made under his name, lost—but he still wants to fix the internet.

The America Online logo is seen in the center, above a judge's gavel. The Google, Twitter, and YouTube logos are seen on the left side, and the Instagram, WhatsApp, and Facebook logos are seen on the right side.
Photo illustration by Slate. Photos by CHIARI_VFX/iStock/Getty Images Plus, Twitter, Google, Facebook, YouTube, Instagram and WhatsApp.

If you’re a member of a certain nerdy part of the internet, the phrase “Section 230” may make your blood pressure spike. Section 230 of the Communications Decency Act, which says that internet platforms are not liable for much of the content posted by their users, became law nearly 25 years ago, but recently, it has become a lightning rod for criticism both from those who believe that platforms do not adequately block harmful content and those who argue that platforms overmoderate particular political viewpoints.

These heated debates over Section 230 can be traced to one moment in history: Ken Zeran’s Nov. 12, 1997, loss in a lawsuit against America Online.

Zeran had sued America Online after an anonymous troll repeatedly posted vile jokes about the Oklahoma City bombing using Zeran’s first name and home phone number. The U.S. Court of Appeals for the 4th Circuit affirmed a lower court’s dismissal of his case, not under the First Amendment, but under the recently passed Section 230.

Until that point, Section 230 had received little attention. But the appellate court concluded that a 26-word provision in the law provides sweeping protections to online services for the words that their users post. Because it was the first federal appellate court to apply the obscure new law, the 4th Circuit’s interpretation quickly prevailed in federal and state courts nationwide.

Ken Zeran’s loss meant that platforms generally are not liable for their decision to keep up—or take down—user content. This legal protection allowed Facebook, Twitter, Yelp, Wikipedia, and so many other platforms to base their business models on user-generated content.

Last year, I published a book about the history of Section 230, The Twenty-Six Words That Created the Internet. I devoted nearly an entire chapter to Zeran’s landmark case, as it is undoubtedly the most important court opinion involving Section 230, and perhaps U.S. internet law in general. But I could not track him down to interview him for the book, which was among my biggest disappointments in researching and writing it: his case was so fundamental to the current understanding of Section 230, an understanding now being debated in Congress, the Justice Department, and courts nationwide.

I was stunned in June when, during a brief break from a pillow fight with my 6-year-old daughter, I saw a new voicemail message from a Seattle phone number.

“Jeff, this is Ken Zeran. If that name sounds familiar, you’ve written a lot about my case through the years, and I’d love to speak to you.”

I immediately called him back and had a few long conversations with him over the following week. Zeran reached out to me because he wants to help fix the internet that his case created. He wants to share his story—and thoughts—with policymakers who are considering whether to change Section 230. Zeran rarely spoke publicly about his case over the past quarter century but now thinks it is vital that his voice be heard.

The phone calls started on April 25, 1995, when Zeran, then in his 40s, was a video producer and artist based in Seattle. The callers were furious about something that they thought Zeran did on America Online. This was perplexing, because he had never used the online service.

At first, Zeran thought that the caller had the wrong number. “But then I get another call and then another call,” Zeran recalled. “And many of them wouldn’t even wait to hear what I said. They just vent themselves and hang up.”

He soon heard from an Army Times reporter who clued him in on what had caused all of the calls: an America Online bulletin board post under the screen name Ken ZZ03, signed with Zeran’s first name and phone number. The reporter said that the post, found in America Online’s Michigan Military Movement forum, purported to sell T-shirts with tasteless jokes about the Oklahoma City bombing, which had occurred less than a week earlier. The bombing of the Alfred P. Murrah Federal Building killed 168 people, including 19 children who had been at a day care center.

Zeran quickly contacted America Online to complain, insisting to a staff member that he was not behind the post. The staffer assured him that America Online would remove the fake ad, but the angry calls continued. A new Oklahoma City T-shirt ad had appeared on America Online, this time from the user “Ken ZZ033.” A reporter from Michigan contacted Zeran and faxed him a copy of the ad.

Among the slogans on the T-shirts that the advertisement purported to sell were “Visit Oklahoma … It’s a BLAST!!!” and “Finally, a day care center that keeps the kids quiet—Oklahoma 1995.” The calls escalated, particularly after a radio show host in Oklahoma City read the America Online post, including Zeran’s first name and phone number, on the air.

Some of the callers had lost relatives in the Oklahoma City bombing. Zeran continued to call and fax America Online, pleading for help with the messages. He received some assurances, and a lot of stonewalling. “They were basically giving me a stiff arm,” he told me.

Zeran contacted a New York lawyer, Leo Kayser, who a few years earlier had unsuccessfully represented a plaintiff who sued America Online competitor CompuServe. Kayser wrote a six-page letter to America Online, outlining Zeran’s repeated pleas for help and the loss of business he’d suffered due to the constant phone calls.

In January 1996, Zeran sued the Oklahoma radio station, and in April he sued America Online for negligence, alleging that the company failed to exercise reasonable care after he notified it of the posts. Zeran said that he sued primarily because of America Online’s response to his complaints. “They were not helpful at all,” Zeran said. “They were not talking to me in good faith. And I thought, ‘That’s not right.’ ”

To this day, Zeran does not know who posted the ads. And he blames America Online for creating a system that allowed people to remain anonymous even after creating such harmful posts. “If you’re a company that has a fleet of cars, do you not know who’s driving them?” Zeran asked.

America Online claimed that it was immune from Zeran’s lawsuit under an obscure new law, known as Section 230 of the Communications Decency Act. Section 230 was part of a massive overhaul of U.S. telecommunications laws that President Bill Clinton had signed into law less than three months before Zeran sued America Online.

To understand why Congress passed Section 230, you first need to understand two court cases that led to its passage. The first was Cubby v. CompuServe, which Kayser had litigated in the early ’90s. In that case, the plaintiff sued CompuServe over an allegedly defamatory newsletter article that was posted to a CompuServe forum, accusing him of being fired from a previous job.

The court attempted to apply brick-and-mortar liability rules to these new online services. Under the common law and the First Amendment, a distributor of third-party content, such as a bookstore, is liable only if it knew or had reason to know of the illegal content. A federal judge concluded that CompuServe was a distributor that had no knowledge or reason to know of the alleged defamation and dismissed the lawsuit. “CompuServe has no more editorial control over such a publication than does a public library, book store, or newsstand, and it would be no more feasible for CompuServe to examine every publication it carries for potentially defamatory statements than it would be for any other distributor to do so,” the judge wrote.

CompuServe’s victory did not mean that all online services would receive this liability protection. A few years later, a New York state court judge, in Stratton Oakmont v. Prodigy, concluded that CompuServe’s competitor Prodigy was a publisher, not a distributor, and therefore was strictly liable for any defamation that its users posted. The judge based his 1995 ruling on the fact that Prodigy had promoted its editorial control over user content as an effort to make the service more family-friendly. “Prodigy’s conscious choice, to gain the benefits of editorial control, has opened it up to a greater liability than CompuServe and other computer networks that make no such choice,” the judge wrote.

The Prodigy decision received nationwide attention because it threatened to discourage online services from moderating user content. The ruling came as public attention was focused on minors’ access to online pornography, so many policymakers did not want to create a disincentive for blocking indecent material.

At the time, Congress was overhauling U.S. telecom laws for the first time in 60 years. To address online indecency, the Senate attached to its version of the telecommunications bill a provision known as the Communications Decency Act, which imposed criminal penalties for the online transmission of indecent material.

Many members of the more tech-savvy House were concerned about the First Amendment problems with the Senate’s bill. They also wanted to prevent overregulation of the internet and remove the moderation disincentive created by the Stratton Oakmont case. So they added to their version of the telecommunications bill a provision that would become Section 230. The core of Section 230 is what I call the 26 words that created the internet: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The law has exceptions for federal criminal law and intellectual property law. Section 230 also states that online services shall not be liable for good-faith actions to block objectionable content.

“It will protect computer good Samaritans, online service providers, anyone who provides a front end to the internet, let us say, who takes steps to screen indecency and offensive material for their customers,” then-Rep. Chris Cox, who wrote Section 230 with then-Rep. Ron Wyden, said on the House floor in August 1995. “It will protect them from taking on liability such as occurred in the Prodigy case in New York that they should not face for helping us and for helping us solve this problem.”

Cox also said that he wanted to avoid creating a “Federal Computer Commission” to regulate the internet and instead encourage “the most energetic technological revolution that any of us has ever witnessed.”

In February 1996, Clinton signed the final version of the telecom law, which included both the Senate and House provisions. Section 230 received barely any attention in media coverage, as most of the focus was on the Senate’s indecency law (which the Supreme Court would strike down as unconstitutional a year later).

Few people within the general public—or even the legal community—fully contemplated the potential breadth of the 26 words in Section 230. Broadly, there were two possible readings. By prohibiting online services from being “treated as” the publishers or speakers of third-party content, Section 230 could merely mean that all services are treated as distributors, liable if they know or have reason to know of illegal content. Alternatively, Section 230 could offer broader immunity, barring any liability for third-party content unless an exception applied.

Had the judge in Zeran’s case adopted the first of those readings, America Online probably would not have received Section 230 protection; Zeran had repeatedly contacted the company about the postings. But the district judge in Virginia adopted the second, broader view and dismissed Zeran’s lawsuit against America Online. Zeran appealed to the U.S. Court of Appeals for the 4th Circuit. The presiding judge on the panel was J. Harvie Wilkinson, a Reagan appointee who had been a newspaper editor and has a long history of writing pro–free speech opinions.

Wilkinson’s November 1997 opinion not only affirmed the dismissal of Zeran’s case but presented a broad reading of Section 230 that hundreds of other judges would adopt largely without question. Wilkinson wrote that it was “not difficult” to understand Section 230’s purpose. “Congress recognized the threat that tort-based lawsuits pose to freedom of speech in the new and burgeoning Internet medium,” he wrote. “The imposition of tort liability on service providers for the communications of others represented, for Congress, simply another form of intrusive government regulation of speech.”

Wilkinson wrote of the “staggering” amount of information that online services transmitted. “It would be impossible for service providers to screen each of their millions of postings for possible problems,” he wrote. “Faced with potential liability for each message republished by their services, interactive computer service providers might choose to severely restrict the number and type of messages posted.”

Wilkinson’s reading of Congress’ desire for free speech drove his broad interpretation of Section 230. The statute did not merely mean that all online platforms are treated as distributors, he reasoned, as distributors are merely a “subset” or “species” of publisher.

This finding was particularly crucial for Section 230 to have a sweeping impact on the internet. If Section 230 only meant that all platforms were treated as distributors, then they might be liable if they receive a complaint about a user post but fail to remove it. Under Wilkinson’s broad interpretation, a platform is free to leave content up—or take it down—after receiving a complaint.

The Supreme Court denied Zeran’s petition to review Wilkinson’s opinion. A 1998 New York Times article about Zeran’s case quoted Stanford legal historian Lawrence M. Friedman, comparing Zeran’s case to an 1842 court opinion that limited railroads’ tort liability. “I think the 19th century was dazzled by railroads and new technology, and the courts today are dazzled by new technology,” Friedman told the Times.

That quote has stuck with Zeran for more than 20 years. “I believe that Judge Wilkinson was dazzled,” Zeran said. “He clearly not only was dazzled, but he dazzled himself.”

This dazzlement, Zeran said, is reflected in Wilkinson’s belief that it was “impossible” to remove problematic posts. “The error that he made there is he was coming from the analog era,” Zeran said. “He was unaware of the power of computing and servers.” Zeran says that he was not dazzled by the technology in 1997. He had worked with digital technology since the 1970s, when CBS Sports hired his production company to create a new look for its programming. Wilkinson “should have viewed the new technology as another phase in communications,” Zeran said. “He should have honored the spirit of the law that had been there for a very, very long time, and that law was there for very specific reasons.”

Zeran said Wilkinson should have adopted what he believes to be the proper reading of Section 230: treating all platforms as distributors and providing them with protections until they are notified of the allegedly illegal user content.

Zeran raises some good points about the inequities that he—and others—have faced under the internet that Section 230 created. Particularly, his critique of America Online’s inability to identify his tormentor brings to mind a proposal that Boston University law professor Danielle Citron articulated more than a decade ago: tying Section 230 to a standard of reasonable care that includes a requirement for websites to provide “traceable anonymity” if commenters break the law. For instance, traceable anonymity would require such websites to retain IP addresses, which could be subpoenaed in defamation suits and used to track down posters.

Zeran’s criticism of Wilkinson’s belief that it was impossible for services to moderate user content is more complicated. Platforms can and should do better. Zeran’s case is an early example of a platform that could have done more to mitigate the harms of the continued posts, particularly after Zeran’s persistent attempts to get the company to take his problem seriously. And online services today do use automated tools such as PhotoDNA to screen for illegal material, technology that filters some harmful content. But technology is not a panacea for all online harm; large platforms have hired thousands of human moderators to determine whether user content violates their policies, and even with those investments there are high-profile failures. Section 230, as broadly interpreted by Wilkinson, has given platforms the breathing room to succeed and fail.

Zeran said he never would suggest that platforms have an obligation to proactively detect illegal content; rather, they should be able to handle complaints and remove harmful content after being notified. “The operative word isn’t monitoring or filtering,” Zeran said. “It’s response.”

Did Wilkinson misinterpret Section 230 when he ruled against Zeran? Both of its authors, Cox and Wyden, told me that Wilkinson got it right (though Cox has gripes about a few subsequent opinions that liberally applied Wilkinson’s opinion). Still, it is not unfathomable to suggest that another judge would have read Section 230 more narrowly. Because Zeran’s lawsuit was the first Section 230 case to be decided in both the trial and appellate courts—and the author of the appellate opinion is widely respected—it quickly became the prevailing reading of the statute.

To see how Zeran’s loss affected the future of Section 230—and the potential for an alternative reading to take hold—consider a case filed against America Online in a Florida state court a month before the district judge dismissed Zeran’s case. A mother alleged that a man who recorded the sexual abuse of her 11-year-old son had marketed images and videos of the abuse in America Online chatrooms, despite complaints that the company had received about the abuser. The state trial court, intermediate appellate court, and Florida Supreme Court all concluded that Section 230 barred her claims, and all three opinions relied heavily on the rulings against Zeran.

Yet the Florida Supreme Court’s opinion was split 4–3. The dissenters wrote that “it is inconceivable that Congress intended the CDA to shield from potential liability an ISP alleged to have taken absolutely no actions to curtail illicit activities in furtherance of conduct defined as criminal, despite actual knowledge that a source of child pornography was being advertised and delivered through contact information provided on its service by an identified customer, while profiting from its customer’s continued use of the service.”

Had Zeran’s case not yet been decided, it is possible that at least one more Florida Supreme Court justice would have adopted this narrower, notice-based reading of Section 230. Likewise, other judges might have concluded that Section 230 does not apply if platforms are on notice of the illegal content.

But we are well over two decades past the “what-ifs” for Zeran’s case. He lost, and his loss created the legal system on which so many platforms built their operations.

For instance, it is difficult to conceive of a site like Yelp existing with its current moderation policies under a narrower construction of Section 230. Let’s say a consumer posts on Yelp that a plumber charged $2,000 but did not fix the problem. If Yelp were to face liability upon notice that the review was defamatory, Yelp might feel great pressure to remove the review after receiving a complaint from the plumber. Otherwise, Yelp could face tremendous liability.

I posed this problem to Zeran. He said that he wants to foster free speech, but he also wants to prevent harms such as what he suffered. Zeran would like platforms to face liability for anonymous speech that they fail to remove after receiving a complaint of defamation or other illegality. If the user content is signed with a real name, he said, the platform should at least temporarily remove content that is alleged to be illegal but provide the poster with the opportunity to demonstrate that it is not defamatory or illegal. In the plumber example, he said the consumer could prove this by litigating against the plumber in small claims court.

Zeran asked me what I thought of the proposal, and I pointed out that, as with so much related to Section 230, there are trade-offs. While a notice-based system may target only illegal speech, I said, it could have a significant chilling effect on other speech. “Who needs to live in a society of illegal speech?” he asked in response. “Who does that help?”

I asked Zeran a question that I’ve asked myself many times over the years: What would the internet look like if he had won his case?

“Well, I think it would be much higher-quality,” he said. “I think we’d be living in a much smarter world.”

Zeran, a video producer for decades, said he has great reverence for free speech. But he looks at the current state of online discourse and is not happy. And he believes that this ultimately poses a threat to free speech in the long run.

“It’s a cacophony of a lot of invective speech,” he said, “and look where it’s taken us.”

The views expressed in this article are only the author’s and do not represent the U.S. Naval Academy, Department of the Navy, or Defense Department.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.