Recapping Recent Section 230 Proposals

Jess Miers
14 min read · Jul 1, 2020

The following is a series of overviews of three recent proposals to amend Section 230. Each overview will give you the TL;DR on what each bill proposes. I also highlight some of my major concerns, especially those regarding content moderation efforts.

I’m housing these bill proposals and my attempts at Section 230 redlines here. I’ll try to keep this updated — so feel free to check back as more proposals surface.

For a deeper dive into any of these bills, I suggest following the Technology and Marketing Law Blog for Prof. Eric Goldman’s famous 4,000+ word insights.

Limiting Section 230 to Good Samaritans Act (Hawley)
Bill Text | Section 230 redline

Requirements

The bill creates a new entity, the “edge provider.” An edge provider is an interactive computer service (a website, online application, or mobile application) that, in any month within the most recently completed 12-month period (see the sketch after this list for how the thresholds combine):

  • was accessed by more than 30,000,000 users in the U.S., regardless of how they accessed the service; OR
  • was accessed by more than 300,000,000 users worldwide, regardless of how they accessed the service.

AND

  • had more than $1,500,000,000 in global revenue (during the most recently completed tax year)
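
To make the threshold logic concrete, here is a minimal sketch of how the two user-access prongs and the revenue prong appear to combine. The parameter names are hypothetical and mine; nothing below comes from the bill text itself.

```python
# Rough sketch of the "edge provider" test as summarized above; parameter names
# are hypothetical, not drawn from the bill text.
def is_edge_provider(
    peak_monthly_us_users: int,      # highest monthly U.S. user count in the last 12 months
    peak_monthly_global_users: int,  # highest monthly worldwide user count in the last 12 months
    global_revenue: float,           # global revenue for the most recently completed tax year
) -> bool:
    meets_access_prong = (
        peak_monthly_us_users > 30_000_000
        or peak_monthly_global_users > 300_000_000
    )
    meets_revenue_prong = global_revenue > 1_500_000_000
    return meets_access_prong and meets_revenue_prong

# Illustration using the New York Times figures discussed below
# (30M+ U.S. visitors, ~$1.74B global revenue in 2018):
print(is_edge_provider(31_000_000, 0, 1_740_000_000))  # True
```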

The bill modifies Section 230(c)(1) in the following way:

In exchange for Section 230 immunity, the bill requires edge providers to promise they will operate their services in “good faith.” Edge providers must include the promise in their terms of service agreements along with a description of the edge provider’s content removal policies.

An edge provider does not act in good faith when it (or its algorithms) selectively enforces its terms of service or removal policies. Additionally, the bill provides a catch-all for any other intentional actions the edge provider takes in bad faith, such as actions taken for an illegitimate purpose, in violation of fair-dealing standards, or with fraudulent intent.

The bill empowers users to bring claims against edge providers that breach their promises to operate in good faith. If the edge provider is found to have breached its promise, it can be liable for $5,000 or more in damages plus attorneys’ fees and other costs.

Implications

In true Hawley fashion, this is primarily another temper tantrum directed at big tech. The $1.5 billion global revenue threshold seems to place smaller services outside the bill’s scope, an obvious shot at Facebook and Google. However, non-social media companies are not quite off the hook. The bill’s users-accessed threshold will cover any large website that attracts visitors in any capacity. Specifically, the bill doesn’t require these users to be “active,” so if at any point a website receives a random visitor (including repeat visitors), that visitor counts toward the users-accessed total. Hence, while the bill covers Facebook, it also covers The New York Times, which had $1.74 billion in global revenue in 2018 and more than 30 million U.S. visitors.

It’s typical to see Section 230 proposals that attempt to wall off startup companies from tech giants like Facebook and Google. While many will point out that bills like these might appear not to directly impact startup companies, they do severely affect competition. Essentially, Hawley creates a ceiling at which market entrants will sell out to their big-tech competitors before they get too big and incur liability costs. These startup “safe harbors” only further cement Google and Facebook’s place in the market.

I’m also unclear as to whether the government can force private entities to embed “promises” into their terms of service in the first place. This seems like a contract interference issue, especially since the alternative is for these websites to forgo Section 230 protection entirely (which would be a death sentence).

Regardless, the real payload here is the prohibition on “selective enforcement” of a website’s terms. This is nonsensical. Every single decision about content, whether to remove it, leave it up, fact-check it, etc., is a selective enforcement of some rule or policy imposed by the website. Bills like these further solidify my theory that regulators really think websites are controlled by a little man behind a curtain with two buttons: remove, don’t remove…and that every piece of content comes with a flag stating whether it’s “illegal” or not.

In reality, the decision to moderate involves complex workflows, processes, and variables, all of which are created and (sometimes) carried out by humans with natural human biases. Users create boundary-pushing content minute by minute. Sometimes policies don’t even exist for certain types of content and decisions have to be made on the fly. Some decisions create intricate edge-cases. Some decisions have to be escalated. Some content is obviously rule-breaking, but most of it is usually not. This makes it impossible to have one broad set of procedures that perfectly and equally applies content moderation decisions to pre-determined subsets of content, and doubly impossible for services to promise their users that they will always handle the same content the same way. Content moderation is inherently subjective. So, under Hawley’s bill, every content moderation decision is technically bad faith.

In response, a user whose content has been removed (or restricted) in any capacity now has the ability to launch a bad-faith lawsuit against the website for “selectively” enforcing its terms. Multiplied by hundreds of thousands of users who feel personally victimized, the potential costs are astronomical.

To cope, websites might consider forgoing moderation efforts entirely, except this might be considered bad faith as well under the catch-all provision. Plus, it opens the door to a flood of “lawful but awful” content (like graphic pornography), an ironic side effect of a bill created to encourage services to be “Good Samaritans” of the web.

Alternatively, websites might try to anticipate every single type of content they might have to make decisions about and build that decision-making process into their terms of service, an unrealistic and impossible approach.

Platform Accountability and Consumer Transparency (PACT) Act (Thune/Schatz)
Bill Text | Section 230 redline

Requirements

The bill starts by requiring providers of interactive computer services (websites) to publish an “acceptable use” policy. The policy must inform users about the types of content that the service allows. Additionally, the policy must explain the steps and processes the service takes to moderate policy-violating content. Lastly, the policy must provide the user with a means to report “potentially policy-violating” content as well as “illegal content” or “illegal activity.”

Potentially policy-violating content is defined as any content that might violate the acceptable use policy created by the service. Illegal content is defined as any content that, as determined by a Federal or State court, violates Federal criminal or civil law, or State defamation law. Illegal activity is defined as any conduct by a user that, as determined by a Federal or State court, violates Federal criminal or civil law.

Accordingly, the service must implement a call center with a live company representative to take user complaints about said content violations. Additionally, the user must be provided with an email address or “relevant in-take mechanism” to handle the complaints. Lastly, the service must also provide the user with an easily accessible method to submit and track “good faith” complaints about potentially policy-violating content, illegal content, or illegal activity, as well as any moderation decision the service makes about the user’s own content.

With regard to illegal content or illegal activity, the bill requires the service, upon notice of a Federal or State court order, to remove the illegal content or stop the illegal activity within 24 hours of receiving the notice (unless the service reasonably believes the notice is illegitimate).

A service will not be required to monitor or affirmatively seek out illegal content or illegal activity. Additionally, the service will not be required to monitor for additional instances of previously notified illegal content.

With regard to potentially policy-violating content, the bill requires the service, upon notice, to review the content, make a determination as to whether the content violates the acceptable use policy, and take appropriate action, no later than 14 days after receiving the notice.

The bill also creates two separate procedures for a service to follow after content is removed depending on the reason for the service’s removal decision:

If content was removed based on a complaint: the service must notify the content creator and explain the removal decision. The content creator must be given an opportunity to appeal, and both the complainant and the creator must be notified of the outcome of the appeal, along with an explanation if the decision to remove is reversed.

If content was removed based on an internal moderation decision: The service must review the content, make a determination as to whether the content violates the acceptable use policy, take appropriate action, and notify the complainant of the final decision along with an explanation, no later than 14 days after receiving a complaint from the content creator.
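
As a rough illustration only (the names and structure below are my own, not the bill’s), the two clocks described above might map onto a complaint-intake workflow like this: a 24-hour deadline for court-ordered illegal content or activity, and a 14-day window for potentially policy-violating content.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical sketch of the PACT Act's deadlines; field names are my own.
@dataclass
class Complaint:
    received_at: datetime
    kind: str                # "illegal" (content/activity) or "policy" (potentially policy-violating)
    court_order: bool = False

def response_deadline(complaint: Complaint) -> datetime:
    if complaint.kind == "illegal" and complaint.court_order:
        # Court-ordered illegal content or activity: remove/stop within 24 hours of notice.
        return complaint.received_at + timedelta(hours=24)
    # Potentially policy-violating content: review, determine, and act within 14 days.
    return complaint.received_at + timedelta(days=14)

# Example: a policy complaint received July 1 must be resolved by July 15.
c = Complaint(received_at=datetime(2020, 7, 1), kind="policy")
print(response_deadline(c))  # 2020-07-15 00:00:00
```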

Additionally, the bill imposes several requirements on the service to publish a quarterly transparency report. The report must include the following:

  • the total number of instances in which illegal content, illegal activity, or potentially policy-violating content was flagged;
  • the total number of instances in which the service took action on the above content (such as removal, demonetization, de-prioritization, appended counter-speech (like fact-checks), account suspension, account removal, or any other moderation action taken in accordance with the acceptable use policy). Additional reporting requirements are described for each action taken;
  • the total number of appeals; AND
  • a description of each tool, practice, action, or technique used to enforce the acceptable use policy.

The bill creates the entity “small business provider.” A small business provider is a provider of an interactive computer service (website) that, during the most recent 24-month period (emphasis added):

  • had less than 1,000,000 monthly active users or monthly visitors; AND
  • had less than $25,000,000 in accrued revenue

A “small business provider” is exempt from the call-center requirements. Additionally, a small business provider can process complaints within a reasonable period of time based on the size and capacity of the service.
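
And a tiny sketch of the small-business carve-out, again with hypothetical names of my own rather than anything taken from the bill:

```python
# Both conditions must hold over the most recent 24-month period.
def is_small_business_provider(monthly_active_users: int, accrued_revenue: float) -> bool:
    return monthly_active_users < 1_000_000 and accrued_revenue < 25_000_000

# A qualifying provider skips the call-center requirement and gets a "reasonable"
# complaint-processing window instead of the fixed deadlines sketched above.
print(is_small_business_provider(500_000, 10_000_000))  # True
```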

Additionally, the bill amends Section 230 to allow for the enforcement of Federal civil laws by State AGs.

Implications

This bill is a Trust & Safety nightmare, which is why I’m sort of shocked to see several folks claiming that these requirements are “not as bad” or “less extreme” than other bills. Perhaps it’s a reading issue; most are jumping directly to the Section 230 amendments. But the payload is found at the beginning of the bill.

First, it will be impossible for a service to create a complete and comprehensive “acceptable use” policy. The best a service will be able to do is attempt to anticipate every edge-case created by the service’s user-generated content. The service must then describe how it might respond to these future edge-cases. But again, this is the impossible challenge of content moderation. The edge-cases are not obvious. Some edge-cases might require the service to create a new workflow entirely from scratch if that type of content has never been acted upon before. Plus, one of the great aspects of innovation in this field is that content moderation flows, tooling, and processes naturally get better over time. So every time those methods are tweaked, the acceptable use policy would require an update. This will not be sustainable, so the acceptable use policy is a fool’s errand.

But what I find most concerning about this bill is the broadly defined “potentially policy-violating” category. Almost all content might be considered “potentially policy-violating” depending on how you spin it. That’s why the large Internet companies dedicate an exorbitant amount of resources to legal and policy specialists and experts who can noodle through the gray areas. A slighted user will always be able to make a case for why the content they’re complaining about is potentially policy-violating. And regardless of how tedious or ridiculous the complaint might be, resources have to be expended on reviewing and responding to every single complaint lodged. On top of that, a complainant can also lodge an appeal, which means even more wasted time, energy, and clock cycles.

Illegal content and illegal activity are defined broadly as well, covering violations of Federal criminal and civil law (and, for content, State defamation law). The notice is at least tempered by the requirement that it be conveyed via a court order; however, the 24-hour turnaround is still incredibly short. Additionally, the process completely denies the Internet company an opportunity to respond, especially with regard to edge-case content. For example, some large Internet companies strive to be conservative about their removals (in an effort to promote speech). Here, a service may no longer have that choice.

While the bill exempts services from having to affirmatively seek out illegal content/activity or monitor for additional instances of the violating content, it’s unclear whether the service would be required to remove subsequent content that in some way implicates the illegal content or activity it was required to remove. For example, if a user is criticizing or commenting on the illegal content, would that user’s posts need to be removed as well?

If the DMCA has taught us anything, it’s that these types of notice-and-takedown regimes are absolutely ripe for abuse. Services should be just as prepared to handle “potentially policy-violating” abuse as they are for copyright abuse. Additionally, with regard to potentially policy-violating claims, the bill only requires that the notice be submitted in “good faith.” This leaves it up to the service to decide for itself whether the notice is legitimate. Because the cost of an incorrect calculus is skewed sharply against the service, the likely outcome will be for services to treat all complaints as legitimate. Unlike the DMCA, the bill provides no penalty for bad-faith complaints. This leaves the door wide open for bots and other abusive takedown strategies.

Regarding the transparency requirements, I do believe Internet companies could do a much better job of being transparent with users (and the government) about their moderation practices. Perhaps the more these Senators understand the complex nature of content moderation, the fewer tone-deaf Internet regulations we’ll see (though I doubt it).

However, regulating transparency, especially the way this bill attempts to do it, will not help regulators realize their transparency goals. Worse, what might result are more obfuscated transparency reports, cluttered by mostly uninteresting and unsophisticated stats created by illegitimate takedown requests.

Furthermore, there are many competitive reasons why Internet companies are hesitant to share all of the inner workings of their moderation strategies. While collaboration is important, competition is really what motivates Internet companies to do better or “nerd harder.” So while it’s important that these services consider being more open about their processes in general, some moderation trade secrets are worth keeping. Plus, just as the acceptable use policy would require so many tweaks as to make it totally unsustainable, so would any report on ever-changing moderation techniques.

Stopping Big Tech’s Censorship Act (Loeffler)
Bill Text | Section 230 redline

Requirements

The bill amends Section 230(c)(1) by reserving the immunity only for websites that take reasonable steps to prevent or address “unlawful” uses of the service. Unlawful use of a website is defined to include cyber-stalking, sex trafficking, trafficking in illegal products or activities, child sexual exploitation, and anything else proscribed by Federal law.

The bill also amends Section 230(c)(2) by constricting content moderation decisions to the confines of the First Amendment. Accordingly, websites that restrict access to “constitutionally protected material” lose protection under 230(c)(2) unless the restriction is “view-point neutral,” limited to “time, place, or manner, in which the material is available,” and the website has a compelling reason to restrict or remove the content in question.

Constitutionally protected material is defined as any material protected by a right under the Constitution (like the First Amendment). The definition further requires that a website adhere to the right regardless of whether it would otherwise be required to do so as a non-state actor.

To be eligible for protections under 230(c)(1) or 230(c)(2), a website must detail its procedures and practices regarding content removal and restriction decisions. Additionally, a website must provide a detailed explanation of its decision-making process to any user whose content it removes or restricts.

The bill creates an exception for civil enforcement actions brought by a Federal agency, office, or other establishment arising out of any violation of a Federal statute or regulation. Additionally, it explicitly places the burden of proof on any party alleging a violation under 230(c)(1) or 230(c)(2). Lastly, it prohibits punitive damage awards against the website for moderation decisions that essentially violate the First Amendment.

Implications

It feels ridiculous spending time outlining why this proposal is terrible. In a stroke of what can only be described as blatant incompetence, Loeffler perfectly illustrates what happens when someone who doesn’t understand Section 230 or content moderation tries to regulate the Internet.

For starters, the bill attempts to provide Section 230 immunity only for websites that take reasonable steps to eliminate unlawful content. As Mike Masnick pointed out, this could have severe implications for services that employ end-to-end encryption. Consider, for example, what happens if the DOJ concludes that encryption creates a barrier to law enforcement action against the distribution of unlawful material (like CSAM): an encrypted service could be deemed to have failed to take “reasonable steps” and lose its immunity.

Additionally, because 230 protection would only shield decisions to remove or restrict unlawful content, the bill now creates a significant liability risk any time a service removes or restricts lawful content. Anyone who has ever worked in Trust and Safety (or knows anything about the First Amendment) knows that there are many categories of “lawful but awful” content (like graphic pornography).

What a lot of folks don’t quite understand is that unlawful content usually makes up the “easy” and automatable cases. Section 230 was really designed to handle the harder edge-cases, which make up the majority of user-generated content. “Lawful but awful” content almost always involves incredibly challenging decision-making. And though technically protected by the First Amendment, lawful but awful content drives away parents, users, and advertisers, which is why websites work so hard to suppress it in the first place.

Not only does the bill create a perverse incentive for services to ignore lawful but awful content, but it also severely penalizes services that risk filtering it anyway. The bill mandates that a service give a detailed response for every removal decision. Imagine how many trolls, spammers, and even just regular users now become entitled to a response for every piece of content they’ve created that’s ever been blocked, removed, filtered, or restricted.

This is a migraine if a service generates pre-written responses for removal requests. Depending on the service, and the types of takedown requests the service usually handles, this could mean redoing, recreating, and re-approving hundreds if not thousands of these responses, another immense waste of resources and time better spent improving other aspects of the service.

Not to mention, the service may not even know, or have a reason to know, why certain types of content were restricted or removed. Services that rely primarily on community moderation and AI might not have the necessary context or details to convey a sufficient reason for the removal to the content provider. This deeply discourages the use of community moderation and AI, shifting the burden back to manual, employee moderation. Ironically, community moderation might result in less biased moderation overall given the larger and more diverse pool of decision-makers available. Isn’t that what conservatives want?

Like the Executive Order and most other proposals to force Internet companies to comply with the First Amendment, the bill houses most of its limitations under Section 230(c)(2) — the rarely used safe harbor.

Most people incorrectly attempt to compartmentalize Section 230 protection into two boxes: the first box for “publication” decisions under 230(c)(1) and the second box for content moderation decisions under Section 230(c)(2). But that’s not really how it works. Instead, courts usually encompass content moderation under 230(c)(1), since it pertains to decisions about third-party content. But Section 230 opponents still, for some reason, view Section 230(c)(2) as the immunity’s engine. So, these bizarre amendments are typical.

Regardless, the bill predicates Section 230(c)(2) protection on “viewpoint neutral” moderation efforts, which is impossible. To this day, I admittedly still don’t understand what “viewpoint neutral” even means regarding online content. For example, if a service removes a forum or group dedicated to #MAGA supporters, does it simultaneously have to find a liberal or progressive group to remove too? Of course, that’s absurd. Technically, the removal or restriction of any kind of content could be considered viewpoint biased. If a service removes all instances of accounts created with the word “cow” in their handle, is the service anti-cow (or anti-Devin Nunes)? Unfortunately, that would be a convoluted and expensive question for the courts to decide on a case-by-case basis (and there will likely be A LOT of cases).

Never mind the massive First Amendment problems created by the government dictating to websites what kind of content they can and can’t carry: Loeffler’s bill will ensure we encounter a lot more porn, spam, and all sorts of other garbage content in our future. Because, again, the best move here is to do nothing and host it all.

Here’s the irony in all of this: conservatives and Section 230 opponents don’t actually want that reality…so it’s not really “free speech” that they want. If anything, they want the selective enforcement of free speech. They want some content to be removed but not all of it. They want some content to stay up but not all of it. What they don’t realize is that Section 230, in its current form, already gives them everything they want.

--

Jess Miers

Senior Counsel, Legal Advocacy at Chamber of Progress