Seventh Gear Communications

Explaining Section 230 and Its Importance for Online Content


Last month, a Presidential Executive Order targeted social media companies for what the White House considered to be online censorship of conservative views. Specifically, the Executive Order targeted Section 230 of the Communications Decency Act of 1996 and sought to strip its broad liability protections from social media companies found to be suppressing free speech.

There’s a whole lot to unpack here, and without getting into any political bias, we’re going to dive into what Section 230 is, why reforms to Section 230 have been under discussion for a while, which reforms could be considered, and what might happen next. Since a good portion of our business here at Seventh Gear Communications is focused on handling clients’ social media, this is a timely topic, and one where it’s easy to lose sight of what’s actually going on.

Let’s get started.

Section 230

The Communications Decency Act was passed in an era when the Internet was in its infancy. In 1996, “social media” as it was known then consisted of ICQ, followed by SixDegrees a year later and LiveJournal in 1999. It would be another four years before LinkedIn and the now infamous MySpace launched. The world of online interactions looked much different: it was totally new and developing rapidly.

When Congress passed the Communications Decency Act of 1996, its main goal was to shield minors from pornographic and other indecent content online.

Section 230 of that law basically says that online platforms should not be treated as publishers, and are therefore not liable for the content that individual users share. In the statute’s words: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

There are some notable exceptions (more on those in a moment), and ISPs were included in the liability protections as well.

It’s not a free-for-all, though. Section 230 carves out intellectual property and federal criminal law, so online platforms and users can still be held liable for copyright infringement or for federal crimes committed on the platform, whether by the platform itself or its users. This means that social media sites can still be held responsible for user-generated pirated content, illegal content, or child pornography; Section 230 simply doesn’t shield anyone from those laws.

Section 230 is a big part of the reason why online platforms were able to grow quickly, because the law was designed “to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation.” Essentially, Section 230 allowed online sites to moderate their users’ content without fear of lawsuits or excessive regulation. Before Section 230, online platforms had little incentive to step in and moderate anything, including clearly illegal content, because moderating could expose them to publisher liability.

Prior to Section 230, courts generally treated online sites as distributors of content rather than publishers, unless a site moderated its content, in which case it risked being treated as a publisher. That’s a big distinction when it comes to liability for what gets posted.

This post does a great job of going into detail about how Section 230 came about.

You can read the full text of Section 230 here and the full Communications Decency Act here (scroll down to Title V of the Telecommunications Act).

The Good Samaritan Provision

Part (c) of Section 230 is the “Good Samaritan” provision, which in part protects Internet providers and users from liability for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

What this means is that online publishing platforms, like Facebook but by no means limited to Facebook, are permitted to regulate user content. They are fully permitted, under current law, to edit, remove, flag, or otherwise alter user content, so long as they do so in good faith. Whether that’s good or bad depends on your viewpoint. It’s definitely not so great from the point of view of content moderators, whose job it is to sift through enormous volumes of content to find and remove what’s illegal or objectionable; a group of Facebook moderators won a $52 million settlement in May 2020 over the psychological toll of that work.

In any case, it’s fair to say that while Section 230 paved the way for the internet as we know it, it’s doubtful that legislators envisioned how deeply social media would shape our lives almost 25 years later.



Reforms and Modifications to Section 230

The reason that Section 230 is in the news lately is a political one, to be sure. But it’s a disservice to its intent, and to the world it helped create, to keep it a political issue. Think for a minute about all the ways the internet has evolved since 1996 … and how the social media of today can give rise to unintended consequences.

Let’s look at algorithms. It becomes problematic when a platform’s algorithms promote and amplify certain types of content that may be misleading or altogether false. Hosting content is one thing; promoting it is another.

The type of content matters, too; social media sites collect millions of dollars in advertising revenue, but there’s currently no real regulation of what those ads can say, and ad content doesn’t necessarily have to be truthful. It’s also well documented that social media ads can deliver malware or originate from bad actors.

This post lays out some possible reforms to Section 230, along with the case for entirely new legislation. To summarize its main points:

  • Holding platforms more accountable for their advertising, or monetized content. For example, “…if YouTube was liable as a distributor for content posted by business partners it pays hundreds of thousands of dollars to, it might have the incentive to figure out just who it is doing business with.”

  • More control over the ads themselves.

  • “Duty of care” for amplified content, because “…when platforms amplify content, instead of merely hosting it, the harm that damaging content can do is increased.” As the article points out, where would you draw the line between hosted and amplified content?

  • A more robust notice-and-takedown provision, providing a means of reporting potentially harmful content. This already exists to some degree, but one can argue it is not especially effective in practice.

  • More clarity regarding platforms’ use of tools – like ad targeting – that could rely on discriminatory demographic information.

  • More guidelines for safe, accessible platform design, as in avoiding a “defective platform … with a pattern of harms…”

  • Differentiating liability between different types of platforms, since Twitter and a Google search query treat content very differently.

We would also be remiss if we didn’t mention the set of reforms the DOJ put forward after convening a social media summit of sorts in February 2020. The DOJ’s four areas of targeted reform are as follows.

  1. Incentivize platforms to address illicit content.

  2. Clarify the federal government’s role in enforcing the law against what it considers unlawful content.

  3. Promote competition.

  4. Promote open discourse and transparency.

In the first item, the DOJ is calling for removing Section 230 immunity from platforms that knowingly distribute, host, or solicit illegal content. It also aims to allow victims to pursue civil damages in cases of terrorism, cyberstalking, or child abuse. The second item would give the federal government more power to regulate online platforms, and the third aims to prevent online platforms from using Section 230 as immunity in antitrust cases. The fourth item is really about clarifying definitions of existing language in Section 230, like what constitutes “objectionable” content or “good faith.” The last bit of #4 aims to do away with the “moderator’s dilemma,” a scenario in which platforms escape liability only as long as they don’t know the content is there, yet risk liability as soon as they try to moderate.

There are decent elements in the DOJ’s proposal, and also parts that don’t seem to make a whole lot of sense from a practical perspective. Read for yourself this viewpoint over at TechDirt, which calls the DOJ proposal “preposterous,” and this more measured report of the proposal on sociable.co.

The most interesting reading on the DOJ proposal isn’t the proposal at all; it’s the participant submissions after the February summit. If you want to get lost in a rabbit hole of really interesting, really knowledgeable viewpoints on the good and not so good elements of Section 230, this is what you want to read.

Looking Ahead …

One thing has become clear: between the DOJ’s summit, growing case law both for and against Section 230, and the President’s Executive Order signed on May 28, 2020, this is far from over. There are those who believe Section 230 should be left alone entirely, and those who want to abolish it and start over with something new. Under the May 28 EO, federal agencies are supposed to report by June 28 how many advertising dollars they spend on each social media platform, and by July 28 a petition is to be filed with the FCC to clarify and/or propose changes to the regulations. Experts say the President’s EO won’t hold up in court, so there’s that, too.

For our part, we hope that a more moderate, nuanced approach is the eventual solution, and that cooler heads prevail when it comes time to write a new law and/or modify the existing one. The internet should not be a partisan issue, and the more we as internet and social media users understand how things like Section 230 work, the more informed we can be when it comes to our own content and publishing strategies.

We’ll dive deeper into legal trouble spots of using social media in the not-so-distant future. Until then, reach out if you have questions about Section 230 (we’ll do our best to answer, or direct you to the appropriate sources) or your business’s social media presence.

© 2020

Seventh Gear Communications LLC

350 Towne Square Way

#97867

Pittsburgh, PA  15227

412-228-0698

info@seventhgearcommunications.com
