Over two decades ago, Section 230 of the Communications Decency Act was created to enable the internet to “offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.”
Since then, technology has advanced, powerful new companies have emerged, and we have already seen unintended consequences of the law, such as the de facto monopoly status granted to a handful of companies that, in turn, abuse its protections. As with many laws, Congress must revisit Section 230 and strip it of the protections that only a handful of companies enjoy.
To understand the changes that are needed to Section 230, we should first revisit its creation. The internet was a relatively new medium when Section 230 was enacted. Congress intended to promote internet technology as Americans increasingly started “relying on interactive media for a variety of political, educational, cultural, and entertainment services.”
The controversial law states that a provider or user shall not be held liable for any action taken voluntarily and in good faith to restrict access to, or limit the availability of, material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable, whether or not such material is constitutionally protected.
A key section of the law distinguishes a provider from a publisher or speaker. Who counts as a publisher, however, is the subject of great controversy. That section states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” For example, if Fox News runs a story that is false, you can sue Fox News. But technology companies such as YouTube, Twitter and Facebook are shielded from lawsuits despite editorial practices such as removing user accounts and posts, mislabeling or suppressing other users’ content and prioritizing their own.
Providers must be just that: providers of a support platform. With the exception of pornography, terrorism or actions specifically prohibited by law, providers should not have the ability to remove or edit others’ content, no matter how competitive, critical or unpleasant it is.
They also must not promote their own content in a way that disadvantages other users. This means an e-commerce marketplace cannot promote its own private-label product in a way that puts another business on its platform at a disadvantage, and social media platforms cannot create artificial trends, mislabel or fact-check posts, promote their own content or impose discriminatory sets of rules on different users.
If a Washington Post writer reports a story and shares the link on social media, platforms must not be able to dispute that content. The writer owns the responsibility for what she writes. As a practical matter, social media companies cannot police all the content created by billions of users every second. It is the job of platform users to decide whether to engage with a piece of content. Selective policing of content will inevitably erode trust in the platforms.
As Congress intended, platform users — not the provider — should be the only ones with the ability to block, filter and choose what they want to see or not see on their screens.
Nearly all social networks host two types of content: content created by the technology company and content created by users. For example, LinkedIn “editors” write news items that they highlight on the homepage. That content is created by LinkedIn, not by a LinkedIn user. Similarly, Facebook and Twitter create, promote, label and editorialize on content. In essence, they act as both a provider and a publisher.
To put this into perspective, imagine you visit Foxnews.com and it allows you to create a news story. You are the user, the content creator; Foxnews.com acts as the provider. However, if Fox News edits your story, it becomes the editor or publisher. Your writing could be suppressed, changed or given a different meaning. Hence, Fox News would be subject to liability.
While creating a new set of rules, Congress should also be mindful of the innovation that comes from small startups. While large providers would receive no Section 230 protection, there should be limited exceptions for small companies. For example, a resource-constrained startup with fewer than 10 million users would not have enough cash or employees to enforce content policies on its platform.
The provider’s role should be limited to supplying the technology platform, not managing the users or the content. Because many social media networks actively create, publish and edit the content appearing on their platforms, they cannot be classified as providers only; hence, the protection should be removed. Instead of allowing platforms to write their own content and operating policies that enable them to impose discriminatory sets of rules, Congress should create universal rules and protections that apply to all companies and their users. This uniformity will benefit the technology industry.