What he reads as a benefit of commerce, some of us read the same way we read a warning about government power.
Conservatives warn against voting for benefits from the government because any government powerful enough to give you those benefits is also powerful enough to take them away. (And in more partisan terms, it's a power-building technique in that one political party can—exceeding the purposes of government—give things and then threaten that another political party will take them away.)
The same warning can apply to commerce. Anyone who does not own their own property, and who depends on access to the property of others, is susceptible to the owner withdrawing that access.
Right now we're seeing this play out on a large scale with the Section 230 debate.
Section 230 was originally passed to protect companies that hosted user-generated content. They would not be held responsible for the content of others, and they would also not be held responsible if they determined content to be harmful and removed it.
As the internet has matured, its power has grown, and the biases of those in commercial power have become known, some unforeseen consequences of this law have unfolded.
Both conservatives and liberals freely post content. While both sides also have content removed, of late this has been more noticeable and keenly felt by the right. The most prominent example, of course, is President Trump himself.
The President has raised concerns about voter fraud. The left finds this harmful, and its tech company allies slap a label on those concerns to directly and immediately challenge them. Stricter measures can be deployed on those for whom discipline would be less noticeable.
Tech companies are acting well within their rights under Section 230. They are a forum, but as tech policy freedom organizations point out, they're not a forum that is legally bound to honor free speech.
The proposals offered so far to remedy this situation have little teeth. One mostly talks about requiring the companies to enforce their own terms of service. That would mean tech company lawyers add a paragraph or two to the terms of service stating that the company retains the right to remove content it deems harmful, and the companies can proceed exactly as they do now.
There is a possible nuance that is easier to see by hypothetically narrowing the focus of a tech company platform.
Years ago I noticed that of the many breakfast foods I like, a lot of them are yellow or close to that color. Let's say I create a platform where anyone could start their own discussion about all the different types of yellow breakfast foods. I establish at the outset that all forums shall stick to this intended topic, and I may remove unrelated content or threads. If someone comes along and starts talking about red lunch foods, I'm going to remove that content, maybe even if I agree, because it corrupts the integrity of my yellow breakfast food discussion service.
In this hypothetical, I've defined the scope of the discussion at the outset. Social media companies have not. They put their products out there as general-purpose forums first. Even though they all have rules for users now, none of those rules define what is allowed, only what is not allowed.
The issue with social media companies has now crossed into new territory of how they define harmful content to remove or otherwise thwart. Are they removing speech about actual harm? An actual threat of harm? An idea of harm? Asking questions about voter fraud doesn't harm anyone, and yet social media companies are acting as if it does. “This claim about election fraud is disputed,” Twitter repeatedly declares.
If Twitter wanted to be consistent, they'd put a label like that on every Tweet from every user ever: “This claim about [controversial issue] is disputed.” That's the nature of free speech. That's the very point of protecting free speech. The whole reason to have Twitter, social media, and a forum for free speech is to dispute ideas.
Reform of Section 230 may be most effectively modeled on the Civil Rights Act of 1964.
Anyone is free to be selective with both what and whom they serve, if they are a private-serving institution/business. If they are a public-serving institution/business, they may only be selective in what they serve, not in whom they serve. A public restaurant may limit the choices they offer based on the color of the foods, but not based on the color of the customer's skin.
If a social media company is not going to define what it, its users, and its users' discussions shall be known for, then it should be known for not having limits. As these companies' power, influence, and options for control have grown, that is no longer the case, which is why many are considering reform.
As of now, until such time as reform is agreed upon, access to a forum someone else owns isn't necessarily superior.