At a social media company, I would start with clear terms of service. See yesterday’s post.
I would have two types of user registration. One is “true identity” and the other is “anonymous.” I would require each anonymous user to pay a $10 non-refundable registration fee, because I want to limit the number of anonymous accounts. Both types of registration would be required to abide by the terms of service. Content by anonymous users would be labeled as such.
Human moderators would be the heart of the system. To economize on these resources:
All content would be run through an AI system that would assign a score to the content, with 0 for “apparently totally safe” up to 100 for “apparently in clear violation of the terms of service.” When the score is above a certain level, say 50, the content would be referred to a human moderator for auditing.
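To make the triage rule concrete, here is a minimal sketch in Python; score_content() is a hypothetical stand-in for the AI scorer, which is assumed rather than shown:

    # Sketch of the triage rule above; the threshold follows the text, the names are hypothetical.
    AUDIT_THRESHOLD = 50

    def score_content(content: str) -> int:
        """Placeholder for the AI model's 0-100 risk score."""
        raise NotImplementedError  # the real model is assumed, not shown

    def needs_human_audit(content: str) -> bool:
        """Refer the item to a human moderator when the score reaches the threshold."""
        return score_content(content) >= AUDIT_THRESHOLD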
The scoring system would be updated regularly based on trends in the results of human audits. But I would not trust the AI system entirely. I would add random samples of the following types:
–regular random samples of content uploaded by users who have many followers.
–a tiny but purely random sample of all content uploaded. The point of this sample would be to make sure that humans agree when the AI system assigns low scores. If a human audit sees a randomly chosen item as being close to a violation of terms of service, this is a sign that the AI algorithm needs to be improved. Most users would never have any content picked up by the purely random sample.
All samples would be sent to the human employees for moderation.
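As a rough illustration of the two extra samples, in Python; the rates and the follower cutoff below are placeholder numbers for the sketch, not proposals:

    import random

    HIGH_FOLLOWER_CUTOFF = 100_000  # placeholder definition of "many followers"
    HIGH_FOLLOWER_RATE = 0.05       # placeholder regular-sample rate
    PURE_RANDOM_RATE = 0.0001       # placeholder tiny calibration-sample rate

    def sampled_for_audit(follower_count: int) -> bool:
        """True if an item goes to a human regardless of its AI score."""
        if follower_count >= HIGH_FOLLOWER_CUTOFF and random.random() < HIGH_FOLLOWER_RATE:
            return True
        # The purely random sample checks that humans agree with low AI scores.
        return random.random() < PURE_RANDOM_RATE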
When humans find content that violates terms of service or “comes close,” this would trigger various actions.
–If it definitely violates the terms of service, the user would be asked to remove the content within 24 hours. The user would be informed of the specific way in which the content violates the terms of service.
–The user would be put on a “watch list” that would involve increasing the rate at which that user’s content is sampled for auditing by humans. One way to do this is to lower the threshold for triggering an audit. Suppose that for ordinary content the trigger for an audit is a score of 50 or higher. For someone on the watch list, the content might be audited if the score is 20 or higher.
–The user’s network of followed and follower accounts would also have content sampled at an increased rate for human auditing, say when the score is 40 or higher (see the sketch below).
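Putting the three thresholds together, a minimal sketch in Python; the User fields are hypothetical names, while the 50/20/40 numbers come from the rules above:

    from dataclasses import dataclass

    @dataclass
    class User:
        on_watch_list: bool = False
        near_watch_list: bool = False  # follows, or is followed by, a watched user

    def audit_threshold(user: User) -> int:
        """An AI score at or above this value triggers a human audit."""
        if user.on_watch_list:
            return 20
        if user.near_watch_list:
            return 40
        return 50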
Users who repeatedly violate terms of service would be given a warning. Those who fail to heed a warning would have their accounts terminated.
Maybe I’m just an electronic payments idiot, but couldn’t your identity be determined through the electronic payment? If not by users, then at least by the service provider.
Also, the current human review of content has come under a lot of criticism (see Project Veritas) for bias. The service should ensure that the human reviewers are actually unbiased, and also that the humans in the AI feedback loop do not lead to bias in the AI.
Another possibility is a clear process of contesting warnings, bans, throttling, and other penalties, with adjudication by a neutral party.
Another option is to pass legislation allowing for lawsuits and damages, particularly when a ban is found to impact a business or other significant interest of the user.
“but couldn’t your identity be determined through the electronic payment?”
Good point. Require all users, including true identity users, to post credit card data so that:
1. Their identities can be confirmed
2. They can be fined per the terms of the contract for failure to adhere to the rules.
Not necessarily. A prepaid card could be used.
Any limitations on banning would run afoul of the First Amendment.
Only if those limitations are imposed by government.
I’m not a lawyer, but it seems that all sorts of government restrictions exist on the types of contracts one can consummate. Have the correct political party enact it, and it will pass judicial muster.
For anonymous accounts, you might include not just a nominal fee, but also a bit of a lag — if you sign up for your account today, you can start posting next week. Maybe you can start reading right away, but someone who gets kicked off (and hasn’t already registered an extra anonymous account) has a bit of a speed bump.
I strongly endorse explaining why content is considered a violation.
Appeals and training the moderators seem like something worth thinking about, too. I don’t have a lot of my own ideas for this, but I think having some posts evaluated by multiple moderators would be a good thing, with a manager notified of content that two moderators give wildly different scores to: partly to produce a conclusive evaluation of that content, but mostly to evaluate and re-train moderators to ensure consistency. The manager might well see a disagreement, understand why each moderator assessed it the way they did, and include it in the moderator training. Okay, maybe I do have some ideas of my own; they just aren’t ideas I have spent a long time thinking about.
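To make that concrete, a small sketch in Python; the 30-point gap standing in for “wildly different” is just an assumption:

    DISAGREEMENT_GAP = 30  # assumed definition of "wildly different" scores

    def route_double_review(score_a: int, score_b: int) -> str:
        """Route an item scored 0-100 by two moderators."""
        if abs(score_a - score_b) >= DISAGREEMENT_GAP:
            return "escalate_to_manager"  # conclusive ruling plus training input
        return "accept_consensus"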
“I want to limit the number of anonymous accounts.”
Your best commenter, based on the number of comments you highlight as standalone posts, is an anonymous account.
Not all anonymous accounts are created equal. Some are high quality; many are just troll bots. The problem isn’t anonymity, it’s the ease and scale with which they can generate trash.
In this era of “the decentralized religion that persecutes heretics,” the better, more interesting accounts are disproportionately anons.
I’ve seen multiple Twitter blue-checks say they get more hysterical abuse from named accounts than from anons, which makes sense when you realize that the holiness spiral is driven by status-seeking.
I did not say abolish anonymous accounts. I just don’t want it to be costless to set up hundreds of phony accounts.
This can be dealt with by requiring anonymous users to privately give their identity under the (perhaps legally binding) assurance that it won’t be shared. And/or you could have them agree to terms whereby you can publish their identity if they violate the rules and refuse to remove the offending content.
I’m not convinced penalizing anonymity itself is necessary or ultimately good. Doing so will get rid of a few nutty commenters, but it’ll get rid of many thoughtful commenters who have every reason to conceal their identities, and the effect of discouraging anonymity would not necessarily be to make conversations more moderate or civil, since one can freely be as immoderate and uncivil as one pleases without fear of public reprisal as long as one does so from a certain vantage point. IOW, you may end up with more Paul Krugmans and fewer Charles Murrays in your forum.
I was thinking about this too. I know I personally would be very hesitant to spend money on an anonymous account just to post. Hell, Disqus and its annoyances are often enough for me to decide posting isn’t worth it. Granted, I am way behind the curve on online payments, and a bit cheap, so I might not be indicative of most people.
If I were still in academia I would definitely never use my real name for anything remotely sensitive, and so would probably never comment at all, given the price. Even outside academia, I just don’t talk much.
So, what’s changed and why?
We had a fairly stable equilibrium from 1995 through about 2018.
Not implying that it was perfect by any means, but for sure preferable to where we are headed.
For one thing, Moore’s law suggests that troll bots are many times better now than they were in 2016, as is the back-end software that improves your social media “engagement” by putting content in your feed that will get a reaction out of you.
[We had a fairly stable equilibrium from 1995 through about 2018.]
Erm, no. Conditions have been worsening steadily over the past few decades. Most people simply didn’t perceive the gradual change.
I have spoken to many family members, friends, and associates about the gradual changes, urging them to be aware of them and ideally to speak out.
If only. The changes started a long time ago and have been going on under most people’s “perception” radar.
Been on the www since 1994. Comment sections have always been lackluster (this blog absolutely excepted of course), but very much pro free speech. As far as I can tell, the only thing that’s changed is a drive to marginalize alternative points of view in line with “safe spaces,” “micro aggressions” and related anti-free speech themes from the various college campuses.
That sounds like a reasonable approach.
As someone deeply concerned about the increase in Chinese Communist Party influence throughout the USA and the ongoing war on constitutional liberties, if I were in charge of content management I would not distinguish between anonymous and non-anonymous accounts; instead, I would designate content as either CCP-affiliated or non-CCP-affiliated.
I would also encourage non-CCP-affiliated individuals to identify themselves as such. I have begun using the following tag so that readers know I am not a journalist, academic, or functionary in a DC think tank or tech tyranny business:
*A non-CCP affiliated comment. The author of this comment attests that he is not affiliated with nor has he accepted anything of value from the Chinese Communist Party or its affiliates or from anyone who has or their affiliates.
Rather than charge CCP-affiliated users more, I would offer discounts to individuals who use Indian-made cell phones or a Kobo rather than a Kindle.
Why is it necessary to police anyone’s speech?
If some people are offended, give those people the software tools to filter out anything or anyone they might find objectionable.
+1
For the US, using existing rules for prohibited speech is the best idea.
I’d add these:
1) No total bans. Rate-limit progressively (6/d, 3/d, 1/d, 2/w, 1/w, 2/m… 1/y; sketched in code after point 2). There is no reason moderation needs to be binary, which only creates thresholds and pain.
2) Consider the tolerance of the user. I might not want to see any posts with the word ‘abortion’ in them. You might want to hide posts from users who post only about election fraud. This should all be extremely easy for big tech, who already know whether you wear boxers or briefs, and in what color.
Big tech seems to like the power of ‘bans’. The greater public would prefer to hit a button and get a ‘Democratic Party Mix’ or a ‘Wall Street Journal Filter’, with some transparency: show that some things are hidden, and offer a ‘click here to expand your bubble’ option.
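A sketch of the ladder from point 1 as data, in Python; the “…” above elides some rungs, so this list is deliberately partial:

    from datetime import timedelta

    # (posts allowed, per period); each confirmed violation moves a user one
    # rung down instead of banning them. Rungs elided by the "..." above are
    # left out here too.
    RATE_LADDER = [
        (6, timedelta(days=1)),    # 6/day
        (3, timedelta(days=1)),    # 3/day
        (1, timedelta(days=1)),    # 1/day
        (2, timedelta(weeks=1)),   # 2/week
        (1, timedelta(weeks=1)),   # 1/week
        (2, timedelta(days=30)),   # 2/month (a month approximated as 30 days)
        (1, timedelta(days=365)),  # 1/year
    ]

    def posting_limit(violations: int) -> tuple:
        """Map a user's confirmed-violation count to a posting allowance."""
        return RATE_LADDER[min(violations, len(RATE_LADDER) - 1)]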
+1
But the left wants to silence the MAGA/Q people and can do so via corporate power. Letting progressives mute MAGA/Q people doesn’t really do anything for them (I’m sure they rarely if ever see MAGA/Q content already as people can already block/unfollow).
How do we get from here to there?
Progressives believe that the only reason anyone disagrees with them (besides evil) is that they are “misinformed”. Thus, if they can ban “misinformation”, then everyone will be a progressive.
It’s not unlike how Communists thought everyone would be a communist if they could only understand the capital T Truth.
The thing is, Tyler linked an interview with bearskin guy and he sounded like many a person I used to see at libertarian meetups or philosophy groups like 15 years ago. I don’t think someone like that really needs the internet or social media to get into conspiracy theories, they found ways a long time ago. They also found ways to go to political rallies.
I have only read about the backgrounds of three people from the riot. Bearskin guy that Tyler linked, the podium guy (that apparently hasn’t voted in a long long time), and the woman who was shot and killed (who voted for Obama and apparently loved him as much as Trump).
The real problem with “misinformation” isn’t people latching onto conspiracy theories (which they’ve always done) but that what should be trusted sources have shredded their credibility by lying constantly to fit their agenda. Wide swaths of otherwise normie people can’t even trust, say, what the CDC says during a pandemic.
This summer I noticed tech companies were banning or otherwise censoring people for presenting accurate (but officially frowned upon) information about the pandemic, leading lots of normie people to just not trust them or their moderation.
https://pbs.twimg.com/media/EmpktL9XcAEHmKc?format=jpg&name=small
I think a lot of people believe that misinformation leads to others taking the wrong view. I certainly do!
I have a friend who was a supporter of BLM and she was deprogrammed by someone telling her to look into the details of the highly publicized cases BLM was protesting.
Also I remember seeing Dave Rubin’s discussion with Larry Elder, and Elder was able to clear out the propaganda in Dave’s head by asking Dave for evidence of systemic racism and then refuting what little he had to offer.
I agree that public health officials, the media and other authorities have only their own poor decisions to blame for the lack of public trust.
+1
I think there is a good tie-in here with Arnold’s idea of content packaging. You could have a marketplace of sorts for filters.
There really has to be a better answer than discussion only happening in the IDW, which apparently isn’t even safe from its hosting services these days.
One of many reasons to stick to the well-established content rules of exceptions to the First Amendment is that it is 100% foreseeable that if the state wants to censor some content and is legally prohibited from doing so, it will find it quite easy to launder such state action through highly regulated or highly government-contract-dependent “private” parties and lean on them to boycott the speakers, achieving what are effectively the same outcomes.
What about in the ‘hypothetical’ case where the both the leadership and rank-and-file of the dominant Internet firms were strongly allied politically with the party and administration in power (and employees moved back and forth between positions with these firms and positions in government) such that “who’ll rid me of that troublesome priest” messages aren’t even necessary?
OT thought of the day: groups identified as right-wing or Trump backers use Bitcoin to transfer money. Left-wingers outlaw Bitcoin.
Arnold, wow! The barbarians are battering down your door to dictate the terms of your grandchildren’s lives, and you are concerned about dictating the terms of service for the barbarians’ social media!
BTW, I hope that in your high school econ class you read D.H. Robertson’s “What Does the Economist Economize?” (1954).
Lol. Spit my coffee out whilst reading this. Thank you.
Off topic request if you have time: planning a family trip to Central or South America during 2021. Where should we go and when? Any insights on Costa Rica? Traveling with a 7yo and flying from North Texas.
Sorry, Hans. Right now it’s impossible to predict what places you can visit in most South American countries (I don’t know about Central America). Although some locals are vacationing in the old places, there are too many restrictions that are “fine-tuned” daily (here in Chile, if you live in Santiago, you need a special permit to go to your summer residence, and you can go to a hotel but only in some places; also, foreign tourists have to spend at least 10 days in quarantine).
Thank you! I guess we are sticking to Florida then.
#DeSantis 2024
It seems like Amazon had such an arrangement, minus anonymous accounts, with their AWS customers. The court filing below outlines the examples and the steps they took before booting Parler. But this is a monetary relationship with a long legal history of contract law. We haven’t had enough history with Facebook or Twitter to establish such norms and rules.
https://www.courtlistener.com/recap/gov.uscourts.wawd.294664/gov.uscourts.wawd.294664.10.0_1.pdf
Thanks for providing the link. As I skim the legal brief, its description of what Amazon did with the reports it received about bad content on Parler is vague. Did they forward the information as an FYI, or did they forward it with a warning that the content violated Amazon’s AUP?
Absent more information, one could argue that Amazon would have been happy to continue hosting Parler, abusive rhetoric and all, if the riot had not taken place.
Given the way discourse usually runs on the Internet, I would rate the chances that Parler is the only AWS-hosted site with abusive rhetoric as nil.
If I were Amazon, I would not be in the business of policing content. I would put all of the legal responsibility on the site being hosted, and not take any of it on myself.
Of course, I don’t know what the law says. If the law makes it impossible for hosting services to avoid responsibility for content on the sites that they host, then this is a significant burden on the web hosting business model.
Sorry, Arnold. You’re saying you know neither the facts needed to understand the conflict nor the laws needed to adjudicate it. Yes, it’s costly to know all that for any particular judicial case. That’s the problem with 99% of people’s opinions on judicial cases, and that’s why I refrain from giving opinions on them. It’s my impression, however, that Amazon, other social media companies, and the press have been applying their own rules “very differently” depending on the client’s political affiliation, or, if you prefer, ignoring their own rules to serve their comrades. It’s not a double standard; it’s no standard at all.
The conflict can be resolved in the courts only by destroying the judiciary, a “very conservative” institution (exactly what the barbarians want). If the courts ruled in favor of the anti-barbarians, the barbarians would not enforce the ruling. If they ruled in favor of the barbarians (as it appears to be happening), then the barbarians would continue extorting the judges (the same way they are extorting bureaucrats, including university professors).
It will be resolved by a political negotiation before or after “a big social outbreak”, and I bet it will be after it.