This is not technical content. It’s “culture” only in the broadest sense of “there’s a culture of people on the internet with whom I disagree”–i.e., not culture worth discussing here at all.
If the article talked about (as one did, a few days ago) automatically detecting patterns of trolling and abuse, say by way of referral origin, it would be interesting. If it talked about implementing an automatic anti-troll bot, and how to tune it to kitten things properly, it would also be interesting. Instead, the article is basically gloating about admin abuse on a blog. C'mon.
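For what it’s worth, the referral-origin idea the comment wishes the article had covered can be sketched in a few lines. This is purely illustrative: the domain list, function name, and the decision to match on the HTTP Referer host are all assumptions, not anything from the article.

```python
# Hypothetical sketch: flag incoming posts whose HTTP Referer points
# at a known raid origin. RAID_ORIGINS is an invented example list.
from urllib.parse import urlparse

RAID_ORIGINS = {"example-raid-forum.com", "brigade.example.org"}  # assumed

def is_suspect_referral(referer_header: str) -> bool:
    """Return True if the request appears to come from a known raid origin."""
    if not referer_header:
        return False
    host = urlparse(referer_header).netloc.lower()
    # Strip any port and a leading "www." before comparing.
    host = host.split(":")[0].removeprefix("www.")
    return host in RAID_ORIGINS
```

In practice a real system would combine this with rate limits and content signals, since Referer headers are trivially spoofed or stripped.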
I’d think shadowbanning, hellbanning, or just silently deleting their posts would be more effective. Putting them on a pedestal of any sort is a bad idea.
I liked the deviousness of changing their words - losing control is disempowering - a useful tactic against trolling, in my view.
I put this in the same category of evil as shadowbanning/hellbanning. If anything more so. Changing someone’s words is extremely easy to abuse. Imagine if a newspaper did this with a letter to the editor.
It’s an interesting idea. As the author notes, it’s for a very specific type of person.
[Comment from banned user removed]
Shadowbanning basically forces them to be ignored, in case someone in your community takes the bait too easily.
[Comment from banned user removed]
Isn’t there a contradiction here? Either our words have power, and it’s important to let words be free to support opposing viewpoints, so that we don’t allow for echo chambers, or words don’t have power, so it’s easy (or at least possible) to ignore words you don’t like.
I think words have power, so it’s more coherent to accept that and also accept that it’s valuable to be able to protect ourselves and others from those words. Away from the keyboard, we have other protections, because people have persistent identities, so if someone tries to be a bad actor in a community, it’s easy to punish them. This is harder on the internet, where people’s identities are often transient, or difficult to track across communities–people also have the option to use throwaway identities for abuse.
When we don’t have natural controls for those kinds of people, it’s valuable to remove them from your community. Bars have the same kind of problem, which is why they have bouncers. Mods are sort of like the bouncers of online communities, and shadowbanning, kittening, and banning, are ways to remove identities or speech you don’t want from your community.
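To make the shadowbanning mechanism concrete: the defining trick is that a banned user’s posts stay visible to the author but are hidden from everyone else, so the ban isn’t obvious. A minimal sketch, with an invented data model and ban list (nothing here is from any real forum’s code):

```python
# Minimal shadowban sketch: filter a thread per viewer so that
# shadowbanned authors see their own posts, but nobody else does.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

shadowbanned = {"troll42"}  # assumed ban list

def visible_posts(posts, viewer):
    """Hide shadowbanned authors from everyone except themselves."""
    return [p for p in posts
            if p.author not in shadowbanned or p.author == viewer]
```

The design point is that the troll keeps posting into the void instead of creating a new account, which is exactly why the comment groups it with kittening as a removal tactic.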
There isn’t any law that says you need to suffer people being assholes in your online community–you don’t even need to suffer assholes in your workplace, in your store, in your restaurant, or in your home. Free speech (at least in America) doesn’t extend to private spaces. So if the purpose of your online community is to provide a space for civilized discussion, and someone is an impediment to that, it makes sense, and indeed, it is probably the best thing for your users to silence them. If folks from one subreddit are “raiding” another subreddit, it is probably in the interest of reddit to silence the raiders.
Words are powerful, and hence they are a privilege. This is fundamental–the echo chamber problem is secondary, and cannot and must not take precedence over making your community safe for the people it’s designed for.
We must have had very different experiences. I have never, ever been part of a community where bans were applied to people simply because their opinion differed.
I have however often seen conflicts sparked by differences of opinion yield behavior that (deservedly) resulted in a ban.