I am aware of anecdotes, which I believe, about humans arguing with Markov chains. The humans in question were trying to push political positions, and not exactly engaging with the responses in good faith to begin with, so really, both sides deserved each other…
I’m undecided on whether it’s a problem at all. If it works out like that, it could even be a feature.
It’s a nice piece of writing from a fiction author, but subjective reputation measurement solves it pretty easily.
Could you elaborate on what you see working? Reducing the cost of building a positively-reviewed sockpuppet seems like a problem for any system, but I don’t know what you’re thinking of.
A subjective trust network:
Your trust network allows you to take a subjective reading of how much you trust any node, according to the ideas above. A trollbot network could contain a huge number of trolls, but unless you or a friend accidentally trusts a troll, there is no way for the trollbots to artificially inflate their trust rating with you. If a friend is careless with their outgoing trust and at risk of granting it to a trollnet, you limit their ability to damage your subjective view by lowering the weight you assign to them.
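The idea above can be sketched in code. This is a minimal illustration, not a real protocol: it assumes a max-product attenuation model (my trust in a node is the best product of edge weights along any path from me), and the graph, node names, and weights are all hypothetical.

```python
import heapq

def subjective_trust(graph, me):
    """Compute my subjective trust in every reachable node.

    graph: dict mapping node -> {neighbor: trust weight in [0, 1]}.
    Trust in a node is the highest product of edge weights along any
    path from `me` -- one simple choice of attenuation model.
    """
    best = {me: 1.0}
    # Max-product path search: Dijkstra's algorithm, but multiplying
    # weights instead of summing distances.
    heap = [(-1.0, me)]
    while heap:
        neg_score, node = heapq.heappop(heap)
        score = -neg_score
        if score < best.get(node, 0.0):
            continue  # stale entry
        for neighbor, weight in graph.get(node, {}).items():
            candidate = score * weight
            if candidate > best.get(neighbor, 0.0):
                best[neighbor] = candidate
                heapq.heappush(heap, (-candidate, neighbor))
    return best

# A troll ring trusting itself fully cannot inflate its rating with me:
# my trust in any troll is capped by the weight of the edge through
# which my network first reaches the ring.
graph = {
    "me":       {"friend": 0.9, "careless": 0.8},
    "friend":   {"expert": 0.9},
    "careless": {"troll1": 0.5},   # the one risky outgoing edge
    "troll1":   {"troll2": 1.0},   # trolls trust each other fully
    "troll2":   {"troll1": 1.0},
}
trust = subjective_trust(graph, "me")
# trust["expert"] is 0.81; trust["troll1"] and trust["troll2"] are
# capped at 0.8 * 0.5 = 0.4, no matter how large the troll ring is.
```

Lowering my weight on "careless" lowers my trust in every troll it reaches, which is the control the comment describes.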
And what about people who befriend trolls?
“unless you or your friend accidentally trust a troll”
Ah, I think this is the crux of it. The point of the article is that these bots are becoming indistinguishable from humans by repurposing existing content, and they’re especially good at it where bandwidth is really limited, like Twitter and communities where brief comments are the norm (Instagram, many blog comments, etc). The environment isn’t helped by humans increasingly automating their own involvement. If the bots are cheap to operate (and I expect their per-bot cost scales below O(log n), because most of the computation is in collection and analysis that can be shared between bots), someone could start up thousands and run them for years before switching them into attack mode.
Yeah, some friends and I have kicked around this exact idea for a couple of years now.
There’s very little that would prevent it from working, and in fact it’s decently likely that something like it is already going on online today.
There may well come a point where any online discourse is just assumed to be with bots. :(
This could end social media as we know it.
Poison the well enough and people will leave.
Tangentially related: https://stratechery.com/2016/the-real-problem-with-facebook-and-the-news/