1.
  1.  

  2.

    I’m aware of anecdotes, which I believe, about humans arguing with Markov chains. The humans in question were trying to push political positions and weren’t exactly engaging with the responses in good faith to begin with, so really, both sides deserved each other…

    I’m undecided on whether it’s a problem at all. If it works out like that, it could even be a feature.

    1.

      It’s a nice piece of writing from a fiction author, but subjective reputation measurement solves it pretty easily.

      1.

        Could you elaborate on what you see working? Reducing the cost of building a positively-reviewed sockpuppet seems like a problem for any system, but I don’t know what you’re thinking of.

        1.

          Assumptions:

          • There’s some stream/feed of information (like a twitter / facebook wall) or comment section
          • Anyone can write to the stream to push stuff to you without a previous connection

          A subjective trust network:

          • Each node has many public or private edges that denote a trust rating to other nodes.
          • Trust is weighted exponentially: trusting your best friend might be worth 1,000 or 1,000,000, compared to 1 or 10 for someone you just met.
          • The trust rating between two nodes along a single path is some function of the edge weights at each step, weighting earlier hops more heavily
          • The overall trust rating between two nodes is some function of all paths of reasonable length between them

          Your trust network lets you take a subjective reading of how much you trust any node, according to the ideas above. A trollbot network could have a huge number of trolls, but unless you or a friend accidentally trusts a troll, there is no way for the trollbots to artificially increase their trust rating with you. And if you have a friend who is not careful with their outgoing trust and is at risk of trusting a trollnet, you can contain their ability to pollute your subjective ratings by lowering your weight to them.

          1.

            And what about people who befriend trolls?

            1.

              “unless you or your friend accidentally trust a troll”

              Ah, I think this is the crux of it. The point of the article is that these bots are becoming indistinguishable from humans by repurposing existing content, and they’re especially good at it where bandwidth is really limited, like Twitter and communities where brief comments are the norm (Instagram, many blog comments, etc.). The environment isn’t helped by humans increasingly automating their involvement. If the bots are cheap to operate (and I expect their cost to scale below O(log n), because most of the computation is in collection and analysis that can be shared between bots), someone could start up thousands and run them for years before switching them into attack mode.

        2.

          Yeah, some friends and I have kicked around this exact idea for a couple of years now.

          There’s very little that would prevent it from working, and in fact it’s decently likely that something like it is already going on online today.

          There may well come a point where any online discourse is just assumed to be with bots. :(

          1.

            This could end social media as we know it.

            Poison the well enough and people will leave.