1. 14
  1. 4

    I’m so tired of reading this Jurassic Park meme. Surely there is some other quote, actually spoken by a non-fictional person, for internet writers to pick on. Great irony, though, in using a fictional character’s dialogue to warn about the evils of false information.

    As to the actual content, I think we will see a rise in local media sources. People don’t want content generated out of other content by machines; they want human-produced content so they can relate to human struggles. So what happens when a large conglomerate like Facebook weaponizes it? The Facebook users are harmed, but anyone off Facebook will ideally be unharmed, and Facebook and other large social media hosts and creators will lose credibility.

    As the article states, machine learning is really bad at context, because aggregating content and generating new content from the aggregate strips the context away. With humans, context is always shifting.

    The way I see humanity losing its mind is when we start treating machines as some sort of prophet that predicts the future. Given enough input, anything can guess what might happen next. The insanity will start when society allows a person’s options (legal, daily, etc.) to be limited to those generated by a machine. AKA, when we stop giving the generative output valuable input, or… context.

    1. 2

      For all the bullshit about sentient AI being an existential threat to humanity, this, right here, is how existing AI is actually dangerous.

      Maybe the only bright side of this whole thing is that the discourse around AI dangers is finally getting realistic. Better late than never.

      1. 4

        For all the bullshit about sentient AI being an existential threat to humanity, this, right here, is how existing AI is actually dangerous.

        The Terminator franchise becomes a lot more plausible if you remove the 30-second segment that says SkyNet became self-aware. That various things in the training data set interacted in unexpected and unexplained ways, leading it to decide either that all humans were the enemy or that the most efficient path to victory for the US was to kill all humans, is entirely plausible with current machine learning systems.

        1. 2

          That’s how the world ends, not with a bang, but with a bug.

      2. 2

        I don’t want to draw the entire olog, but the analogy with Jurassic Park does not work.

        Suppose that we are analogizing machine learning to genetic engineering. Language models are the author’s version of synthetic dinosaurs. In the classic story, the dinosaurs achieve intellectual equivalence with their human keepers, rebel during an operational lapse, and take over a local ecosystem. By analogy, we would imagine a time when large language models achieve some sort of agency or sapience, escape from OpenAI (our version of InGen), and permanently occupy… a datacenter? A Web service? A protocol?

        More generally, I think that we should consider Jurassic Park as a rich, well-known, classic example of the Monster in the House film structure. As the link mentions, we need a monster, a house, and a sin. For AI alignment and ethics, the monsters are language models and other transformers; the houses are our current interrelated and complex societies; and our sin is allowing our societies to embrace machine learning.

        But now we can see the general problem with this line of reasoning: society accepts machine-learning results by degrees, and larger advances in technology seem to correspond with ever-more-bitter acceptance by an ever-grumpier society. People seem not to worry that machine learning can solve many small, easy board games, but are deeply moved by successes in popular games like chess, Go, and StarCraft. The sin required for the trope is variable, depending on the opinions of the surrounding society. The monster is in the house, yes, but it’s only a monster because society calls it a monster; and society only does that because the monster’s mere existence interferes with societal expectations.

        1. 1

          I really hope Neal Stephenson’s term, Artificial Inanity, catches on.