1. 12
  1. 0

    models still have many deficiencies: they can generate false information, propagate social stereotypes, and produce toxic language

    Honestly I don’t really understand what the issue is with an AI generating !woke language. What if I want it to help me write the dialogue of a 17th-century slave owner? Why are they imposing such limitations? It seems silly. Who will it hurt?

    1. 6

      Minority groups, probably. The issue is not that the model is capable of generating such text, but that humans use the text it generates. If humans didn’t enjoy spreading false information and ignorant stereotypes, there would be little need to mitigate the generation of that information.

      1. 6

        There’s clearly a distinction between “17th-century slave owner” dialogue and the more general “toxic language”. It’s odd that you treat the opposite of “toxic” as “woke”, rather than something more like “polite”.

        1. 3

          Models like these can be used to strengthen nationalist/fascist movements through spam, and to manufacture consent for harmful policy.

          cf. Hindu nationalist movements in India.