1. 39

  2. 24

    Can we have a snarky version, to replace n-gate.com?

    1. 36

      I tried few-shot prompting GPT-3 by copying 4–5 HN headlines and their n-gate summaries into the prompt (no fine-tuning, just in-context examples). Then I pasted some headlines from HN’s front page today:


      Microsoft Windows is prohibited at Gitlab

      Gitlab (business model: “Uber for Git”) has decided that it will no longer support Microsoft Windows (business model: “Uber for proprietary software”). Hackernews, who have all been using Macs for the past ten years and have never even heard of Windows, are shocked and appalled that anyone would use such a terrible operating system. They spend the next few hours patting themselves on the back for making the right choice, and then get into a heated argument over which text editor is the best.


      Bringing back the golden days of Bell Labs

      A New York Times article (business model: “Uber for articles about the good old days”) about an attempt to recreate Bell Labs (business model: “Uber for the telephone”) in Silicon Valley. Hackernews, most of whom are too young to remember the good old days, bemoan the fact that everything was better back then and that the world is going to shit. A few Hackernews try to explain to the others that the world has always been going to shit, and that things were never as good as they seem in retrospect.


      Alex Jones must pay $4m in damages for Sandy Hook hoax claims

      Alex Jones (business model: “Uber for bullshit”) has been ordered to pay $4 million in damages for claiming that the Sandy Hook massacre was a hoax. Hackernews, who have never heard of Alex Jones or Sandy Hook, spend the next few hours trying to figure out what a hoax is.


      Is DALL-E 2 ‘gluing things together’ without understanding their relationships?

      OpenAI (business model: “Uber for AI”) releases a new version of its DALL-E image-generation algorithm, which is able to generate images from textual descriptions. Hackernews, who are not sure what an algorithm is, spend the next few hours trying to figure out what an image is.
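      For anyone wanting to reproduce this: the few-shot setup is just example pairs pasted ahead of the new headline in a single prompt. A minimal sketch, assuming the OpenAI Python library and placeholder example pairs (not the exact headlines/summaries used above):

      ```python
      # Few-shot prompting sketch: paste (headline, summary) example pairs
      # into the prompt, then append the new headline to be summarized.
      # The example texts below are placeholders, not the real ones.
      examples = [
          ("Example headline 1", "Example n-gate-style summary 1"),
          ("Example headline 2", "Example n-gate-style summary 2"),
      ]

      def build_prompt(headline: str) -> str:
          parts = []
          for h, s in examples:
              parts.append(f"{h}\n\n{s}\n\n")
          parts.append(f"{headline}\n\n")  # model continues with a summary
          return "".join(parts)

      prompt = build_prompt("Microsoft Windows is prohibited at Gitlab")

      # Then send it to the completions endpoint, e.g. (uncomment to run):
      # import openai
      # resp = openai.Completion.create(
      #     model="text-davinci-002", prompt=prompt, max_tokens=256)
      ```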

      1. 20

        A New York Times article (business model: “Uber for articles about the good old days”)

        Hackernews, most of whom are too young to remember the good old days, bemoan the fact that everything was better back then and that the world is going to shit.

        Alex Jones (business model: “Uber for bullshit”)

        Okay, I think it’s safe to say that GPT-3 just passed the Turing test with flying colours :-D

        1. 9

          This is pretty much perfect!

          1. 5

            This would be the only weekly newsletter I would subscribe to, awesome.

            1. 1

              These are so good that I’m almost tempted to ask for evidence that you actually ran these through GPT-3 instead of writing them yourself.

              I wonder how far we are from a Netflix comedy special that’s entirely performed live by an AI…

              1. 2

                I wonder how far we are from a Netflix comedy special that’s entirely performed live by an AI…

                We’re already there!

                Seriously though, these systems seem pretty nifty for brainstorming, but the extreme cost of re-training makes it difficult to generate something topical. Even OpenAI’s newest, biggest, most expensive text model (text-davinci-002) is running a year behind on training data.

            2. 2

              I tried replacing the GPT-3 prompt with:

              "Write a snarky one-sentence summary of the following article: %s\n\n"
              

              Here is the result. Not particularly snarky. Maybe GPT-3 needs to be shown an example of the tone in order to maintain it.
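              (For the curious: the `%s` in that template gets filled with the article text via ordinary printf-style string formatting before the prompt is sent. A tiny sketch with a placeholder article:)

              ```python
              # The %s placeholder is substituted with the article text
              # using Python's printf-style formatting.
              template = ("Write a snarky one-sentence summary "
                          "of the following article: %s\n\n")
              article = "Some article text here."  # placeholder
              prompt = template % article
              ```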

              1. 10

                I tried “Write a dark-humorous one-sentence summary of the following article:\n%s\n\n” in the GPT-3 Playground (using text-davinci-002) on the first few lines of https://github.com/hackergrrl/art-of-readme, and I got this:

                A dark-humorous one-sentence summary of the article would be: “The README file is a long-standing tradition of developers trying to be helpful, but really just annoying users.”

                1. 2

                  Please, @nathell, provide differently toned variants like this!

            3. 13

              https://danieljanus.pl/autosummarized-hn/previously/2022-08-02T08%3A00%3A25.227279Z.html

              “The article is informing the reader that they need to have JavaScript enabled in order to use twitter.com.”

              At least it made an attempt to summarize it?

              1. 1

                How would it do at summarizing the HN comments as well as the articles? (I guess the challenge there is that there are no comments to summarize when something is first posted.)

                1. 4

                  I’m not sure summarizing the comments would be particularly necessary. At least if you’re familiar with Hacker News. ;)

                  1. 3

                    Auto-collapsing would be interesting, though! Like take all the posts about Microsoft or Apple, identify the off-topic vote-bait tropes that always appear on those threads, and have an extension(?) to auto-collapse them.

                    Maybe not a GPT-3 thing (I don’t really understand what GPT-3 is TBH…is it like a super-advanced Markov chain for writing text?)