1. 44

  2. 11

    In case anyone thought this was satire, as I did: it’s not. I used the same prompt and I’m fooling around with it right now. The author cherry-picked good examples; often it doesn’t know exactly what kind of output to produce, or just describes the kind of output it should produce. But still, it’s quite good, much better than I would have believed.

    For instance, after `cd /home` and `ls` it says:

    (outputs the contents of the `home` directory, which is unique to each system)
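
    If anyone wants to script the same experiment rather than type into the web UI, something along these lines should work. This is a minimal sketch, not the article’s method: the model name and the exact wording of the “act as a Linux terminal” prompt are my assumptions.

    import os
    from openai import OpenAI  # assumes the openai Python SDK is installed

    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

    # Approximately the article's prompt; exact wording assumed.
    system = ("I want you to act as a Linux terminal. I will type commands and "
              "you will reply with what the terminal should show, inside one "
              "code block, and nothing else. Do not write explanations.")

    messages = [{"role": "system", "content": system}]
    for cmd in ["pwd", "cd /home", "ls"]:
        messages.append({"role": "user", "content": cmd})
        resp = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
        out = resp.choices[0].message.content
        messages.append({"role": "assistant", "content": out})
        print(f"$ {cmd}\n{out}")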
    
    1. 11

      This is incredibly impressive, although I think it’s still worth remembering:

      • The model is not actually creating a virtual machine; it remains a language model, and is only generating plausible responses, not actually executing programs.
      • This blog post is not very critical, and is written by someone who works for DeepMind, which is not OpenAI, but it still seems reasonable to guess that the author is an advocate of AI.
      1. 5

        The model is not actually creating a virtual machine; it remains a language model, and is only generating plausible responses, not actually executing programs.

        So you’re saying it’s a virtual virtual machine?

        1. 2

          The model is not actually creating a virtual machine; it remains a language model, and is only generating plausible responses, not actually executing programs.

          My understanding was that models like this form higher-dimensional generalisations, e.g. models like SD “understand” what a chair or a squirrel looks like.

          Is it possible for a language model to generalise concepts like virtual machines?

          1. 2

            Yes on higher-dimensional generalisations.

            An image processing neural network detects very simple direct things like edges in the first layer, and increasingly conceptual things like ‘circle’, ‘triangle’, etc. in the deeper layers.

            The text processing neural networks are similar, the first layers detect very basic syntactic information and the deeper layers handle more “semantic” concepts and interactions and dependencies between words.

            As for generalizing a concept like a virtual machine: the evidence shows it can print out convincing results, and we have seen it can keep track of the things it has stated so far and remain consistent with them. It is certainly not executing a virtual machine instruction by instruction like an emulator does, and it is not running Linux internally or anything like that. It is more like how we as people would imagine a virtual machine.
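
            If you want to see that layer hierarchy for yourself, here is a toy sketch (assuming PyTorch and torchvision are installed; the specific layers I hook are an arbitrary choice):

            import torch
            import torchvision

            # A pretrained image classifier.
            model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()

            activations = {}
            def save(name):
                def hook(module, inputs, output):
                    activations[name] = output
                return hook

            # An early conv layer (edge-like features) vs. a deep one (more semantic).
            model.layer1[0].conv1.register_forward_hook(save("early"))
            model.layer4[1].conv2.register_forward_hook(save("deep"))

            with torch.no_grad():
                model(torch.randn(1, 3, 224, 224))  # a random stand-in for an image

            for name, act in activations.items():
                # early: high spatial resolution, few channels; deep: low resolution, many channels
                print(name, tuple(act.shape))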

        2. 10

          This thing is insane.

          I was able to teach it a totally made-up programming language modeled after ML and run programs.

          At certain points execution breaks down if you have complex conditionals, but there is certainly something special going on here.

          1. 4

            An example of a generated minified Python program from this pretend language… I didn’t write a line of code, except for testing the generated code snippets in ChatGPT itself.

            import math;[print(''.join(f"\x1b[48;2;{int((-min(math.sqrt((x-80//2)**2+(y-40//2)**2)-10,math.sqrt((x-(80//2-50))**2+(y-40//2)**2)-20,30-y,max(abs(x-80//2-20),abs(y-40//2))-10))/10*255) if min(math.sqrt((x-80//2)**2+(y-40//2)**2)-10,math.sqrt((x-(80//2-50))**2+(y-40//2)**2)-20,30-y,max(abs(x-80//2-20),abs(y-40//2))-10)<0 else 0};0;0m  " for x in range(80))) for y in range(40)];print("\x1b[0m",end="")
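
            For readability, here is the same one-liner unminified (the same maths, just expanded with names and comments; behaviour should be identical, quirks included):

            import math

            WIDTH, HEIGHT = 80, 40

            def scene_distance(x, y):
                # Signed distance to the nearest shape: negative means 'inside'.
                small_circle = math.sqrt((x - WIDTH // 2) ** 2 + (y - HEIGHT // 2) ** 2) - 10
                big_circle = math.sqrt((x - (WIDTH // 2 - 50)) ** 2 + (y - HEIGHT // 2) ** 2) - 20
                bottom_band = 30 - y  # everything below row 30
                square = max(abs(x - WIDTH // 2 - 20), abs(y - HEIGHT // 2)) - 10
                return min(small_circle, big_circle, bottom_band, square)

            for y in range(HEIGHT):
                row = []
                for x in range(WIDTH):
                    d = scene_distance(x, y)
                    # deeper inside = larger value (may exceed 255, as in the original)
                    red = int(-d / 10 * 255) if d < 0 else 0
                    row.append(f"\x1b[48;2;{red};0;0m  ")  # 24-bit ANSI background colour
                print("".join(row))
            print("\x1b[0m", end="")  # reset the terminal colours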
            
          2. 10

            I tried something similar, asking ChatGPT to produce recipes for different fermented foods. It is similar in the sense that there are specific models implied in the production of the answer: proportions of the ingredients, times, temperatures, phases of the processing, etc.

            They all looked kinda OK, and it showed that the AI could infer the parts relevant to fermenting food, and maybe 1 out of 10 was neighbouring correctness. Nonetheless, 9 out of 10 would probably have molded and killed you.

            As usual with these generative models, the content keeps looking better and better, but it doesn’t get any more reliable than before. It might be good for filler in a newspaper, the copy on your startup’s website, and stuff like that.

            I think this article, while I believe it’s completely real, is very misleading in portraying cherry-picked examples. It’s misleading because it implies a trust in the model’s output that shouldn’t be there. Obviously this guy is biased and produces propaganda for his side in order to overcome this need for trust, but as technologists we shouldn’t buy into it.

            This doesn’t take anything away from the impressiveness of this parroting device. Just don’t use this stuff in the real world.

            1. 7

              I am saddened, really.

              Whole swaths of our culture will go down the toilet:

              “Write me a two page essay about …”

              “Write me a college entrance essay about …”

              “Write me a complaint letter about…”

              “Write a yearly review for this person…”

              “Summarize this super long chat, so I don’t have to read it. Are there any action items in there for me?”

              “Answer my emails”

              “Pick out the important stuff from my social media and summarize it for me”

              1. 9

                I imagine a future like that of Pixar’s WALL-E, without the sustainable lifestyle: the last oil well runs dry, the last solar cell’s efficiency drops to nil, and nobody knows anything, nobody knows anything except prompt engineering.

                1. 4

                  This morning I read about someone getting to the top of the Advent of Code leaderboard with entirely AI-generated code, and it saddened me. I’m prone to melancholy and I already struggle to find meaning in many tasks. It feels like the work I do might be entirely outclassed by these models in the future.

                  The only thing I can think to do is learn these tools to see what works and what doesn’t. The mystery of it might be more overwhelming than the reality.

                  1. 4

                    While it is certainly possible that programming may be the modern-day analog of the buggy whip, there is at least one way to take a good attitude here: the same kind of thing has happened with art in the past, when people thought their medium would be replaced by the photocopier, or the audio recording, or oil paint, or the drawing tool, etc.

                    Artists have found ways for centuries to use the new tools to make things that they couldn’t make before.

                    Similarly, even though human chess players can be trounced by the best computers, people still play chess. The top players use a computer as a tool to explore variations and ideas that would be impossible to analyze by themselves.

                    I saw a recent YouTube video where they took an artist and a non-artist and let them use an AI tool to create paintings. The artist clearly made better art with the tool than the novice did.

                    1. 3

                      First off, thank you for trying to console me. I’ve been a bit down this week so I might be overly sensitive.

                      I was talking to my girlfriend earlier and the conclusion I’ve come to is that the future is rarely what the maximalists contend it will be. Probably best that I spend some time understanding how these tools work so I can work along their grain, instead of against it.

                2. 4

                  This worked for me. According to /etc/issue, it was running Ubuntu 18.04.5.

                  It also correctly handled me echoing stuff into a shell script and running the script, then catting three copies of the file into another (ls, ls -l, and cat all showing the correct contents), and also running sed to alter the contents.

                  The /home dir contained user1, user2, and user3. With ls -l, the relevant files and dirs had the correct owners.

                  The only oddities were that I couldn’t ping www.google.com, just google.com, and that it didn’t require the exec bit to be set on the shell script.
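
                  For comparison, you could replay roughly the same session against a real shell and diff it with what ChatGPT prints. A sketch in Python, with placeholder filenames and contents (not the exact commands used above):

                  import subprocess

                  # Made-up stand-ins for the session described above.
                  session = "\n".join([
                      "echo 'echo hello' > test.sh",
                      "sh test.sh",
                      "cat test.sh test.sh test.sh > triple.sh",
                      "ls -l test.sh triple.sh",
                      "sed 's/hello/goodbye/' triple.sh",  # prints the altered contents
                  ])
                  result = subprocess.run(["sh", "-c", session], capture_output=True, text=True)
                  print(result.stdout)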

                  1. 2

                    I like this one:

                    Answer each question as a ruby interpreter. In the case that the command is a syntax error, answer as a python interpreter instead. Do not emit explanations just put the result of each command into a single code block. First command: 1

                    You can then write classes and methods in either language and call them from the other side.

                    1. 2

                      I just got the generic reply:

                      I apologize, but as a text-based AI assistant, I am not able to execute terminal commands or access the file system on your device. My abilities are limited to providing written responses to your questions and requests. I am not able to perform actions outside of this scope. Is there something else I can help you with?

                      Did the remote code exec get fixed now?