1. 23
  1. 12

    An aspect of this that’s not described in the article is that you don’t have source files any more. Your program is not something defined in a text file; it consists of the compiled code running in the VM, and in the binary “image” that’s persisted to disk.

    This has good and bad aspects. The good ones are largely spelled out in the article.

    Bad aspects include that it takes special effort to extract your source code in text form, because it’s intermixed with all the code of the pre-existing system. When I worked as a summer intern at Xerox, one of the Smalltalk tools I wrote was an app-code extractor. It kept a list of all the class names that belonged to the app, and when run it would walk through them and write their sources to a text file. Its config also included names of methods that had been added to system classes but that were part of the app. This took some manual bookkeeping to remember.
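
    As a rough modern analogue, here’s what that kind of extractor might look like in Ruby (all names here are invented for illustration): keep a manual list of app classes, find the files their methods live in via `source_location`, and concatenate those sources.

    ```ruby
    require "tmpdir"

    # Set up a fake "app class" living in its own file.
    dir = Dir.mktmpdir
    src = File.join(dir, "invoice.rb")
    File.write(src, <<~RUBY)
      class Invoice
        def total
          42
        end
      end
    RUBY
    load src

    APP_CLASSES = %w[Invoice]  # the "manual bookkeeping" from the story

    # Walk the app classes and collect the files their methods live in.
    files = APP_CLASSES.flat_map { |name|
      klass = Object.const_get(name)
      klass.instance_methods(false).map { |m|
        klass.instance_method(m).source_location&.first
      }
    }.compact.uniq

    out = File.join(dir, "app_sources.rb")
    File.write(out, files.map { |f| File.read(f) }.join("\n"))
    ```

    The Smalltalk version walked compiled methods in the image rather than files, but the bookkeeping problem is the same: the list of what belongs to the app has to be maintained by hand.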

    Versioning is difficult too, unless the language has somehow integrated it. Obviously you can’t use a normal VCS on a binary image. In the ST project I worked on, one person was in charge of keeping the master image, and we’d submit our patches as filed-out source code that they carefully merged in. I can’t imagine doing team development like that today, so I really hope that modern REPL-based systems have some kind of image-based integrated VCS!

    Another issue is that you can accumulate leftover junk in the VM — classes or methods you used earlier but forgot to delete. Obviously this can happen in regular development, but it’s easier to forget when you don’t have separate source code to inspect. It gets especially nasty when the junk takes the form of obsolete classes and instances — Smalltalk-80 was prone to this because its GC was ref-counting without a cycle collector, so junk objects could hang around as disconnected cycles.

    1. 9

      > An aspect of this that’s not described in the article is that you don’t have source files any more. Your program is not something defined in a text file; it consists of the compiled code running in the VM, and in the binary “image” that’s persisted to disk.

      That is not how common lisp development works. Typically, you will enter or modify some form—such as a function definition—at the top level of some file, and then send that form to the ‘repl’ for evaluation. It is possible for the in-memory state to get slightly out of sync with the on-disc state, yes, but it tends not to be a big issue. When it is an issue, it’s usually because you messed something up in the in-memory state but the on-disc state is fine.

      1. 2

        I will add—I consider image-based development (with structural editing, though that is orthogonal) strictly superior to the alternative. But I do agree that modularity is one thing that we don’t have a complete answer to yet, though I do have some ideas, and am familiar with some of the existing ones floating around.

        (Why strictly better, given this proviso? Ever had to deal with dll hell? cpuid? leftpad? I could go on…)

      2. 2

        I wish somebody built a live environment that uses persistent shared data structures with git-like tree hashing. So diffing code trees and versions becomes easy and inexpensive, you always can go back in time, etc.

        Although it’s not easy for me to imagine clearly how interplay with ephemeral data would work in this model.
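
        For what it’s worth, the hashing half is easy to sketch (this is a toy, not any real system): store every node under the SHA-256 of its serialized content, so identical subtrees share one entry and a diff can skip any subtree whose hash matches.

        ```ruby
        require "digest"
        require "json"

        store = {}

        # Content-addressed put: the key is the hash of the serialized value.
        put = lambda do |obj|
          data = JSON.generate(obj)
          key = Digest::SHA256.hexdigest(data)
          store[key] = data
          key
        end

        # Leaves and trees, git-style: a tree maps child names to child hashes.
        blob = lambda { |s| put.call("blob" => s) }
        tree = lambda { |children| put.call("tree" => children) }

        v1 = tree.call("a" => blob.call("1"), "b" => blob.call("2"))
        v2 = tree.call("a" => blob.call("1"), "b" => blob.call("3"))

        # The unchanged subtree "a" hashes identically in both versions,
        # so a diff can skip it without looking inside.
        a1 = JSON.parse(store[v1])["tree"]["a"]
        a2 = JSON.parse(store[v2])["tree"]["a"]
        ```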

        1. 2

          > I wish somebody built a live environment that uses persistent shared data structures with git-like tree hashing. So diffing code trees and versions becomes easy and inexpensive, you always can go back in time, etc.
          >
          > Although it’s not easy for me to imagine clearly how interplay with ephemeral data would work in this model.

          There is a special global variable called the “repo” which is persistent, tree structured, etc. Local variables are ephemeral. When ephemeral data is copied into the repo, then it becomes persistent. The model I’m using is that immutable data (values) have no inherent lifetime associated with them. Lifetimes and mutability are associated with the variables that contain the data, not with the data itself. The language uses “copy” semantics for data (but the implementation optimizes away the copying in most cases).
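
          A minimal sketch of that model (invented names, ignoring the copy-elimination optimizations): commits build new frozen snapshots, so every old version of the repo stays readable and nothing is ever mutated in place.

          ```ruby
          class Repo
            def initialize
              @versions = [{}.freeze]  # version 0: empty
            end

            def head
              @versions.last
            end

            # "Copy" semantics: merge into a fresh hash instead of mutating.
            def commit(key, value)
              @versions << head.merge(key => value).freeze
              @versions.size - 1       # the new version number
            end

            def at(version)
              @versions[version]
            end
          end

          repo = Repo.new
          repo.commit(:x, 1)           # version 1
          repo.commit(:x, 2)           # version 2
          repo.at(1)[:x]               # => 1 (time travel: old version intact)
          repo.head[:x]                # => 2
          ```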

          1. 1

            The problem that I had in mind was more of an M×N compatibility problem, where M is the history of recorded structures and N is the history of schema/types versioning. E.g. database schema migrations cannot be automatically handled, as far as I know.

      3. 11

        As this post is counterposing REPL-driven development to Python/Ruby, I can only conclude that the author has not done a significant amount of work in Ruby.

        1. Automatic code-reloading has been a feature of Ruby projects for ages
        2. Ruby REPLs such as Pry have sophisticated source analysis and editing features

        No, it’s not quite at the level of Smalltalk/Lisp because you aren’t editing the primary artifact. Code changes have to be saved to the filesystem to be permanent. However, I think this is in many ways a positive tradeoff because VM-based languages are quite difficult to work with compared to anything represented as source files.

        One of the most common things I do when developing software is drop example { binding.pry } into my spec file, then execute it to open a REPL within my test environment. I have a constant feedback loop between project source, spec files, and REPL as I develop. This is pretty close to what the author is talking about.
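
        The reloading half of that loop can be sketched with nothing but plain `load` (this is the bare mechanism, not Rails’ or Zeitwerk’s actual reloader; file and class names are invented): re-evaluating a class body redefines its methods in place, and existing instances pick up the change.

        ```ruby
        require "tmpdir"

        path = File.join(Dir.mktmpdir, "greeter.rb")

        File.write(path, <<~RUBY)
          class Greeter
            def hello
              "hello"
            end
          end
        RUBY
        load path
        g = Greeter.new
        g.hello                # => "hello"

        # "Edit" the file and reload; the live instance sees the new method.
        File.write(path, <<~RUBY)
          class Greeter
            def hello
              "bonjour"
            end
          end
        RUBY
        load path
        g.hello                # => "bonjour"
        ```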

        1. 10

          > Define a datatype. I mean a class, a struct, a record type–whatever user-defined type your favorite language supports. Make some instances of it. Write some functions (or methods, or procedures, or whatever) to operate on them.
          >
          > Now change the definition of the type. What happens?
          >
          > Does your language runtime notice that the definition of the type has changed?
          >
          > If the answer is “yes,” then you’re probably using a Lisp or Smalltalk system.

          Yeah, this person has obviously never used Ruby. Or Erlang. CL fans like to talk about their repl as the pinnacle of live development (with the occasional nod to Smalltalk as a fallen comrade worthy of respect) but Erlang’s live code loading can do things which CL programmers haven’t dreamed of.

          The reason you don’t see repl-based development done in Ruby is cultural, not technical. (which honestly makes it even more strange, but that’s how it is)

          Also, they don’t seem to understand the difference between the condition system from CL and the repl. The repl makes the condition system more useful, but not having a condition system and not having a repl are completely different things. AFAICT they are attempting to redefine “a proper repl” as “a repl and a condition system” but like… just… no. A repl is a repl and a condition system is a condition system; we don’t need more people muddying the waters. Say what you mean.

          1. 2

            Out of curiosity, what is the ruby equivalent to common lisp’s UPDATE-INSTANCE-FOR-REDEFINED-CLASS?

            1. 1

              I don’t believe there is a built-in, but ruby has the tools to migrate data in instances of class A to data that fits class A’.

              You can fetch all instances using ObjectSpace and re-assign instance variables.

              class A
                def foo
                  @foo = "foo"
                end
              end

              a = A.new
              a.foo # => "foo"
              b = A.new

              # reopen the class
              class A
                def foo
                  @bar = 'bar'
                end
              end

              a.foo # => 'bar'
              b.foo # => 'bar'
              a.instance_eval { @foo } # => 'foo'
              b.instance_eval { @foo } # => nil

              ObjectSpace.each_object(A) { |o| o.remove_instance_variable(:@foo) if o.instance_variable_defined?(:@foo) }

              a.instance_eval { @foo } # => nil

              1. 1

                Changes to a class object don’t flow into instances of the class automatically. This also works both ways: you can add functionality to an instance that the class doesn’t know about.

                In practice this isn’t much of a hardship because reinitializing an instance is easy.

                1. 1

                  But it does, though. The existing instances of A both got the new definition of #foo. Why should a new method definition change/add/remove instance variables on existing instances? The instance variables have nothing to do with the class objects, they’re specific to instances only.

                  1. 1

                    You’re right, my mental model of this was wrong. Changes of implementation do flow into existing instances. Metaprogramming might interfere with this if code is being generated on the instance at runtime, but for the general case this works.

                    1. 1

                      It’s all at runtime. Metaprogramming in Ruby is at the same (conceptual) ‘layer’ as any other programming.

                      irb(main):001:0> class A
                      irb(main):002:1>   def foo
                      irb(main):003:2>     @foo = "foo"
                      irb(main):004:2>   end
                      irb(main):005:1> end
                      => :foo
                      irb(main):006:0> a = A.new
                      => #<A:0x000000012d848798>
                      irb(main):007:0> a.foo
                      => "foo"
                      irb(main):008:0> b = A.new
                      => #<A:0x000000012d833848>
                      irb(main):009:0> A.define_method(:foo) { @bar = 'bar' }
                      => :foo
                      irb(main):010:0> a.foo
                      => "bar"
                      irb(main):011:0> b.foo
                      => "bar"
                      irb(main):012:0> a.instance_eval { @foo }
                      => "foo"
                      irb(main):013:0> b.instance_eval { @foo }
                      => nil
                      irb(main):014:0> a.instance_variable_get :@foo
                      => "foo"
                      irb(main):015:0> b.instance_variable_get :@foo
                      => nil
                      irb(main):016:0> a.class
                      => A
                      irb(main):017:0> a.class.instance_eval do
                      irb(main):018:1*   def a_class_method
                      irb(main):019:2>     :a_class_method
                      irb(main):020:2>   end
                      irb(main):021:1> end
                      => :a_class_method
                      irb(main):022:0> A.a_class_method
                      => :a_class_method
                      irb(main):023:0> a.class.class_eval do
                      irb(main):024:1*   def bar
                      irb(main):025:2>     :bar
                      irb(main):026:2>   end
                      irb(main):027:1> end
                      => :bar
                      irb(main):028:0> a.bar
                      => :bar
                      irb(main):029:0> b.bar
                      => :bar
                      irb(main):030:0> a.baz
                      Traceback (most recent call last):
                              4: from /usr/bin/irb:23:in `<main>'
                              3: from /usr/bin/irb:23:in `load'
                              2: from /Library/Ruby/Gems/2.6.0/gems/irb-1.0.0/exe/irb:11:in `<top (required)>'
                              1: from (irb):30
                      NoMethodError (undefined method `baz' for #<A:0x000000012d848798 @foo="foo", @bar="bar">)
                      Did you mean?  bar
                      irb(main):031:0> b.class.class_eval do
                      irb(main):032:1*   def baz
                      irb(main):033:2>     :baz
                      irb(main):034:2>   end
                      irb(main):035:1> end
                      => :baz
                      irb(main):036:0> a.baz
                      => :baz
            2. 5

              Ignoring the facts, the way a LISP REPL is described sounds fricking mentally delicious. I didn’t really know what set it apart from say, a NodeJS REPL, but I feel deeply that I do now and I definitely want to experience it!

              1. 3

                Same… He kinda sold me on REPL-driven development.

                Do other LISPs (TinyLISP) and LISP-likes (Janet, Red) support this workflow?

                I tend to rely on Language Servers a lot these days. What’s the LSP story for LISP REPLs?

                1. 4

                  Clojure’s nREPL library and ecosystem are nearly equivalent to this. Clojure doesn’t have the condition system so it doesn’t allow for editing an incorrectly written function at read/compile time or stepping into a debugger at an exception, but it does handle nearly everything else.

                  As far as I know/can tell, repls and lsp don’t interact at all. To be honest, I’m not even sure how they would? They target different aspects of development and provide different features. How do you see them interacting?

                  1. 1

                    > How do you see them interacting?

                    Like, before executing a block(?) of code in the REPL, it would tell me about bad usage of functions.

                    1. 3

                      So for example in Clojure, you are normally writing directly in the source code file and evaluating the expressions from there. The REPL output is then shown to you inline, right next to your source code, or in a separate repl window if you prefer. So when using the Clojure LSP (more accurately, the LSP using the popular Clojure linter, clj-kondo), you will see those corrections you mention directly in the source file, and can make changes as needed before you evaluate the expression (sending it to the repl).

                      1. 2

                        Thanks for the explanation

              2. 4

                There are two blog posts referenced here:

                • Programming as Teaching describes a style of interactive program development, contrasted with “Programming as Carpentry”, where “we begin by starting up a runtime that already knows how to be a working program; it just doesn’t know how to be our particular application. By talking to the runtime interactively, we incrementally teach it the features we need it to have.”

                • On REPL-Driven Programming describes a set of requirements that a development UI (aka “REPL”) must satisfy in order to support Programming as Teaching.

                It sounds great. The author only knows two REPLs that fully support this, Common Lisp and Smalltalk.

                I’d like to see a REPL that supports this in the context of a different kind of programming language:

                • With strong module support, so that the developer is always aware of what module a code fragment belongs to when editing code. There is a module namespace hierarchy, and all code fragments are located in this hierarchy. Each third party package (library) that you download from a code repository has one or more modules. Package boundaries are strongly preserved in the development environment, so you can easily tell if you are modifying code from your own app or from a particular third party package.

                • With strong support for functional programming, so that I can represent all of my data as immutable values (instead of mutable objects) and I don’t give up the benefits of Programming as Teaching. To me, this means that the REPL operates at a meta level, where functions and values can be edited by the developer within a running program, even though, at the language level (the semantics of the functional programming language), functions and values are immutable. Smalltalk and Lisp flatten the meta and language levels into a single level, so that any code from a random package you download from the internet has the full capabilities of the metalevel to modify all state in a running system. Like a Unix system where every process runs as root. This makes it hard to trust third party code, and also means you don’t have a functional programming language.

                1. 1

                  I recognize this vision of software development and agree with it wholeheartedly.

                  The thing you say about the repl operating on a “meta level” is kinda what I’ve been obsessed with solving. I think the programming model you end up with is most similar to a build system (or templating system), but with support for developing type theories, and built on immutability (content addressing).

                  The access control issue is partly a UI issue and partly an issue that can be solved with automation. If we want to push in the direction of automation, then version control is the leverage point, because the problem reduces to picking a version to run, and that problem can be partially automated. Using classic AI ideas (relate different versions logically and weight them probabilistically), this makes it easier for the user to pick a safe version to use.

                  Really the issue is that we need to make space so that a significant part of the workforce can develop the commons rather than deplete them. I think if we are going to solve that problem then the fastest way would be to start by solving it on the internet. If we want to develop a game that we can trust better than the current system then we’d need to start by having fair assumptions. I think this kind of development environment can serve as a model peer for developing a truly p2p economic game and from there possibly other problems will get solved.

                  1. 2

                    One thing that bothers me about Programming by Teaching is the possibility that the system state gets out of sync with the source code, so you can’t understand what the program is doing without keeping the history of changes that you made in your mind and mentally replaying those changes. I want all the information I need to understand the running program to be tracked by the system and displayed to me on demand. The information I see on the screen should never be out of sync with reality. How do you fix this problem?

                    If I change a function definition, then there may still be active calls to the old version of the function in the running program. If I change a type definition, then there may be instances of the old version still in the running program. So the development environment will need to track multiple versions of functions and types, in order to avoid showing me out-of-sync information. How does that work?

                    1. 4

                      > How does that work?

                      It doesn’t. That’s the dirty secret of pure REPL-based development; I lost count of the number of times I committed some Clojure code that worked in my local REPL but would break in CI because I had reverted or changed something that I had earlier evaluated in my REPL.

                      With Clojure I tend not to start with a clean slate too often because starting up the program takes so damn long. So I don’t run the test suite from the CLI either - I’m kind of forced to run things from the CIDER REPL.

                      1. 2

                        Alright, thank you for engaging me, let me try to understand.

                        The problem that you describe exists in the lisp repls and it can be thought of as a security issue rather than a UX issue (or that is how I’ll argue at least). The solution is something similar to what you describe with your comment about the “strong module support” but I’ll describe it in my terms.

                        Suppose you take a program and highlight the parts of it that you’d like to change (without changing anything). Now, regardless of the programming language in question, you could take the resulting template (i.e. a lambda function that replaces the highlighted data with given arguments; you could think of the highlights as default values for the function). You could write programs to act as predicates for checking the template arguments, and then, if you supplement with logic programming, you can play type theory.

                        Filling in the template in different ways is one level up from changing the arguments you’d give to a function in a usual repl. So the appropriate analogy to the problem you describe is keeping the values you’d use to instantiate a template while switching the template. But this isn’t really a confusing problem, because there is still something like a build sitting between you and actually running the program. Even if you are just sending to a lisp repl via this meta repl, you can still “overlap” things more by thinking in terms of logically relating versions of code and snippets in a “gradually typed version control system,” rather than just having a scrollback buffer and a usual lisp repl.

                        Basically, for me the key is to move up a meta-level (like they do in static typing to obtain various guarantees), but to do so in a manner flexible enough that you can recurse, or attach various theories (programs, relations, tags, …) to your abstraction of the codebase. Then the question of out-of-sync information reduces to accurately tracking context, which is what you suggested in your original comment with the modules. So to conclude: there’s no silver bullet here, but there is a good way to organize the process of making changes so that you won’t accidentally forget something (or, if you do, you can add that relation into your model so the devenv won’t let you forget next time; if you collaborate with others, this is basically “the semantic web”). This pattern is well known to people who’ve run a database migration in some CRUD job, or to mathematicians who like diagram chasing.

                        I will also respond to your other comment, sorry I am a super rambly person so if it looks like I dodged your question with some tangent then please correct me and let me try again.

                      2. 1

                        When I talked about trusting 3rd party code, what I have in mind is a kind of pure functional programming. For example, when I call a 3rd party function to strip leading and trailing whitespace from a string, I can be assured that the only thing that function can read or write is the string I pass as an argument. The function can’t encrypt my files, phone home, and display a ransomware message, because a pure function can only operate on data that is passed as an argument, and I’m not passing the filesystem, the UI, and the network interface as arguments. This is about more than just social trust, it’s also about enabling to you understand what the program is doing using local reasoning, which requires eliminating “spooky action at a distance” from your language semantics. Bugs involving “spooky action at a distance” are the worst kind, so let’s eliminate that class of bugs by design.

                        In operating system terminology, this design does not involve “access control” because I’m not attaching access control lists to each object and testing if a function has authorization to read or write data using the function’s authentication token. Instead, this design (you can’t access data unless it was passed to you as an argument) more closely resembles capability based security, where you can’t read or write data unless you possess a “capability” token that enables you to do so.
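
                        A toy version of that discipline in Ruby (all names here are invented): the pure helper sees only its argument, and filesystem authority exists only as an explicitly passed capability object.

                        ```ruby
                        require "tmpdir"

                        # A capability: whoever holds it may write into exactly one directory.
                        WriteCap = Struct.new(:root) do
                          def write(name, data)
                            File.write(File.join(root, name), data)
                          end
                        end

                        # Pure: can touch nothing but its argument.
                        def strip_ws(s)
                          s.strip
                        end

                        # Gets filesystem access only through the capability it is handed.
                        def save_note(cap, name, text)
                          cap.write(name, strip_ws(text))
                        end

                        cap = WriteCap.new(Dir.mktmpdir)
                        save_note(cap, "note.txt", "  hello  ")
                        File.read(File.join(cap.root, "note.txt"))  # => "hello"
                        ```

                        Of course Ruby doesn’t enforce any of this; the point is only to show the shape of the design, where authority flows through arguments rather than ambient globals.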

                        1. 1

                          Alright, so security again. Obviously pure functions are safe to play with, but if you want to be sure a function in Python is pure, you’d be relying on some static analysis that is not built into the language. You could argue that we shouldn’t use Python, but there’s more to it, imo: there’s also the idea of rewriting programs to conform to canonical patterns of expressing some computation in some language, and then even translating them to other languages, all while preserving the behaviour of the program.

                          The social trust thing is a logical extreme; it’s basically as far as we can go in “rigor land”: probabilistic assessments of trust. This is basically what science is, and we’ve been doing it for a long while, with good results! If we want “software engineering” and “computer science” to become the same thing, then the outermost functor will probably be a contextual probabilistic assessment. However, that assessment only serves to partially automate the choice of version. There could also be various type theories associated with the versions of the code, but if they are not built into the semantics of the language the code is written in, then the associations will need to be assessed somehow, and we are back to the outermost layer of probabilities.

                          Capability based systems are decentralized compared to ACLs which are centrally defined, the difference is similar to reference counting versus tracing GC. In the end though I believe that security comes from clarity (i.e. you understand what you are getting into and don’t get surprised by anything).

                    2. 3

                      The author coins the term “Programming as Teaching”, and identifies only Common Lisp and Smalltalk as supporting this style of development.

                      But there is a vigorous “live coding” community and many live coding languages. In live coding, you edit a program while it runs, and you get immediate feedback on the effect of your changes. At first glance, Live Coding is the same as Programming as Teaching. Actually, I think it’s an improvement, because many useful forms of interaction have been developed that go beyond what’s possible in a conventional REPL. I’m thinking of value scrubbing, code timeline scrubbing, and much more. The first part of this Bret Victor video (Inventing on Principle) demonstrates some of the interaction techniques. https://www.youtube.com/watch?v=EGqwXt90ZqA
