1. 18
  1.  

  2. 14

    Truth be told, I see very little actual argument in the post. He mentions 4 languages/environments he’s impressed with, 3 of them dynamically typed, and one an example of how people would not implement static typing in 2018.

    Then he lists the goals that Swift is meant to achieve, and mentions that it does not achieve them. Why does it not? What’s an example of it not working out, in a way that e.g. ObjC does? He then presents a list of features a language should have, but why Swift, a language for writing end-user applications in the Apple ecosystem, would care about distributed systems remains mysterious. Also, how is it not focusing on developer productivity? No answer is given.

    Then it ends up with “maybe Elixir is nice” which, uhm, yes, it is Erlang with a Ruby syntax. We’re back to the beginning of the post.

    Now, I am not defending Swift, but dismissing Swift based on what the author himself calls a “rant” feels a bit premature to me. I would have liked to see these claims substantiated (e.g. how Swift is more difficult to debug than ObjC, how it is less productive, how it is worse than ObjC for app and UI development), because the main point I see is “Swift is not a dynamically typed language”.

    Any insights into the shortcomings of Swift would be greatly appreciated. I believe lobsters can create a better, more nuanced list than this rant.

    1. 2

      I think “Swift is not a dynamic language” is a decent point. I’m not sure what you mean by dynamically typed, so I’m going to use specific examples of the dynamism I rely on when writing Cocoa programs in Objective-C to illustrate what I mean by dynamism.

      A couple of common cases where I use dynamic language features, because the design of Cocoa seems to lend itself to doing so:

      • when preparing the display of collections of data (e.g. table views, collection views), I want to be able to say “this property appears in this place” and “this UI change affects this property”. I can do that by composing selectors based on the name of the property being displayed, or by using KVC, an Objective-C technology that does the same thing.
      • when validating the availability of UI actions (e.g. menu items, toolbar items, buttons), I want to be able to ask “are the preconditions for performing this action satisfied?”. I do that by extracting the preconditions to a method whose name is derived from the action’s selector, and then asking the relevant controller object to perform that selector.
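      As a rough sketch in Python (the names `value_for_key` and `validate_action`, and the toy classes, are invented for illustration, not real Cocoa API), the same style of name-composed dispatch looks like this:

```python
# Hypothetical Python analogue of KVC and selector-based action validation.
class Document:
    """Toy model object; its attributes play the role of KVC properties."""
    def __init__(self):
        self.title = "Untitled"
        self.author = "Anonymous"

def value_for_key(obj, key):
    # KVC-style lookup: the property name is data, not code.
    return getattr(obj, key)

class Controller:
    """Toy controller with per-action precondition methods."""
    def can_save(self):
        return True
    def can_print(self):
        return False

def validate_action(controller, action):
    # Compose the validation method's name from the action's name,
    # the way one might derive it from a selector in Cocoa.
    checker = getattr(controller, "can_" + action, None)
    return checker() if checker is not None else False

assert value_for_key(Document(), "title") == "Untitled"
assert validate_action(Controller(), "save") is True
assert validate_action(Controller(), "print") is False
```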

      Another thing I often do which has nothing (much) to do with dynamism but which Cocoa was designed with as an axiom and that Swift does not support is to use nil as the message sink.
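      A hypothetical Python analogue of nil-as-message-sink - a null object that silently absorbs any chain of calls:

```python
class Nil:
    """Null object that swallows any message, like messaging nil in ObjC."""
    def __getattr__(self, name):
        # Any method call returns the sink itself, so calls chain safely.
        return lambda *args, **kwargs: self
    def __bool__(self):
        return False

nil = Nil()
# A whole chain of calls on "nothing" runs without any nil checks.
result = nil.delegate().window().title()
assert isinstance(result, Nil)
assert not result
```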

      I’m sympathetic to the idea that Objective-C as a language has shortcomings and that a different language would make Apple platform development more accessible. Shortly before Swift was announced I wrote that there already were tons of languages with ObjC bindings, because “ObjC bindings” are just FFI calls to a small handful of runtime functions. My hope at the time was that a language built in such a way - exposing the object runtime but hiding the complexity of all the C interop and memory management - would be the future. It’s not an unreasonable expectation, as Apple had integrated the open source PerlObjCBridge and PyObjC, and had developed MacRuby. But Swift is not that, it is a different model of computation that also has language support for Cocoa and (some) ObjC features.

      1. 1

        My hope at the time was that a language built in such a way - exposing the object runtime but hiding the complexity of all the C interop and memory management - would be the future.

        So, something like Vala is for C+GObject? A C#-inspired wrapper around the idioms of the GObject system.

        1. 1

          Exactly that sort of thing. Vala appears to be designed to expose the GObject model, whereas using GTK+ in C is merely possible. The actual examples I had in mind were StepTalk and NeXT’s WebScript, but they’re basically doing the same thing.

    2. 5

      The comment on how Swift makes prototyping harder by forcing developers to express correct types is spot-on, and would apply to other strongly typed languages. One could argue that you should solve the problem on paper first and sketch out the types before writing the implementation, but I find it a good example of how dynamic languages shift our expectations in terms of programming ergonomics.

      1. 19

        I’d be very curious to hear what situations you’ve encountered where you were prototyping a solution that you understood well enough to turn into code, but not precisely enough to know its types. I’ve personally found that I can’t write a single line of code – in any language, static or dynamic – without first answering basic questions for myself about what kinds of data will be flowing through it. What questions do you find the language is forcing you to answer up-front that you would otherwise be able to defer?

        1. 7

          When I have no idea where I’m going I sometimes just start writing some part of the code I can already foresee, but with no clue how anything around it (or even that part itself) will end up looking in the final analysis. I have no data structures and overall no control flow in mind, only a vague idea of what the point of the code is.

          Then with lax checking it’s much easier to get to where I can run the code – even though only a fraction of it even does anything at all. E.g. I might have some function calls where half the parameters are missing because I didn’t write the code to compute those values yet, but it doesn’t matter: either that part of the code doesn’t even run, or it does but I only care about what happens before execution gets to the point of crashing. Because I want to run the stuff I already have so I can test hypotheses.

          In several contemporary dynamic languages, I don’t have to spend any time stubbing out missing bits like that because the compiler will just let things like that fly. I don’t need the compiler telling me that that code is broken… I already know that. I mean I haven’t even written it yet, how could it be right.
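          A tiny Python sketch of what I mean - the half-written parts only fail if execution actually reaches them (`report`, `threshold`, and `summarize` are deliberately unwritten placeholders):

```python
def pipeline(raw):
    parsed = [int(x) for x in raw.split(",")]
    # TODO: report() and threshold don't exist yet. Python doesn't care,
    # because this branch never runs while I'm testing the parsing step.
    if False:
        report(parsed, threshold)
    return parsed

# The part that exists already runs and can be poked at:
assert pipeline("1,2,3") == [1, 2, 3]

def later_step(data):
    return summarize(data)  # summarize() is not written anywhere yet

# The missing piece only fails at the moment execution reaches it:
try:
    later_step([1, 2, 3])
except NameError:
    pass  # the deferred failure a static compiler would reject up front
```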

          And then I discover what it is that I even wanted to do in the first place as I try to fill in the parts that I discover are missing as I try to fill in the parts that I discover are missing as I try to fill in the parts that I discover are missing… etc. Structures turn out to repeat as the code grows, or bits need to cross-connect, so I discover abstractions suggesting themselves, and I gradually learn what the code wants to look like.

          The more coherent the code has to be to compile, the more time I have to spend stubbing out dummy parts for pieces of the code I don’t even yet know will end up being part of the final structure of the code or not.

          It would of course be exceedingly helpful to be able to say “now check this whole thing for coherence please” at the end of the process. But along the way it’s a serious hindrance.

          (This is not a design process to use for everything. It’s bottom-up to the extreme. It’s great for breaking into new terrain though… at least for me. I’m terrible at top-downing my way into things I don’t already understand.)

          1. 4

            That’s very interesting! If I’ve understood you correctly, your prototyping approach seems to allow you to smoothly transform non-executable sketches into executable programs, by using the same syntax for both. So instead of sketching ideas for your program on a napkin, or on a whiteboard, or in a scratch plaintext file, you can do that exploration using a notation which is both familiar to you and easy to adapt into an actual running program. Would it be correct to say that by the time you actually run a piece of code, you have a relatively clear idea of what types of data are flowing through it, or at least the parts of it that are actually operational? And that the parts of your program whose types you’re less confident about are also the parts you aren’t quite ready to execute yet?

            If so, then I think our processes are actually quite similar. I mainly program in languages with very strict type systems, but when I first try to solve a problem I often start with a handwritten sketch or plaintext pseudocode. Now that I think about it, I realize that I often subconsciously try to keep the notation of those sketches as close as possible to what the eventual executable code might look like, so that I’ll be able to easily adapt it when the time comes. But either way, we’re both bypassing any kind of correctness checking until we actually know what it is we’re doing, and only once we reach a certain level of confidence do we actually run or (if the language supports it) typecheck our solution.

            Let me know if I’ve missed something about your process, but I think I understand the idea of using dynamic languages for prototyping much more clearly now. What always confused me is that the runtime semantics and static types (whether automatically checked or not) of a program seem so tightly coupled that it would be nearly impossible to figure one out without the other, but you seem to be suggesting that when you’re not sure about the types in a section of your program, you’re probably not sure about its exact runtime semantics either, and you’re keeping it around as more of a working outline than an actual program to be immediately run. So even in that early phase, types and semantics are still “coupled,” but only in the sense that they’re both incomplete!

            1. 3

              If I’ve understood you correctly, your prototyping approach seems to allow you to smoothly transform non-executable sketches into executable programs, by using the same syntax for both.

              Yup.

              Would it be correct to say that by the time you actually run a piece of code, you have a relatively clear idea of what types of data are flowing through it, or at least the parts of it that are actually operational?

              Well… it depends. For the parts that are most fully written out, yes. For the parts that aren’t, no. Neither of which are relevant when it comes to type checking, of course. But at the margins there is this grey area where I have some data structures but I only know half of what they look like. And at least one or two of them shift shape completely as the code solidifies and I discover the actual access patterns.

              If so, then I think our processes are actually quite similar.

              Sounds like it. I’d wonder if the different ergonomics don’t still lead to rather different focus in execution (what to flesh out first etc.) so that dynamic vs static still has a defining impact on the outcome. But it sure sounds like there is a deep equivalence, at least on one level.

              Now that I think about it, I realize that I often subconsciously try to keep the notation of those sketches as close as possible to what the eventual executable code might look like

              Seems natural, no? 😊 The code is ultimately what you’re trying to get to, so it makes sense to keep the eventual translation distance small from the get-go.

              So even in that early phase, types and semantics are still “coupled,” but only in the sense that they’re both incomplete!

              I had never thought about it this way, but that sounds right to me as well.

          2. 4

            You didn’t ask me but I’ll answer anyway because I’d like your advice! I am currently prototyping a data processing pipeline. Raw sensor data comes in at one end then is processed by a number of different functions, each of which annotates the data with its results, before emitting the final blob of data plus annotations to the rest of the system. As a concrete example, if the sensor data were an image, one of the annotations might be bounding boxes around objects detected in the image, another might be some statistics, etc.

            At this stage in the design, we don’t know what all the stages in the pipeline will need to be. We would like to be able to insert new functions at any stage in the pipeline. We would also like to be able to rearrange the stages. Maybe we will reuse some of these functions in other pipelines too.

            One way to program this is the “just use a map” style promoted by Clojure. Here every function takes a map, adds its results to the map as new fields, then passes on the map to the next function. So each function will accept data that it doesn’t recognize and just pass it on. This makes everything nicely composable and permits the easy refactoring we want.
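            A minimal Python sketch of that map-passing style (the stage names and data here are invented for illustration):

```python
# Each stage takes a map, adds its own annotations, and passes everything
# else through untouched.
def detect_objects(frame):
    return {**frame, "boxes": [(0, 0, 10, 10)]}

def compute_stats(frame):
    return {**frame, "mean": sum(frame["pixels"]) / len(frame["pixels"])}

def run_pipeline(stages, frame):
    for stage in stages:
        frame = stage(frame)
    return frame

frame = {"pixels": [1, 2, 3, 4]}
out = run_pipeline([detect_objects, compute_stats], frame)
assert out["boxes"] == [(0, 0, 10, 10)] and out["mean"] == 2.5

# Stages can be reordered freely, since each only adds its own keys:
assert run_pipeline([compute_stats, detect_objects], frame) == out
```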

            How would this work in a statically typed system? If the pipeline consists of three functions A, B then C, doesn’t B have to be typed such that it only accepts the output of A and produces the input of C? What happens when we add another function between B and C? Or switch the order so A comes last?

            What would the types look like anyway? Each function needs to output its input plus a bit more: in an OOP language, this quickly becomes a mess of nested objects. Can Haskell do better?

            Since I cannot actually use Clojure for this project, I’d welcome any advice on doing this in a statically typed language!

            1. 3

              In my experience statically typed languages are generally very good at expressing these kinds of systems. Very often, you can express composable pipelines without any purpose-built framework at all, just using ordinary functions! You can write your entire pipeline as a function calling out to many sub-functions, passing just the relevant resources through each function’s arguments and return values. This approach requires you to explicitly specify your pipeline’s dependency graph, which in my experience is actually extremely valuable because it allows you to understand the structure of your program at a glance. The simplicity of this approach makes it easy to maintain and guarantees perfect type safety.
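              For illustration, here is a minimal sketch of that approach in Python, with invented stage names and simple type hints standing in for a stricter static type system:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    pixels: list[int]

@dataclass
class Boxes:
    boxes: list[tuple[int, int, int, int]]

def detect(frame: Frame) -> Boxes:
    return Boxes(boxes=[(0, 0, 10, 10)])

def stats(frame: Frame) -> float:
    return sum(frame.pixels) / len(frame.pixels)

def run(frame: Frame) -> tuple[Boxes, float]:
    # The dependency graph is explicit in the call structure: both stages
    # read the frame, neither depends on the other's output, and the
    # caller sees exactly what each stage produces.
    return detect(frame), stats(frame)

boxes, mean = run(Frame(pixels=[1, 2, 3, 4]))
assert mean == 2.5
```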

              That said, based on your response to @danidiaz, it sounds like you might be doing heavier data processing than a single thread running on a single machine will be able to handle? In that case, depending on the exact kind of processing you’re doing, it’s still possible that you can implement some lightweight parallelism at the function call level without departing too much from modeling your pipeline as an ordinary sequence of function calls. Ordinary (pure) functions are also highly reusable and don’t impose any strong architectural constraints on your system, so you can always scale to a more heavily multi-threaded or distributed environment later without having to re-implement your individual pipeline stages.

              If you do have to run your system across multiple processes or even multiple machines, then it is definitely harder to express a solution in a type-safe way. Most type systems don’t currently work very well across process or machine boundaries, and a large part of this difficulty stems from the fact that it is inherently challenging to statically verify the coherence of a system whose constituent components might be independently recompiled and replaced while the system is running. I’m not sure how your idiomatic Clojure solution would cope with this scenario either, though, so I’d be curious to learn more about exactly what the requirements of this system are. These kinds of questions often turn out to be highly dependent on subtle details, so I’d be interested to hear more about your problem domain.

              1. 2

                You can write your entire pipeline as a function calling out to many sub-functions, passing just the relevant resources through each function’s arguments and return values.

                That’s basically what Cleanroom does in its “box structures” that decompose into more concrete boxes. Just functional decomposition. It has semi-formal specifications to go with it, plus a limited set of human-verifiable control-flow primitives. The result is that getting things right is a lot easier.

                1. 1

                  Thank you. Just to emphasize, we are talking about prototyping here. The system I am building is being built to explore possibilities, to find out what the final system should look like. By the time we build the final system, we will have much stricter requirements.

                  I am working on an embedded system. We have limited processing capability on the device itself. We’d like to do as much processing as we can close to the sensors but, we think, we will probably need to offload some of the work to remote systems (e.g. the “cloud”). We also haven’t fixed precisely what on-board processing capability we will have. Maybe it will turn out to be more cost-effective to have a slightly more powerful on-board processor, or maybe it will be helpful to have two independent processors, or maybe lots of really cheap processors, or maybe we should offload almost everything. I work in upstream research so nothing is set in stone yet.

                  Furthermore, we don’t know precisely what processing we will need to do in order to achieve our goals. Sorry for being vague about “processing” and “goals” here, but I can’t tell you exactly what we’re trying to do. I need to be able to pull apart our data processing pipeline, rearrange stages, add stages, remove stages, etc.

                  We aren’t using Clojure. I just happen to have been binge watching Rich Hickey videos recently and some of his examples struck a chord with me. We are using C++, which I am finding extremely tedious. Mind you, I’ve been finding C++ tedious for about twenty years now :)

                2. 2

                  Here every function takes a map, adds its results to the map as new fields, then passes on the map to the next function.

                  Naive question: why should functions bother with returning the original map? Why not return only their own results? Could not the original map be kept as a reference somewhere, say, in a let binding?

                  1. 4

                    If your functions pass through information they don’t recognize - i.e. accept a map and return the same map but with additional fields - then what is to be done is completely decoupled from where it is done. You can trivially move part of the pipeline to a different thread, process or across a network to a different machine.

                    You’re absolutely right though, if everything is in a single thread then you can achieve the same thing by adding results to a local scope.

                    At the prototyping stage, I think it’s helpful not to commit too early to a particular thread/process/node design.

              2. 6

                I may be too far removed from my time with dynlangs but I’ve always liked just changing a type and being able to very rapidly find all places it matters when things stop compiling and get highlighted in my editor.

                1. 5

                  The comment on how Swift make prototyping harder by forcing developers to express correct types is spot-on, and would apply to other strongly typed languages

                  Quick note on terminology: “strongly typed” is a subjective term. Generally people use it to mean “statically typed with no implicit type coercion”, but that’s not universal. People often refer to both C and Python as strongly typed, despite one having implicit coercion and the other not having static types.
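                  A quick runnable illustration of that second sense, using Python:

```python
# Python has no static types, yet it refuses implicit coercion at runtime:
try:
    result = "1" + 1
except TypeError:
    result = None
assert result is None  # whereas JavaScript would coerce: "1" + 1 == "11"
```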

                  1. 1

                    Thanks for the clarification!

                2. 3

                  Compiled languages definitely get an advantage out of strong typing and concrete API definitions, because planning resource utilization ahead of time allows many execution strategies, data layouts, and even caching or code hoisting to be exploited. The more you want to maximally “use” the hardware architecture, the more these grow to “fit” the hardware.

                  At the same time, dynamic languages/JITs are getting better at fitting the abstract expression of the programmer - functional programming can express very elaborate programs compactly, accurately, and clearly, irrespective of the intermediate data types/APIs used in constructing them. The idea is to “fit” the nature of the abstractions being manipulated rather than the nature of how they are executed.

                  I’m currently debugging a symbolic configuration mechanism that was prototyped in a week in a dynamic language, but is meant to function in an embedded OS with a very low-level language, as part of a bootstrap. It is taking months to finish, mostly due to adapting the code to work in such a programming environment - you alter the assemblage of primitives to build enough of a virtual machine to handle the semantics of the necessary symbol processing. An oddball case, but it’s an example of the tension between the two. (The virtue of this approach is that it allows enough “adaptability” at the low level that you don’t need to drag along the entire dynamic programming environment to serve huge amounts of low-level code that otherwise fits the compiled model perfectly.)

                  1. 2

                    Compiled/interpreted and strongly/weakly typed have little to do with each other. Ditto for low/high level: Swift compiles to machine code but good luck maintaining any cache locality with its collections.

                    1. 1

                      Depends on the application. And yes, we don’t have a good model for cache locality. How much can we get vs. the complexity to code and maintain it?

                    2. 1

                      Can you elaborate on the strengths of the dynamic language that allowed you to prototype it so quickly? The difference in development time stated here is really striking.

                      1. 1

                        Sure. First about the problem - “how do you configure unordered modules while discovering the graph of how they are connected?”. The problem requires multilevel introspection of constructed objects with “temporary” graph assignments in multilevel discovery phase, then successive top down constructor phase with exception feedback.

                        The symbolic “middle layer” to support this was trivial to write in a language like Python using coroutines/iterators, and one could quickly refactor the topological exception handling mechanism to deal with corner cases, using annotation methods to handle them. So the problem didn’t “fight” the implementation.

                        While with the lower-level compiled language, too much needed to be rewritten each time to deal with an artifact, so in effect the data types and internal API changed to compensate, to fit the low-level model. It was also too easy to introduce “new” boundary-condition errors each time, while the dynamic version’s more compact representation didn’t thrash as much and so didn’t have this problem.

                        Sometimes with low level code, you almost need an expert system to maintain it.
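                        As a hedged illustration of the kind of thing generators make cheap (the function and data here are invented stand-ins, not the actual system described): discovering a dependency graph and yielding modules in construction order.

```python
def topo_order(deps):
    """Yield modules in construction order while discovering the graph.

    `deps` maps a module name to the names it depends on - a hypothetical
    stand-in for the introspection described above.
    """
    seen = set()
    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in deps.get(name, ()):
            yield from visit(dep)
        yield name  # dependencies have been yielded, so construct this one
    for name in list(deps):
        yield from visit(name)

deps = {"app": ["net", "log"], "net": ["log"], "log": []}
order = list(topo_order(deps))
assert order.index("log") < order.index("net") < order.index("app")
```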

                    3. 3

                      I’ve been swimming in Rust for a while (even wrote a book). I’m currently looking at Swift because I think it paints a much more attractive picture ergonomics-wise than Rust. Mostly because of the different memory handling, but also because Swift opted for the (nowadays) more classical OO style. Also I like that Swift has a REPL.

                      Also, for a possibly superficial and silly reason: Slava Pestov is working on Swift, and I have mad respect for him from Factor.

                      1. 4

                        In a similar vein, Graydon Hoare is working on Swift and I have mad respect for him from Rust…

                        1. 2

                          It will be interesting to see where the proposal to add a Rust-like, opt-in ownership system to Swift goes.

                        2. 3

                          Goes to show how individual these things are. I have pretty much the opposite experience; I feel much more productive in Swift than in Objective-C, and think it’s a fine language. The biggest issues I had, when I used it regularly, were with the tooling rather than the language.