1. 4

    Love the book! I ported the tree-walking interpreter to Swift and wrote a little bit on how it compared to the Java version in the README.

    I’m doing the same with the bytecode/VM interpreter. Will be interesting to see how far I can get with Swift without resorting to any unsafe pointer manipulation and how the performance will compare.


      Awesome! I love seeing how the code maps to other languages. I think Swift is surprisingly well-suited for language stuff with its nice ML-like enums.
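
      As a concrete sketch of what I mean (names like `Expr` and `eval` are made up for illustration, not taken from the book), a tiny expression AST with enums and exhaustive pattern matching might look like:

      ```swift
      // Toy AST: each node kind is an enum case; `indirect` allows recursion.
      indirect enum Expr {
          case number(Double)
          case add(Expr, Expr)
          case mul(Expr, Expr)
      }

      // The switch must be exhaustive, so the compiler flags any missing case.
      func eval(_ expr: Expr) -> Double {
          switch expr {
          case .number(let n): return n
          case .add(let l, let r): return eval(l) + eval(r)
          case .mul(let l, let r): return eval(l) * eval(r)
          }
      }

      // (1 + 2) * 3
      let tree = Expr.mul(.add(.number(1), .number(2)), .number(3))
      print(eval(tree)) // 9.0
      ```

      That exhaustiveness checking is exactly what makes Swift enums feel ML-like to me.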

    1. 0

      A Lightning-to-headphone-jack adapter costs $9. I can’t see how that will stop someone venturing to make their own ECG monitor.

      1. 3

        You have to ensure the software supports that.

        If you plug that into an iPhone with a 3.5mm jack you’ll get an error saying you can’t use it. Given the general feel of this post, and the way Apple has deprecated things in the past, it’s not unfathomable that future iterations of iOS might disallow the Lightning-to-headphone adapter.

        1. 1

          If you plug that into an iphone with a 3.5mm jack you’ll get an error saying you can’t use it

          That’s not true. It works on various iPads and on an iPhone 6s, all of which predate the existence of the adapter. The only requirement is iOS 10 or later, which, I think, runs on all Lightning-port-equipped devices.

      1. 2

        Learning is a very good reason to do this.

        1. 1

          Great setup, thank you for sharing it. I like the transparency of it. I’m currently using dokku, but it keeps evolving in ways I don’t understand very well.

          1. 2

            Great to hear! I’m going to write more detailed notes and instructions soon(ish), maybe those will inspire you to try it out.

          1. 3

            Basically, they make developer tools and a programming language is the ultimate lock-in in that market.

            I don’t buy the argument that languages are complementary to IDEs. People don’t pay directly for languages. They pay for services around the language, which is what JetBrains sells already. OTOH, hardware and OS are complementary products. If people are tempted to switch to Macs and develop using Swift instead of Java, that might take some money away from IDE vendors. Notice how Apple gives you a “free” IDE and is pushing Swift on the server.

            He argues that developers might think Kotlin is beneficial but that JetBrains are smarter than that. I think that from JetBrains’ point of view, it doesn’t matter whether Kotlin (or its competitors) has real advantages over Java or not. All that matters is what their customers think.

            So, yes, they were worried that their IDE business would go down the drain if everyone switched away from Java. They realized how vulnerable they are to shifts in programming language popularity. Their options were:

            1. Bet on one of the existing languages and be at the mercy of a young and small player
            2. Support all of the languages and remove the dependency on a specific language (which they might eventually choose to do)
            3. Create your own language and try to have more control over your destiny (which is what they did)
            1. 1

              They have been doing number 2 for basically half a decade already. There is pretty much no popular language they don’t support: C, C++, C#, F#, Groovy, Go, Java, JavaScript, Objective-C, PHP, Python, Ruby, Rust, Scala, SQL, TypeScript, VisualBasic, … plus dozens of language plugins made by language communities.

              1. 1

                Different languages need different approaches, and IDEA was originally designed for Java. It works for similar languages (C#), but its killer feature, autocomplete after typing a dot, doesn’t work well for most dynamic languages (JS, Python, Ruby). I tried PyCharm and it worked mostly like a dumb editor (autocomplete sometimes worked, but very unreliably), yet it was quite slow for a dumb editor (recent versions are probably much faster, especially compared to Electron-based IDEs).

                That’s also the reason Microsoft created TypeScript: not for the type checker to catch your bugs, but for autocomplete in Visual Studio. JetBrains likewise designed Kotlin to be statically analyzable.

                Many other languages are not especially statically analyzable, but they present opportunities for other IDE features. For example, Java completely lacks a REPL, while for Clojure the REPL is the killer feature. It’s very convenient to write code while the editor is connected to a running instance of the program, updating its code on the fly and evaluating expressions. CIDER, the Emacs tooling for Clojure, even has autocomplete based on runtime information from the live process, as opposed to static analysis of source code. Smalltalk IDEs use both static analysis and runtime information AFAIK (I don’t know the details).

                And only Java (maybe C++ too) needs a “code generation” feature (i.e. creation of getters, setters and hashCode).

                So, properly supporting multiple languages might be hard. A one-size-fits-all approach might be “support Java-like languages fully, but only syntax highlighting for the others”. Microsoft created the Language Server Protocol, which is cool, but again it’s designed for Java-like languages (C#, TypeScript).

            1. 3

              I’m reading “I am a Strange Loop” by Douglas Hofstadter. I’ve always been interested in how mind and consciousness can emerge from almost binary neuronal firings in the brain. The book’s answer is that it happens through self-referential structures and recursion. The author also wrote the famous “Gödel, Escher, Bach” (aka GEB), which I haven’t read.

              1. 4

                I’m trying to port the Swift parser from C++ to Swift and have been loving it so far.

                1. 1

                  Interesting, does Swift have safety guarantees, or is it “much more likely to be safe” like Go or C++?

                  1. 3

                    Swift is basically an ML variant, but has some backwards-compatibility stuff and some auto-unwrapping of optionals syntax that may decrease safety vs, say, SML. YMMV

                    1. 1

                      What safety guarantees exactly do you have in mind?

                      1. 5

                        I should have said memory safety: http://www.pl-enthusiast.net/2014/07/21/memory-safety/

                        Parsers in C are notorious for having memory safety issues. It’s basically guaranteed that any sufficiently complicated parser in C will have memory safety problems.

                        Here’s one I found in Brian Kernighan’s awk:


                        Java and Python are safe. C++ is not but it helps you more than C. Go helps you too, but I’m pretty sure there are some memory safety issues. So I was wondering where Swift stands.

                        EDIT: some info about Go and memory safety: https://insanitybit.github.io/2016/12/28/golang-and-rustlang-memory-safety

                        1. 3

                          Given this definition, Java is also memory unsafe: you can crash the JVM with a data race. Since this is a crash and not an unhandled null pointer exception, I would assume that, given enough time, it’s possible to exploit it in more interesting ways.

                          1. 2

                            It’s not fully safe because they still want to allow some fringe stuff, but the main path and idiomatic code are memory safe by default, using constructs like if let the_variable = some_optional { /* use the unwrapped the_variable here */ }

                            You can opt out by force-unwrapping with ! like let somevariable = some_optional_returning_function()!, but that will crash if some_optional_returning_function returns nil.
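
                            A quick sketch of the two styles side by side (maybeNumber is a made-up example):

                            ```swift
                            // Int(String) returns Int?, so both unwrapping styles apply.
                            let maybeNumber: Int? = Int("42")

                            // Safe: the body runs only when the optional actually holds a value.
                            if let n = maybeNumber {
                                print(n + 1) // 43
                            }

                            // Force unwrap with `!`: fine here because "42" parses,
                            // but Int("oops")! would crash at runtime.
                            let forced = maybeNumber!
                            print(forced) // 42
                            ```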

                            1. 1

                              Isn’t that basically fromJust with an unfortunately convenient syntax?

                              1. 1

                                I don’t believe so. fromJust looks like it throws an error if the Maybe value is Nothing (I only played around with Haskell years ago and I’m no expert on it, though). The if let ... { pattern is used all the time as a guard on values. You can also chain the if let bindings together to get one block with all the values you need guaranteed to be non-nil.


                                if let x = y,
                                   let z = x.something(),
                                   let w = someRandomOptional(),
                                   let stringrep = w as? String {
                                    // only runs if all the values above are non-nil.
                                    // guard statements are useful too: put them at the top of
                                    // the function to exit early when it can't deal with nil values.
                                }

                                This is a conditional binding for the duration of the scope of the block. The nullability of objects in Swift is very important. You can also call methods conditionally too.

                                let foo = bar() // bar returns an optional object
                                foo?.setVal("xyz") // will not crash if foo is nil.
                                // syntactic sugar for:
                                if let foo = foo {
                                    foo.setVal("xyz")
                                }
                                1. 1

                                  Sorry, I should have quoted the part I was referring to:

                                  let somevariable = some_optional_returning_function()!

                                  The force unwrapping operator ! will crash if the value is nil, similar to how fromJust crashes on Nothing.

                                  1. 1

                                    Gotcha. yeah that’s basically fromJust but more convenient.

                              2. 1

                                If it crashes on nil, then that’s considered memory-safe behavior. C is unsafe because dereferencing NULL is undefined: the program can use the value at address zero, or do anything else.

                                I googled and found this:


                                A primary focus when designing Swift was improving the memory safety of the programming model. There are a lot of aspects of memory safety

                                So my takeaway is that it’s like Go or C++: more likely to be safe, but not guaranteed like Java or Python.

                                1. 1

                                  Yes, it’s technically possible to use pointers, and Swift is in fact fully interoperable with C, but it is not the path of least resistance. A pointer and its related operations are encapsulated in a struct of type UnsafeMutablePointer, where:

                                  You are responsible for handling the life cycle of any memory you work with through unsafe pointers to avoid leaks or undefined behavior.
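
                                  For example, a minimal sketch of that manual life cycle (using a simple Int buffer):

                                  ```swift
                                  // You allocate, initialize, deinitialize and deallocate yourself;
                                  // skipping a step leaks memory or is undefined behavior.
                                  let count = 3
                                  let buffer = UnsafeMutablePointer<Int>.allocate(capacity: count)
                                  buffer.initialize(repeating: 0, count: count)

                                  for i in 0..<count {
                                      buffer[i] = i * 10 // subscripting does no bounds checking
                                  }
                                  let value = buffer[2]
                                  print(value) // 20

                                  buffer.deinitialize(count: count)
                                  buffer.deallocate()
                                  ```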

                                  To address your first comment: I didn’t use any of those unsafe pointers in the implementation I’m writing in Swift, while the original C++ parser is a jumble of moving pointers, so yes, I expect the Swift version to be safer.

                                  1. 1

                                    By default, the Swift compiler will not let you do bad things unless you ask it to.

                                    You can declare things as implicitly unwrapped optionals. Something like a link to an object in a window would typically be implicitly unwrapped, which means you are guaranteeing that the value will never be nil and that you are smarter than the compiler. (If the value of a linked storyboard component were ever nil, it would be a problem.) They are also used when you are sure a value will not be nil before use but don’t want to set it at initialization.
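
                                    To make that concrete, here’s a sketch with made-up names (Config, databaseURL):

                                    ```swift
                                    // Implicitly unwrapped optional: declared with `!`.
                                    // You promise it's set before first use, so no unwrapping syntax is needed later.
                                    final class Config {
                                        var databaseURL: String! // nil at init, filled in during setup

                                        func load() {
                                            databaseURL = "postgres://localhost/dev"
                                        }
                                    }

                                    let config = Config()
                                    config.load()
                                    // Reads like a plain String; traps at runtime if load() was never called.
                                    let url: String = config.databaseURL
                                    print(url)
                                    ```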

                                    They designed Swift so you can do anything C can do, including bit tweaks and pointer wrangling, but it’s a much, much safer paradigm where you have to go off the rails and make explicit choices to subvert your application’s safety. It does make parsing JSON data more annoying, but safer.

                            2. 1

                              It depends on what you mean. Their philosophy is to deliberate and make a decision on each potential safety issue that strikes a good balance between performance, convenience and safety.

                              E.g., it forces you to handle all potential nils explicitly in your code, but it doesn’t do anything about array access at compile time. If you go out of bounds, your program will crash (I think it does bounds checking at runtime and deliberately traps to avoid undefined behaviour).
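
                              A small sketch of that contrast (the values here are made up):

                              ```swift
                              let values = [10, 20, 30]

                              // Compile time: the optional from Int(String) forces you to handle nil somehow.
                              let parsed = Int("7") ?? 0

                              // Runtime: indices aren't checked at compile time; an out-of-range
                              // subscript traps deterministically instead of reading arbitrary memory.
                              let index = 5
                              if values.indices.contains(index) {
                                  print(values[index])
                              } else {
                                  print("out of bounds") // values[index] here would crash, not corrupt memory
                              }
                              ```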

                              1. 1

                                Sorry I should have said memory safety (see sibling comment). As long as it crashes on null pointers and OOB, that is memory safe. Whereas a C program can just keep going and do whatever.

                          1. 3

                            Reminds me of the recent Noam Chomsky talk at Google

                            Why not do some of the serious things?

                            1. 1

                              Another happy Fastmail user. It even supports push email in the default iOS Mail app, if that’s what you’re using. Junk filtering wasn’t great initially, but after a few weeks of using their spam training system it quickly caught up to what I had before.

                              1. 2

                                I’m continuing work on my Swift port of the Lox interpreter (the original is written in Java). In my project, I try to go beyond a straight port and explore ways to take advantage of Swift’s features to improve the design of the interpreter. I’m documenting the lessons learned in the README as I go. The project has helped me gain a great appreciation for Swift (and interpreters).

                                The Lox interpreter is written in Java as a demo accompanying the book, Crafting Interpreters by Bob Nystrom. Bob is publishing the book one chapter at a time as he completes them.

                                1. 2

                                  The graphs are fairly confusing because, for each system, they compare the performance ratio of SQLite versus the filesystem, but they look like they compare performance across filesystems. In particular, looking at the graphs, it seems that Windows 7 and 10 are much faster at reading files than all the other systems, while in fact the results seem to be explained by the filesystem access overhead being considerably larger (when antivirus protection is enabled, as in these tests).

                                  The author’s point is to talk about the time ratio between filesystem and database, so it is of course reasonable to emphasize that. But I still think that such easy confusion suggests this is not the best visualization approach. If the timescales of access on the various systems are comparable enough, it would be best to have absolute time charts, with one bar for the filesystem and one bar for the database on each system. Otherwise, no graphical plot needs to be given: a numeric representation with one ratio per column would convey the same information and be less confusing.

                                  The key problem is that bar graphs like these are designed to make it easy to visually compare the various measurements, which is nonsensical here (it is not interesting to know that the filesystem-to-database ratio is 1.5x worse on Apple systems than on Ubuntu; you want to know that the ratio is surprisingly large on all systems). This is a case of a tool used for the wrong purpose.

                                  1. 1

                                    I agree that it is somewhat misleading.

                                    However, don’t you think that the difference in ratio across systems is indicative of OS/filesystem efficiency? That assumes SQLite and direct read/write performance are optimal on all systems and that the different filesystems are comparable in terms of features.

                                  1. 4

                                    What are the benefits of using something like Cello, when we have stuff like Rust, C++, Nim, D and so on? What I see from the examples looks quite interesting, but I’m not yet doing much C programming.

                                    1. 13

                                      Well, for one, if you don’t actually want to stop using C, this is not a new language, it’s just a library. For many that would be a benefit.

                                      1. 7

                                        Cello is a sort of dynamic object layer on top of C, and does dispatch at runtime. The other languages you list have more sophisticated type systems which are compiled statically.

                                        I’d have to say Cello’s biggest plus is that it’s conceptually lightweight.

                                        1. 5

                                          This is more like Objective-C than C++, Rust, etc.

                                        1. 2

                                          Very interesting approach. I can imagine it being fun but don’t know how much will “stick”.

                                          Algorithms are a pretty advanced topic for novices. When we cram so many subtleties into their heads, they use up all of their working memory just holding the concepts/model. This leaves no room to process those concepts, reflect on them, and commit them to long-term memory (aka learning).

                                          Ericsson developed & popularized the concept of deliberate practice for honing a skill. A big part of it is that the learner starts with a tiny subset of skills that enables a real-life performance. They quickly realize that practice leads to perceptible improvements in real-life performance, and they keep at it, mastering the skills one at a time. This is probably why guitar lessons start with holding a guitar and plucking a string, as opposed to a fuzzy exercise rooted in music theory.

                                          Perhaps we look back at our first programming attempts and are embarrassed by the code we wrote then. We may vow to teach kids “what programming is really about”. I wonder if it’s best instead to just teach them the way most of us learned: by reading commands from a magazine/book/tutorial, faithfully typing them, executing them, and seeing actual stuff happen on the screen. Theory can be introduced along the way, as it starts to benefit their real-world performance.