1. 2

    Speaking of CPU-friendly code, is there a good example of a Go application written in the data-oriented design paradigm? Something like what Mike Acton and Stoyan Nikolov described in their talks about C++.

    1. 6

      go-geom uses a linear layout for its coordinates, which makes it 30% faster than the equivalent C library in benchmarks; see https://github.com/twpayne/go-geom/blob/master/INTERNALS.md#efficient.
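
      For a concrete sense of what a linear layout buys you, here is a minimal sketch of the idea in Go (illustrative only; these are not go-geom’s actual types): all coordinates sit back to back in one []float64, so a pass over them is a single cache-friendly scan.

      ```go
      // Illustrative sketch of a flat coordinate layout (not go-geom's API).
      // All points of a line string sit back to back in one slice:
      // [x0 y0 x1 y1 ...], so iterating is a single linear, cache-friendly scan.
      package main

      import (
          "fmt"
          "math"
      )

      // FlatLineString stores XY pairs contiguously; no per-point allocations.
      type FlatLineString struct {
          coords []float64 // len(coords) == 2 * number of points
      }

      // Length2D sums the distances between consecutive points in one pass.
      func (ls FlatLineString) Length2D() float64 {
          var sum float64
          for i := 2; i < len(ls.coords); i += 2 {
              dx := ls.coords[i] - ls.coords[i-2]
              dy := ls.coords[i+1] - ls.coords[i-1]
              sum += math.Sqrt(dx*dx + dy*dy)
          }
          return sum
      }

      func main() {
          ls := FlatLineString{coords: []float64{0, 0, 3, 4, 3, 8}}
          fmt.Println(ls.Length2D()) // 9
      }
      ```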

    1. 2

      https://github.com/twpayne/pake

      A pure Python implementation of make (note: the Makefiles are written in Python too), written some time ago to support cross-platform development.

      1. 8

        chezmoi gives me the same CLI environment (shell aliases, editor configs, etc.) across all the machines I use, including work, home, laptops, desktops, and temporary VMs and containers.

        Disclaimer: I’m the author of chezmoi, but it’s popular and a lot of people have written about it.

        1. 3

          Chezmoi has been absolutely phenomenal, as has your support! I’m delighted to regularly evangelize it.

        1. 2

          I’m not sure what the advantage of Chezmoi is over more powerful tools like Ansible or Saltstack. I took a look at the Quick Start guide and am not really convinced.

          1. 7

            I think they are intended for different use cases. Chezmoi is about setting up per-user config by copying config files to the right places under $HOME, while Ansible and Saltstack are for configuring the whole OS.

            This is a matter of personal preference, but one should never “program” in YAML to configure anything, even the OS :)

            1. 3

              Exactly this. Chezmoi is aimed at a much smaller problem than “whole system” configuration. In fact, I use Ansible to set up my initial Chezmoi configuration when bringing up a new computer, so for me at least the two are complementary.

            1. 4

              Great write-up!

              A few comments as the author of a filesystem abstraction library for Go (github.com/twpayne/go-vfs):

              • io/fs will surely be useful in some circumstances, but its lack of support for writes is a major omission that limits its usefulness.
              • Afero’s MemMapFs attempts to simulate a filesystem in memory, but the simulation is buggy. One bug I encountered is that permission handling is not correct: if you chmod 000 a directory, Afero’s MemMapFs still lets you read subdirectories of that directory, whereas a real filesystem doesn’t.
              • Furthermore, the filesystem semantics vary from operating system to operating system and from filesystem to filesystem. For example, on Windows you cannot delete a file while another process has opened it. Different filesystems have different behavior when it comes to filename limitations (e.g. FAT), case sensitivity, and whether they are case preserving. MemMapFs - and generally any simulated filesystem - will invariably have bugs where it does not behave in the same way as the real filesystem and therefore is not useful for testing.

              For this reason, github.com/twpayne/go-vfs wraps Go’s os functions, rather than trying to simulate them. This means that for testing it behaves the same way as the underlying filesystem, and has significantly less code (and therefore hopefully fewer bugs).
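
              To make the approach concrete, here is a rough sketch of “wrap os instead of simulating it” (much simplified and hypothetical; this is not go-vfs’s actual interface, and the FS/OSFS names are made up): production code takes a small interface, and tests construct it rooted in a temporary directory so they get the real filesystem’s semantics.

              ```go
              // Hypothetical, much-simplified sketch of the "wrap os" idea
              // (not go-vfs's actual interface).
              package fsdemo

              import (
                  "os"
                  "path/filepath"
              )

              // FS is the small surface the application needs.
              type FS interface {
                  ReadFile(name string) ([]byte, error)
                  WriteFile(name string, data []byte, perm os.FileMode) error
                  Mkdir(name string, perm os.FileMode) error
              }

              // OSFS forwards every call to the os package, optionally under a
              // root directory, so tests exercise the host OS's real semantics.
              type OSFS struct {
                  Root string // "" means use paths as given
              }

              func (f OSFS) join(name string) string { return filepath.Join(f.Root, name) }

              func (f OSFS) ReadFile(name string) ([]byte, error) {
                  return os.ReadFile(f.join(name))
              }

              func (f OSFS) WriteFile(name string, data []byte, perm os.FileMode) error {
                  return os.WriteFile(f.join(name), data, perm)
              }

              func (f OSFS) Mkdir(name string, perm os.FileMode) error {
                  return os.Mkdir(f.join(name), perm)
              }
              ```

              In a test you would construct something like OSFS{Root: t.TempDir()}, so chmod, case sensitivity, and file locking behave exactly like the platform the test runs on.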

              1. 2

                Afero’s MemMapFs attempts to simulate a filesystem in memory, but the simulation is buggy. One bug I encountered is that permission handling is not correct: if you chmod 000 a directory, Afero’s MemMapFs still lets you read subdirectories of that directory, whereas a real filesystem doesn’t.

                This is exactly the kind of metadata finickiness that made them stick to read-only and skip writes :P

                Speaking from experience with strange bigcorp data storage systems, I have to say that a specific writing interface plus a generic reading interface works surprisingly well. As a concrete example, append-only filesystems unlock a lot of performance boosts. Having separate PosixWritableFS and WindowsWritableFS interfaces seems prudent.
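
                Sketching that split in Go terms (interface names like AppendFS are made up here): the read side can be the generic io/fs.FS, while the write side is a deliberately narrow, storage-specific interface.

                ```go
                // Illustrative only: generic reads via io/fs.FS, writes via a
                // narrow, storage-specific interface. Names are made up.
                package storage

                import "io/fs"

                // AppendFS exposes the only write operation this storage allows:
                // create-or-append. No truncation, no random writes.
                type AppendFS interface {
                    Append(name string, p []byte) error
                }

                // Store pairs the generic read side with the specific write side.
                type Store struct {
                    Read  fs.FS    // any fs.FS: real dir, zip archive, embed.FS, ...
                    Write AppendFS // e.g. a POSIX- or Windows-specific implementation
                }
                ```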

              1. 4

                This article seems to be based on a flawed understanding of floating point numbers. The author complains that “sin(x) gives me different results in different versions of Node”. What the author does not demonstrate in the article is the understanding that floating point numbers are approximations themselves, i.e. a given pattern of bits does not represent a precise number, but rather some range of numbers. Consequently, sin(x) is not well defined because x itself is not well defined.

                1. 2

                  a given pattern of bits does not represent a precise number, but rather some range of numbers.

                  Do you have a source for this? I don’t think this is correct - my understanding is that while infinitely many real numbers round to any given floating point value, a given pattern of bits represents a single canonical value.

                  As the author points out, while floating point operations like addition/etc can exhibit surprising results, they are well defined, in that any two floating point numbers have a specific floating point number that is their sum. The fact that addition/subtraction/etc. have this property, but sin/etc. do not, is somewhat interesting.
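
                  A quick standard-library check of that distinction (assuming Go’s float64, i.e. IEEE 754 binary64): a bit pattern decodes to exactly one value, and adding two given floats always produces the same fully specified result; it just isn’t the real-number sum.

                  ```go
                  // float64 bit patterns have one exact value, and + is fully specified.
                  package main

                  import (
                      "fmt"
                      "math"
                  )

                  func main() {
                      x := 0.1
                      // The exact value this bit pattern represents, to 25 decimal places:
                      fmt.Printf("%.25f\n", x) // 0.1000000000000000055511151
                      fmt.Printf("%016x\n", math.Float64bits(x))

                      // IEEE 754 addition: same inputs, same bit pattern, every time,
                      // even though the result is not the real-number sum 0.3.
                      fmt.Printf("%.25f\n", 0.1+0.2) // 0.3000000000000000444089210
                  }
                  ```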

                  1. 1

                    It’s not so much that the numbers themselves are approximations; it’s that the formulas used to compute trig functions are infinite series. The more terms you add up, the more accurate the result, but the longer it takes. I’m not an expert on current FP code, but I doubt it keeps grinding out terms until the result stops changing. Instead there’s a balance between speed and accuracy. And of course different implementations have different trade-offs.
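
                    As a rough illustration of that trade-off (a naive sketch; real libm implementations use argument reduction and tuned polynomials rather than this loop), truncating the Taylor series at different term counts already changes the low-order digits:

                    ```go
                    // Naive truncated Taylor series for sin(x); illustration only.
                    package main

                    import (
                        "fmt"
                        "math"
                    )

                    // taylorSin sums the first n terms of x - x^3/3! + x^5/5! - ...
                    func taylorSin(x float64, n int) float64 {
                        term, sum := x, x
                        for k := 1; k < n; k++ {
                            term *= -x * x / float64((2*k)*(2*k+1))
                            sum += term
                        }
                        return sum
                    }

                    func main() {
                        x := 1.0
                        fmt.Printf("%.17f (3 terms)\n", taylorSin(x, 3))
                        fmt.Printf("%.17f (6 terms)\n", taylorSin(x, 6))
                        fmt.Printf("%.17f (math.Sin)\n", math.Sin(x))
                    }
                    ```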

                    1. 1

                      There used to be a LUT-accelerated solution in i387 and i486+ CPUs. Since SSE2 there are fast trigonometric functions (faster, and I think simpler to call, than the i387 floating point solutions). Switching between these implementations may produce different results. Software solutions based on the Taylor series of the functions are available, but those are orders of magnitude slower, and I doubt that Node would rely on those.

                      Also, floating point numbers do have a canonical value, but floating point arithmetic is an approximation, and it should be treated as such.

                      1. 2

                        For sine in particular, because a transcendental constant is involved, we have what Kahan called The Table-Maker’s Dilemma: How do we round the result?

                    2. 1

                      Even the sum may change in ways the author would find “unexpected”:

                      Even the conversion between the binary and decimal representations of a floating point number (as in printf/scanf) is an approximation, and this may change with the Node version (if they ship their own implementation and don’t use libc). Theoretically even a microcode update may affect floating point operations (although that’s unlikely).

                      Thus even with the same Math.js deployed everywhere, there are a multitude of variables at play at lower levels. And here cross-platform differences have been generously ignored.
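
                      As a small illustration of the conversion point (plain Go standard library, nothing Node-specific): a float64 needs up to 17 significant decimal digits to round-trip exactly, and printing fewer digits silently loses information.

                      ```go
                      // Decimal <-> binary conversion is lossy below 17 significant digits.
                      package main

                      import (
                          "fmt"
                          "strconv"
                      )

                      func main() {
                          x := 1.0 / 3.0

                          s17 := strconv.FormatFloat(x, 'g', 17, 64) // enough digits to round-trip
                          s10 := strconv.FormatFloat(x, 'g', 10, 64) // too few digits

                          y17, _ := strconv.ParseFloat(s17, 64)
                          y10, _ := strconv.ParseFloat(s10, 64)

                          fmt.Println(s17, y17 == x) // 0.33333333333333331 true
                          fmt.Println(s10, y10 == x) // 0.3333333333 false
                      }
                      ```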

                  1. 1

                    Consider https://chezmoi.io - it’s a dotfile manager with many of the features that you want and is very quick and simple to get started with.