1. 3

    The JS/Haiku API bindings shown look really interesting. The Be API seems like it would be pleasant to work in, and not needing to use C++ would lower the barrier to getting started.

    I would love to hear from any devs who have worked with the API. Did you enjoy it? How did it compare with other contemporary APIs?

    1. 9

      Ruby: Mixins can make you feel like you are improving code, but I’ve seen some Ruby classes with a dozen modules included. It makes the class “smaller”, but adds a ton of indirection. It leads to situations where 1000 lines of interrelated code are split over 100 files.

      1. 2

        I feel like I’ve held some of the article’s opinions about OOP before, but a few years ago my employer sent me to one of Sandi Metz’s courses based on 99 Bottles of OOP. Hearing from someone who loves OOP, and seeing why, was kind of eye-opening.

        Another book that altered my views about OOP was Game Programming Patterns. I’m not a game developer, but the code examples and descriptions of why OOP (and the often-associated Gang of Four patterns) can be useful were enlightening.

        1. 2

          Would there have been these kinds of solutions if the original Mac OS had a command-line interface out of the box?

          These kinds of Mac OS-centric programming environments (I would lump HyperCard, Frontier, and AppleScript in the same category) seem so alien to me now. It’s like there was a burst of creativity because of the limitations of Mac OS, but then nothing really survived Unix getting cheaper and more ubiquitous.

          1. 1

            oooh. This might be really useful. We currently have a lot of hand-rolled processes and hacky bash scripts, and are trying to figure out how to automate everything with Ansible or Puppet. But those systems have a harder time with things like “wait 5 seconds for hardware X to start, then confirm it’s writing data to logs before continuing”, so this might be a nice way to… well, as it says, gradually get there.
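            A minimal sketch of that kind of check in plain Python, assuming only the standard library (the helper name, timings, and the fake-hardware demo are made up for illustration):

            ```python
            # Hypothetical helper (stdlib only): wait for a device to settle, then
            # confirm that its log file is growing before automation continues.
            import os
            import tempfile
            import threading
            import time

            def wait_for_log_growth(path, settle=5.0, timeout=30.0, poll=0.5):
                """Sleep `settle` seconds, then return True once the file at `path`
                has grown within `timeout` seconds, False otherwise."""
                time.sleep(settle)
                baseline = os.path.getsize(path) if os.path.exists(path) else 0
                deadline = time.time() + timeout
                while time.time() < deadline:
                    size = os.path.getsize(path) if os.path.exists(path) else 0
                    if size > baseline:
                        return True
                    time.sleep(poll)
                return False

            # Demo: a background thread stands in for "hardware X" writing its log.
            log = tempfile.NamedTemporaryFile(delete=False)
            log.write(b"boot\n")
            log.close()

            def fake_hardware():
                time.sleep(0.2)
                with open(log.name, "a") as f:
                    f.write("sensor data\n")

            threading.Thread(target=fake_hardware).start()
            started = wait_for_log_growth(log.name, settle=0.05, timeout=5, poll=0.05)
            ```

            Something like this is easy to call from an orchestration step before handing control back to Ansible or Puppet.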

            1. 1

              Another way to gradually move from bash scripts could be to use the Fabric library. You can do orchestration of existing scripts across multiple machines with it pretty easily. It’s Python with a small API and a CLI.

            1. 10

              With the built-in container support in systemd you don’t even need new tools:

              https://blog.selectel.com/systemd-containers-introduction-systemd-nspawn/

              …and with good security if you build your own containers with debootstrap instead of pulling stuff made by random strangers on docker hub.

              1. 8

                The conflict between the Docker and systemd developers is very interesting to me. Since all the Linux machines I administer already have systemd I tend to side with the Red Hat folks. If I had never really used systemd in earnest before maybe it wouldn’t be such a big deal.

                1. 5

                  …and with good security if you build your own containers with debootstrap instead of pulling stuff made by random strangers on docker hub.

                  I was glad to see this comment.

                  I have fun playing with Docker at home, but I honestly don’t understand how anyone could use Docker Hub images in production and simultaneously claim to take security even quasi-seriously. It’s like using random npm modules on your cryptocurrency website, but with even more opaqueness. Then I see people arguing over whether or not the container runs as root, with no discussion of far more important security issues, like using Watchtower to automatically pull new images.

                  I’m no security expert but the entire conversation around Docker and security seems absolutely insane.

                  1. 4

                    That’s the road we picked as well, after evaluating Docker for a while. We still use Docker to build and test our containers, but run them using systemd-nspawn.

                    To download and extract the containers into folders from the registry, we wrote a little go tool: https://github.com/seantis/roots

                    1. 2

                      From your link:

                      Inside these spaces, we can launch Linux-based operating systems.

                      This keeps confusing me. When I first saw containers, I saw them described as lightweight VMs. Then I saw people clarifying that they are really just sandboxed Linux processes. If they are just processes, then why do containers ship with different distros like Alpine or Debian? (I assume it’s to communicate with the process in the sandbox.) Can you just run a container with a standalone executable? Is that desirable?

                      EDIT

                      Does anyone know of any deep dives into different container systems? Not just Docker, but a survey of various types of containers and how they differ?

                      1. 4

                        Containers are usually Linux processes with their own filesystem. Sandboxing can be good or very poor.

                        Can you just run a container with a standalone executable? Is that desirable?

                        Not desirable. An advantage of containers over VMs is in how easily the host can inspect and modify the guest filesystem.

                        1. 5

                          Not desirable.

                          Minimally built containers reduce the attack surface, bring down image size, serve as proof that your application builds in a sterile environment, and act as a manifest of all runtime dependencies, which is always nice to have.

                          May I ask why isn’t it desirable?

                          1. 1

                            You can attach to a containerized process just fine from the host, if the container init code doesn’t go out of its way to prevent it.

                            gdb away.

                          2. 3

                            I’m not sure if it’s as deep as you’d like, but https://www.ianlewis.org/en/tag/container-runtime-series might be part of what you’re looking for.

                            1. 1

                              This looks great! Thank you for posting it.

                            2. 3

                              I saw them described as lightweight VMs.

                              This statement is indeed false.

                              Then I saw people clarifying that they are really just sandboxed Linux processes.

                              This statement is kinda true (my experience is limited to Docker containers). Keep in mind more than one process can run in a container, as containers have their own PID namespace.

                              If they are just processes, then why do containers ship with different distros like Alpine or Debian?

                              Because containers are spun up based on a container image, which is essentially a tarball that gets extracted to the container process’ root filesystem.

                              Said filesystem contains stuff (tools, libraries, defaults) that represents a distribution, with one exception: the kernel itself, which is provided by the host machine (or a VM running on the host machine, à la Docker for Mac).
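                              A toy sketch of that idea in Python (hypothetical file names; a real runtime adds layers, metadata, and namespace isolation on top): the “image” is just a tarball, and extracting it yields the tree a container process would see as its root filesystem.

                              ```python
                              # Toy sketch: an "image" is a tarball; extracting it into a directory
                              # gives the tree a container process would use as its root filesystem.
                              import io
                              import os
                              import tarfile
                              import tempfile

                              def make_image(files):
                                  """Pack a {path: bytes} mapping into an in-memory tarball."""
                                  buf = io.BytesIO()
                                  with tarfile.open(fileobj=buf, mode="w") as tar:
                                      for name, data in files.items():
                                          info = tarfile.TarInfo(name=name)
                                          info.size = len(data)
                                          tar.addfile(info, io.BytesIO(data))
                                  buf.seek(0)
                                  return buf

                              def extract_rootfs(image, dest):
                                  """Unpack the image into `dest`; a real runtime would then start the
                                  process with `dest` as its root (chroot/pivot_root)."""
                                  with tarfile.open(fileobj=image, mode="r") as tar:
                                      tar.extractall(dest)
                                  return dest

                              # Everything that "represents a distribution" is just files in the tar;
                              # the kernel is notably absent, since the host provides it.
                              rootfs = extract_rootfs(
                                  make_image({"etc/os-release": b"ID=toy\n", "bin/app": b"#!/bin/sh\n"}),
                                  tempfile.mkdtemp(),
                              )
                              release = open(os.path.join(rootfs, "etc/os-release"), "rb").read()
                              ```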

                              Can you just run a container with a standalone executable? Is that desirable?

                              Yes, see my prometheus image’s filesystem: it contains only the prometheus binary and a configuration file.

                              In my experience, minimising a container image’s contents is a good thing, but in some cases you may not want to. Applications written in interpreted languages (e.g. Python) are very hard to reduce to a few files in the image, too.

                              I’ve had most success writing minimal container images (check out my GitHub profile) with packages that are either written in Go, or that have been around for a very long time and there’s some user group keeping the static building experience sane enough.

                              1. 3

                                I find the easier something is to put into a docker container, the less point there is. Go packages are the ideal example of this: building a binary requires 1 call to a toolchain which is easy to install, and the result has no library dependencies.

                              2. 2

                                They’re not just processes: they are isolated process trees.

                                Why Alpine: because the images are much smaller than others.

                                Why Debian: perhaps because reliable containers for a certain application happen to be available based on it?

                                1. 1

                                  Afaik: yes, you can, and yes, it would be desirable. I think dynamically linked libraries are the reason people started using full distributions in containers. For a Python environment, you would probably have to copy quite a few libraries from your OS into the container just so Python can run.

                                  If that’s right, then in the Go world you should see containers holding only the compiled binary? (I personally deploy all my Go projects without containers, because it’s so simple to just copy the binary around.)

                                  1. 3

                                    If you build a pure Go project, this is true. If you use cgo, you’ll have to include the extra libraries you link to.

                                    In practice, for a Go project you might want a container with a few other bits: ca-certificates for TLS, /etc/passwd and /etc/group with the root user (for “os/user”), tzdata for timezone support, and /tmp. gcr.io/distroless/static packages this up pretty well.

                                    1. 1

                                      You can have very minimal containers. E.g. Nix’s buildLayeredImage builds layered Docker images from a package closure. I use it to distribute some NLP software; the container only contains glibc, libstdc++, libtensorflow, and the program binaries.