1. 31

    My position has essentially boiled down to “YAML is the worst config file format, except for all the other ones.”

    It gets pretty bad if your documents are large or if you need to collaborate (it’s possible to have a pretty good understanding of parts of YAML but that’s not always going to line up with what your collaborators understand).

    I keep wanting to say something along the lines of “oh, YAML is fine as long as you stick to a reasonable subset of it and avoid confusing constructs,” but I strongly believe that memory-unsafe languages like C/C++ should be abandoned for the same reason.
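
    To make “confusing constructs” concrete, here is the classic YAML 1.1 gotcha, shown with PyYAML (my choice of parser; any YAML 1.1 parser behaves the same way):

      import yaml  # pip install pyyaml

      # Unquoted scalars are type-guessed: YAML 1.1 treats no/yes/on/off
      # as booleans, so a country code silently becomes False.
      print(yaml.safe_load("country: NO"))    # {'country': False}
      print(yaml.safe_load("country: 'NO'"))  # {'country': 'NO'}

      # Version numbers fare no better: 1.20 parses as the float 1.2.
      print(yaml.safe_load("version: 1.20"))  # {'version': 1.2}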

    JSON is unusable (no comments, easy to make mistakes) as a config file format. XML is incredibly annoying to read or write. TOML is much more complex than it appears… I wonder if the situation will improve at any point.

    1. 22

      I think TOML is better than YAML. Sure, it has the complex date stuff, but that has never caused big surprises for me (just small annoyances). The article seems to focus mostly on how TOML is not Python, which it indeed is not.
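
      For illustration, the “date stuff” with Python’s stdlib tomllib (3.11+; my choice of parser):

        import tomllib  # stdlib since Python 3.11

        doc = tomllib.loads("""
        release = 2018-11-19             # a bare TOML date, not a string
        updated = 2018-11-19T08:30:00Z   # an offset date-time
        """)
        print(type(doc["release"]))  # <class 'datetime.date'>
        print(type(doc["updated"]))  # <class 'datetime.datetime'>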

      1. 14

        It’s syntactically noisy.

        Human language is also syntactically noisy. It evolved that way for a reason: you can still recover the meaning even if some of the message was lost to inattention.

        I have mixed feelings about TOML’s table syntax. I would rather have explicit delimiters like curly braces, but if the goal is to keep INI-like syntax, then it’s probably the best thing to do. The thing I find really annoying is inline tables.
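
        For reference, the two spellings in question, parsed with Python’s tomllib (my choice of parser):

          import tomllib

          # Standard table: header-based and INI-like, no explicit delimiters.
          standard = tomllib.loads("""
          [server]
          host = "example.org"
          port = 8080
          """)

          # Inline table: must fit on one line, with no trailing comma,
          # which is what makes it annoying for anything non-trivial.
          inline = tomllib.loads('server = { host = "example.org", port = 8080 }')

          assert standard == inline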

        As for user-typed values, I came to the conclusion that everything that isn’t an array or a hash should just be treated as a string. If you take user input, you cannot just assume the type is correct and need to check or convert it anyway, so why even bother having different types at the format level?

        Regardless, my experience with TOML has been better than with alternatives, despite its flaws.

        1. 6

          Human language is also syntactically noisy. It evolved that way for a reason: you can still recover the meaning even if some of the message was lost to inattention.

          I have mixed feelings about TOML’s table syntax. I would rather have explicit delimiters like curly braces, but if the goal is to keep INI-like syntax, then it’s probably the best thing to do. The thing I find really annoying is inline tables.

          It’s funny how the exact same ideas made me make the opposite decision. I came to the conclusion that “the pain has to be felt somewhere” and that the config files are not the worst place to feel it.

          I have mostly given up on different config formats and just default to one of the following three options:

          1. Write .ini or Java properties-file style config files when I don’t need more (see the sketch after this list).
          2. Write a DTD and XML when I need tree or dependency-like structures.
          3. Store the configuration in a few tables inside an RDBMS and drop an .ini-style config file with just connection settings and the name of the config tables when things get complex.
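
          A minimal sketch of option 1 with Python’s stdlib configparser (my choice of reader):

            import configparser
            import textwrap

            cfg = configparser.ConfigParser()
            cfg.read_string(textwrap.dedent("""
                [database]
                host = db.example.org
                port = 5432
            """))

            # configparser hands everything back as strings; conversion and
            # validation happen at the edge, which fits the point quoted below.
            port = cfg.getint("database", "port")
            host = cfg.get("database", "host")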

          As for user-typed values, I came to the conclusion that everything that isn’t an array or a hash should just be treated as a string. If you take user input, you cannot just assume the type is correct and need to check or convert it anyway, so why even bother having different types at the format level?

          I fully agree with this as well.

        2. 23

          Dhall is looking really good! Some highlights from the website:

          • Dhall is a programmable configuration language that you can think of as: JSON + functions + types + imports
          • You can also automatically remove all indirection in any Dhall code, converting the file to a logic-free normal form for non-programmers to understand.
          • We take language security seriously so that your Dhall programs never fail, hang, crash, leak secrets, or compromise your system.
          • The language aims to support safely importing and evaluating untrusted Dhall code, even code authored by malicious users.
          • You can convert both ways between Dhall and JSON/YAML or read Dhall configuration files directly into a language that supports a native language binding.
          1. 8

            I don’t think the tooling should be underestimated, either. The dhall executable includes low-level plumbing tools (individual type checking, importing, normalization), a REPL, a code formatter, a code linter to help with language upgrades, and there’s full-blown LSP integration. I enjoy writing Dhall so much that for new projects I’m taking a more traditional split between a core “engine” and logic pushed out into Dhall, then compiled at load time into something the engine can work with. The last piece of the puzzle to me is probably bidirectional type inference.

            1. 2

              That looks beautiful! Can’t wait to give it a go on some future projects.

              1. 2

                Although the feature set is extensive, is it really necessary to have such complex functionality in a configuration language?

                1. 4

                  It’s worth understanding what the complexity is. The abbreviated feature set is:

                  • Static types
                  • First class importing
                  • Function abstraction

                  Once I view it through this lens, I find it easier to convince myself that these are necessary features.

                  • Static types enforce a schema on configuration files. There is almost always a schema on configuration, as something is ultimately trying to pull information out of it. Having this schema reified into types means that other tooling can make use of the schema - e.g., the VS Code LSP can give me feedback as I edit configuration files to make sure they are valid. I can also do validation in my CI to make sure my config is actually going to be accepted at runtime. This is all a win.

                  • Importing means that I’m not restricted to a single file. This gives me the advantage of being able to separate a configuration file into smaller files, which can help decompose a problem. It also means I can re-use bits of configuration without duplication - for example, maybe staging and production share a common configuration stanza - I can now factor that out into a separate file.

                  • Function abstraction gives me a way to keep my configuration DRY. For example, if I’m configuring nginx and multiple virtual hosts all need the same proxy settings, I can write that once, and abstract out my intention with a function that builds a virtual host. This avoids configuration drift, where one part is left stale and the rest of the configuration drifts away.
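
                  In Dhall that last point is a function returning a record; here is the same shape sketched in Python, with hypothetical names, just to show the DRY effect:

                    # Shared proxy settings live in exactly one place...
                    PROXY_DEFAULTS = {
                        "proxy_read_timeout": 60,
                        "proxy_set_header": "Host $host",
                    }

                    # ...and a function builds each virtual host from them, so a new
                    # host can't silently drift away from the others.
                    def virtual_host(name, port, **overrides):
                        return {"server_name": name, "listen": port,
                                **PROXY_DEFAULTS, **overrides}

                    vhosts = [
                        virtual_host("staging.example.org", 8080),
                        virtual_host("www.example.org", 443, proxy_read_timeout=120),
                    ]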

                  1. 1

                    That’s very interesting, I hadn’t thought of it like that. Do you mostly use Dhall itself as configuration file or do you use it to generate json/yaml configuration files?

                2. 1

                  I finally need to implement a Dhall evaluator in Erlang for my projects. I <3 the ideas behind Dhall.

                3. 5

                  I am not sure that there aren’t better options. I am probably biased as I work at Google, but I find Protocol Buffer syntax to be perfectly good, and the enforced schema is very handy. I work with Kubernetes as part of my job, and I regularly screw up the YAML, or don’t really know what the YAML means and just copy-paste from tutorials without actually understanding it.

                  1. 4

                    Using protobuf for config files sounds like a really strange idea, but I can’t find any arguments against it.
                    If it’s considered normal to use a serialisation format as human-readable config (XML, JSON, S-expressions etc), surely protobuf is fair game. (The idea of “compiled vs interpreted config file” is amusing though.)

                    1. 3

                      I have experience using protobuf to communicate configuration-like information between processes, and the schema that specified the configurations, including (nested) structs/hashes and arrays, ended up really hacky. I forget the details, but protobuf lacks one or more essential ingredients to nicely specify what we wanted it to specify. As soon as you give up and allow more dynamic messages, you’re of course back to checking everything with custom code on both sides. If you do that, you may as well just go back to YAML. The enforced schema and multi-language support make it very convenient, but it’s no picnic.

                      1. 2

                        One issue here is that knowing how to interpret the config file’s bytes depends on having the protobuf definition it corresponds to available. (One could argue the same is true of any config file and what interprets it, but with human-readable formats it’s generally easier to glean the intention than with a packed binary structure.)

                        1. 2

                          At Google, at least 10 years ago, the protobuf text format was widely used as a config format. The binary format less so (but still done in some circumstances when the config file wouldn’t be modified by a person).
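
                          For reference, consuming a text-format config from Python looks roughly like this; config_pb2 is a hypothetical module generated by protoc from a Config message:

                            from google.protobuf import text_format

                            import config_pb2  # hypothetical: protoc --python_out=. config.proto

                            # config.textproto might contain:
                            #   name: "frontend"
                            #   replicas: 3
                            cfg = config_pb2.Config()
                            with open("config.textproto") as f:
                                text_format.Parse(f.read(), cfg)  # unknown fields raise ParseError

                            print(cfg.name, cfg.replicas)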

                          1. 3

                            TIL protobuf even has a text format. It sounds like it’s not interoperable between implementations/isn’t “fully portable”, and that proto3 has a JSON format that’s preferable… but then we’re back to JSON.

                    2. 2

                      JSON can be validated with a schema (lots of tools support it, including VSCode), and it’s possible to insert comments in unused fields of the object, e.g. comment or $comment.
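
                      For illustration, a minimal sketch with the Python jsonschema package (my choice; many validators exist):

                        import json

                        import jsonschema  # pip install jsonschema

                        schema = {
                            "type": "object",
                            "properties": {
                                "port": {"type": "integer"},
                                "$comment": {"type": "string"},  # carve out a field for the comment hack
                            },
                            "required": ["port"],
                        }

                        config = json.loads('{"$comment": "dev settings, do not ship", "port": 8080}')
                        jsonschema.validate(config, schema)  # raises ValidationError on mismatch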

                      1. 17

                        and it’s possible to insert comments in unused fields of the object, e.g. comment or $comment.

                        I don’t like how this is essentially a hack, and not something designed into the spec.

                        1. 2

                          Those same tools (and often the system on the other end ingesting the configuration) often reject unknown fields, so this comment hack doesn’t really work.

                          1. 8

                            And not without good reason: if you don’t reject unknown fields it can be pretty difficult to catch misspellings of optional field names.
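
                            To make that concrete, a sketch with the Python jsonschema package (my choice of validator):

                              import jsonschema

                              # "port" is optional and unknown fields are allowed:
                              lax = {"type": "object", "properties": {"port": {"type": "integer"}}}
                              jsonschema.validate({"prot": 8080}, lax)  # passes; the typo goes unnoticed

                              # additionalProperties: false turns the typo into an error:
                              strict = dict(lax, additionalProperties=False)
                              try:
                                  jsonschema.validate({"prot": 8080}, strict)
                              except jsonschema.ValidationError as e:
                                  print(e.message)  # Additional properties are not allowed ('prot' was unexpected)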

                            1. 2

                              I’ve also seen accepting unknown fields make it harder to add new ones: you don’t know who’s already using that field name for their own purposes and sending it to you (intentionally or otherwise).

                          2. 1

                            Yes, JSON can be validated by schema. But in my experience, JSON Schema implementations diverge widely, and it’s easy to write schemas that only work in your particular validator.

                          3. 1

                            JSON is unusable (no comments, easy to make mistakes) as a config file format.

                            JSON5 fixes this problem without falling prey to the issues in the article: https://json5.org/

                            1. 2

                              Yeah, and then you lose the main advantage of json, which is how ubiquitous it is.

                              1. 1

                                In the context of a config format, this isn’t really an advantage, because only one piece of code will ever be parsing it. But this could be true in other contexts.

                                I typically find that in the places where YAML has been chosen over JSON, it’s usually for config formats where the ability to comment is crucial.

                          1. 10

                            With the built-in container support in systemd you don’t even need new tools:

                            https://blog.selectel.com/systemd-containers-introduction-systemd-nspawn/

                            …and with good security if you build your own containers with debootstrap instead of pulling stuff made by random strangers on docker hub.

                            1. 8

                              The conflict between the Docker and systemd developers is very interesting to me. Since all the Linux machines I administer already have systemd I tend to side with the Red Hat folks. If I had never really used systemd in earnest before maybe it wouldn’t be such a big deal.

                              1. 5

                                …and with good security if you build your own containers with debootstrap instead of pulling stuff made by random strangers on docker hub.

                                I was glad to see this comment.

                                I have fun playing with Docker at home, but I honestly don’t understand how anyone could use Docker Hub images in production and simultaneously claim to take security even quasi-seriously. It’s like using random npm modules on your cryptocurrency website, but with even more opaqueness. Then I see people arguing over the relative security of whether or not the container runs as root, with no discussion of far more important security issues, like using Watchtower to automatically pull new images.

                                I’m no security expert but the entire conversation around Docker and security seems absolutely insane.

                                1. 4

                                  That’s the road we picked as well, after evaluating Docker for a while. We still use Docker to build and test our containers, but run them using systemd-nspawn.

                                  To download and extract the containers into folders from the registry, we wrote a little go tool: https://github.com/seantis/roots

                                  1. 2

                                    From your link:

                                    Inside these spaces, we can launch Linux-based operating systems.

                                    This keeps confusing me. When I first saw containers, I saw them described as lightweight VMs. Then I saw people clarifying that they are really just sandboxed Linux processes. If they are just processes, then why do containers ship with different distros like Alpine or Debian? (I assume it’s to communicate with the process in the sandbox.) Can you just run a container with a standalone executable? Is that desirable?

                                    EDIT

                                    Does anyone know of any deep dives into different container systems? Not just Docker, but a survey of various types of containers and how they differ?

                                    1. 4

                                      Containers are usually Linux processes with their own filesystem. Sandboxing can be good or very poor.

                                      Can you just run a container with a standalone executable? Is that desirable?

                                      Not desirable. An advantage of containers over VMs is how easily the host can inspect and modify the guest filesystem.

                                      1. 5

                                        Not desirable.

                                        Minimally built containers reduce attack surface, bring down image size, serve as proof that your application builds in a sterile environment, and act as a list of all runtime dependencies, which is always nice to have.

                                        May I ask why it isn’t desirable?

                                        1. 1

                                          You can attach to a containerized process just fine from the host, if the container init code doesn’t go out of its way to prevent it.

                                          gdb away.

                                        2. 3

                                          I’m not sure if it’s as deep as you’d like, but https://www.ianlewis.org/en/tag/container-runtime-series might be part of what you’re looking for.

                                          1. 1

                                            This looks great! Thank you for posting it.

                                          2. 3

                                            I saw them described as lightweight VMs.

                                            This statement is false, indeed.

                                            Then I saw people clarifying that they are really just sandboxed Linux processes.

                                            This statement is kinda true (my experience is limited to Docker containers). Keep in mind more than one process can run in a container, as containers have their own PID namespace.

                                            If they are just processes, then why do containers ship with different distros like Alpine or Debian?

                                            Because containers are spun up based on a container image, which is essentially a tarball that gets extracted to the container process’ root filesystem.

                                            Said filesystem contains stuff (tools, libraries, defaults) that represents a distribution, with one exception: the kernel itself, which is provided by the host machine (or a VM running on the host machine, à la Docker for Mac).

                                            Can you just run a container with a standalone executable? Is that desirable?

                                            Yes, see my prometheus image’s filesystem; it strictly contains the prometheus binary and a configuration file.

                                            In my experience, minimising a container image’s contents is a good thing, but for some cases you may not want to. Applications written in interpreted languages (e.g. Python) are very hard to reduce down to a few files in the image, too.

                                            I’ve had most success writing minimal container images (check out my GitHub profile) with packages that are either written in Go, or that have been around for a very long time and there’s some user group keeping the static building experience sane enough.

                                            1. 3

                                              I find the easier something is to put into a docker container, the less point there is. Go packages are the ideal example of this: building a binary requires 1 call to a toolchain which is easy to install, and the result has no library dependencies.

                                            2. 2

                                              They’re not just processes: they are isolated process trees.

                                              Why Alpine: because the images are much smaller than others.

                                              Why Debian: perhaps because reliable containers for a certain application happen to be available based on it?

                                              1. 1

                                                Afaik: yes, you can, and yes, it would be desirable. I think dynamically linked libraries were the reason people started to use full distributions in containers. For a Python environment you would probably have to collect quite a few different libraries from your OS to copy into the container so that Python can run.

                                                If that’s true, then in the Go world you should see containers with only the compiled binary? (I personally install all my Go projects without containers, because it’s so simple to just copy the binary around.)

                                                1. 3

                                                  If you build a pure Go project, this is true. If you use cgo, you’ll have to include the extra libraries you link to.

                                                  In practice, for a Go project you might want a container with a few other bits: ca-certificates for TLS, /etc/passwd and /etc/group with the root user (for “os/user”), tzdata for timezone support, and /tmp. gcr.io/distroless/static packages this up pretty well.

                                                  1. 1

                                                    You can have very minimal containers. E.g., Nix’s buildLayeredImage builds layered Docker images from a package closure. I use it to distribute some NLP software; the container only contains glibc, libstdc++, libtensorflow, and the program binaries.

                                              1. 3

                                                 It looks like the LE certificate in use has expired:

                                                Expires On: Sunday, December 2, 2018 at 12:30:37 AM

                                                1. 4
                                                  • Mail (postfix, dovecot; rainloop also, for the less technical)
                                                  • Chat (Prosody)
                                                  • Calendar/Contacts (Radicale; caldavzap also, for the less technical)
                                                  • duplicity for backups over tor to server in house
                                                  • Website/social network presence (IndieWeb, into silos via brid.gy)
                                                  • Personal projects (cheogram.com, usetint.com, and others)
                                                  • IPFS pinning for my video series
                                                  • Bittorrent seeding for my video series
                                                  • Syncthing on home server
                                                  • Mumble for podcasting
                                                  • DNS with adblocking
                                                  1. 2

                                                    Personal projects (cheogram.com

                                                    checks most of his XMPP contacts

                                                    I’m going to hope this is just the website.

                                                    1. 2

                                                      You’re a JMP customer? I’m the primary sysadmin for the main server – a dedicated box with OVH in Quebec.

                                                      1. 2

                                                        Yes. The phrasing above just makes it seems like you’re running this on an old shoebox you have. ;)

                                                  1. 4

                                                    I have two physical servers, one at home, one colocated, both running SmartOS. Split between them, I’m running:

                                                    • Plex Media Server, for media hosting and streaming
                                                    • Prosody, for Jabber/XMPP
                                                    • ZNC, as an IRC bouncer
                                                    • Software to remote control my house lights (via a RS-232 to Ethernet bridge, as I don’t have the correct ports anymore)
                                                    • A WordPress site, at least until I export it to be a static site
                                                    • Gerrit, for code hosting and review for personal projects
                                                    • An SFTP/SCP Dropbox
                                                    • Envoy for L4 and L7 load balancing

                                                     Along with miscellaneous legacy stuff on a Digital Ocean droplet I plan on turning down soon.

                                                     What I’m looking to start self-hosting in the future:

                                                     • Simplified music streaming with a read-only view of the underlying music, preferably with optional mpd and upnp support (currently using Plex, but it doesn’t respect metadata tags, which I’m so careful to set)
                                                    • VPN. Wireguard seems interesting, but I’m on the wrong host OS, I think
                                                     • A secure and easy-to-use CA for my personal PKI, to make provisioning TLS on other things easier.
                                                    • Gopher and a BBS, for fun.
                                                    • Grafana / Prometheus, because I should probably be a little serious
                                                    • URL shortener
                                                    • Buildbot for building and testing the projects on Gerrit

                                                    Unlike many others in this thread, I’m not interested in self-hosted PIMs: Google and Fastmail do a much better job than I ever would.

                                                    1. 3

                                                      I managed to check it out last night, and it appears to be working as advertised.

                                                      Key generation is super awesome; a built-in QR-code reader to transfer configuration/public keys to and from a desktop would be a great feature for semi-automated setups.

                                                      The error reporting is still a little bit weird; for example, configuring 10.0.0.1/24 as Allowed IPs for a Peer fails with the error message “Bad address”. 10.0.0.0/24 works though, so maybe it’s just a user error.


                                                      With the WireGuard (WG) Android connectivity I can/could now:

                                                      • Stream music to my phone from my mpd-server with httpd/lame configured as output (MPDroid), or just by configuring my mpd-server at home (works already)
                                                      • Access my phone via Termux/sshd (works already; sshfs over LTE works unexpectedly well), or adb over VPN
                                                      • Do backups with Syncopoli and rsync:// instead of ssh (Keyfile management in Syncopoli is confusing)
                                                      • Sync with radicale calendar server (probably contacts/notes too?)
                                                      • Access a read-only monitoring web interface, getting alerts via a self-hosted Matrix instance?
                                                      • Report back the location of my phone (couldn’t find a tool for that yet, Termux API examples can report the location, though - might be done with a python script then)

                                                      None of this requires root, I’m using CopperheadOS, which has root-access disabled.

                                                      I need to figure out how to properly protect those services from access by random apps. rsync:// supports secret-based authentication, so that might be good enough.

                                                      Basically I’d like to avoid having each service do its own authentication/key management, and instead have one “global instance” (WG) deal with encryption.

                                                      I’ve seen that Orbot supports tunneling on a per-app basis, so it might be possible to implement that for WG too.

                                                      I’m still not sure if this all makes sense, but it feels rewarding to set up, so I’m trying to push forward what is possible. Backups especially are a huge pain point on Android; I hope I’ll solve that for myself soon.

                                                      Everything could be replaced by $VPN-technology, but WG, besides tor, is the first tool that has kept me excited for long enough.

                                                      1. 3

                                                        Report back the location of my phone

                                                        I’ve found OwnTracks works well for this use case. Reports back location and battery info. Downside is that MQTT brokers are a bit fiddly to configure and use.

                                                        1. 1

                                                          Thank you for the pointer; unfortunately they won’t provide a Google-services-free version (ticket).

                                                          1. 1

                                                            That’s certainly a bummer. Skimming the thread, seems to be a result of there being no free replacements for the geofencing APIs.

                                                        2. 1

                                                          Key generation is super awesome; a built-in QR-code reader to transfer configuration/public keys to and from a desktop would be a great feature for semi-automated setups.

                                                          The TODO list actually has this on it. Hopefully we’ll get that implemented soon. You’re welcome to contribute too, if you’re into Android development.

                                                          The error reporting is still a little bit weird; for example, configuring 10.0.0.1/24 as Allowed IPs for a Peer fails with the error message “Bad address”. 10.0.0.0/24 works though, so maybe it’s just a user error.

                                                          The error reporting is very sub-par right now indeed. We probably should have more informative error messages, rather than just bubbling up the exception message text.

                                                          That “bad address” is coming from Android’s VPN API – 10.0.0.1/24 is not “reduced” as a route; you might have meant to type 10.0.0.1/32. Probably the app could reduce this for you, I suppose. But observe that normal Linux command line tools also don’t like unreduced routes:

                                                          thinkpad ~ # ip r a 10.0.0.1/24 dev wlan0
                                                          Error: Invalid prefix for given prefix length.
                                                          thinkpad ~ # ip r a 10.0.0.0/24 dev wlan0
                                                          thinkpad ~ # ip r a 10.0.0.1/32 dev wlan0
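
                                                          The same check is available in Python’s stdlib, for what it’s worth, if the app wanted to pre-validate or reduce what the user typed (a sketch):

                                                            import ipaddress

                                                            # strict=True (the default) rejects host bits set below the mask,
                                                            # mirroring the kernel's "Invalid prefix for given prefix length":
                                                            try:
                                                                ipaddress.ip_network("10.0.0.1/24")
                                                            except ValueError as err:
                                                                print(err)  # 10.0.0.1/24 has host bits set

                                                            # strict=False "reduces" the route the way the app could:
                                                            print(ipaddress.ip_network("10.0.0.1/24", strict=False))  # 10.0.0.0/24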
                                                          
                                                        1. 1

                                                          My configuration is the same between work and personal. In fact, the work machine’s ~/.gitconfig even has my personal email configured.

                                                          In the directory I use as the root for all the work projects, I have a direnv .envrc that sets GIT_AUTHOR_EMAIL and GIT_COMMITTER_EMAIL.

                                                          1. 1

                                                            I really like his workspace idea. I’ve sent one patch to three or four projects; there’s no need for me to have a whole clone of them up on github gathering dust.

                                                            1. 1

                                                              Would a PR from a diff/patch be an interesting concept here?

                                                              1. 1

                                                                Can you elaborate? I’m not sure what you mean.

                                                                1. 1

                                                                  To this day, I’m surprised this isn’t a feature. I often only make one or two changes to a project, and would be happy to just push the refs (or create a patch) rather than forking and remembering to garbage-collect later.

                                                              1. 5

                                                                Huh, nameless workflows. I should try that.

                                                                1. 1

                                                                  The nameless workflows and workspaces reminds me a bit of using Gerrit: you push an unnamed thing (a ref) to another ref to open a change list (aka, a PR). No need to name a local branch.

                                                                  Unfortunately, a little bit of metadata is still needed (namely the “Change-Id” line in the commit message), so Gerrit can track revisions of a change across refs. But in theory that could be stored in a way that it’s invisible to end users, and be tracked across rebases and amends.

                                                                  1. 1

                                                                    Me, too! My previous team used hg named branches as feature branches (they had, a few years before I joined, migrated from CVS to hg so this was a huge step up) and their workflow bloated the shared repo to such an extent that occasionally clients would fail to push/pull because there were too many tips to compare.

                                                                  1. 2

                                                                    Last time

                                                                     • Continuing to work on mead, a Go tool I started last week to aid in maintaining Go packages in Homebrew. I’ll probably give some TLC to bakelite, my Go tool for doing GOOS/GOARCH builds in parallel, in the process.
                                                                    • On Thursday I’m giving a meetup talk on the standard Go tools. I need to write this talk and design the slides before then. This pairs well with the above, as I’m wanting bakelite to feel like an extension of go build.
                                                                    1. 12

                                                                      At one point in my life, I finally got my act together and consolidated the number of email addresses I used regularly from around 10 to 3. One of these was a Gmail account, and I happened to be living in Japan at the time.

                                                                       Google has never forgotten this, but isn’t sure that it matters either, as if it can’t make up its mind about what to do with that information. I still occasionally see a お待ちください (“please wait”) when authenticating or switching Google services. I have not noticed a pattern as to when this occurs.

                                                                       On international travel to non-English-speaking countries, my first touch with Google used to send me to google.co.jp. It used to do this and display google.co.jp in Japanese. Then for a while it would send me to google.co.jp but have it in English. Now I get the local country TLD, but in English or Japanese, and with an offer to switch to the native language.

                                                                      I am not bothered by this, just intensely curious what the actual inputs are to the function that determines what version and language Google sends me to!

                                                                      1. 1

                                                                        At some point in the early aughts, I told Google I lived in Australia. Like you, for years, it would bounce me through google.com.au when in a new country or doing SSO. At some point in the last handful of years, it has decided I do live in the US, and stopped doing that.

                                                                      1. 3

                                                                        Last week

                                                                        • I put together my Jarvis desk over the weekend. I need to order a new surge protector that fits in the cable tray so I can clean up the wires. I’ll probably upgrade to a 4K monitor and monitor arm when the deals start coming out at the end of the week.
                                                                        • I need to finish migrating my Chrome extension away from deprecated APIs and add Firefox Quantum support.
                                                                        • Ubiquiti Security Gateway seemed dead on arrival. Will have to RMA it this week and hope the replacement is good.
                                                                        • I’m hoping my Game Boy capacitors arrive this week, so I can replace them. My new desk is in less disarray, so it should be easier to find project space.
                                                                        • At $WORK, I’m dealing with some growing pains with Kubernetes Ingress resources and Istio, as well as Istio TCP support. It’s a short week due to Thanksgiving, but hoping to at least have a plan by the end of Wednesday.
                                                                        1. 3

                                                                          Nearly forgot the less technical work for this week:

                                                                          • Pumpkin pie
                                                                          • Cheesy potato casserole
                                                                          • Green beans

                                                                          ;)

                                                                        1. 12

                                                                          I have 2 MacBook Pros from the 2012 to 2014 era. Love them both. Awesome hardware. I wish I could say the same for the software. Each version of OSX has gotten progressively worse for me as a developer since the high point that was Snow Leopard.

                                                                          Several folks I know using Sierra and High Sierra are dealing with regular kernel panics.

                                                                          I’ve started to contemplate what my next laptop and OS are going to be for work. Sometimes I harbor fantasies of buying another used MacBook Pro and installing something like Dragonfly or FreeBSD on it.

                                                                          In the end, I’m probably going to settle for something like a Thinkpad that I’m “ok” with and some Linux distro.

                                                                           Leaving aside “consumer” apps I need, there’s enough software like Zoom et al. that supports Windows, Mac, and Linux that I need for work, and that’s going to end up being the limiting factor.

                                                                          1. 2

                                                                            I am currently writing this on a mid-2014 MBP with High Sierra and the most recent updates have been grim. I have a lot of hanging applications, even Apple applications like GarageBand, and have to restart a couple times a day to keep things usable.

                                                                            It was a great computer for a long time, but the software recently has been terrible.

                                                                            1. 1

                                                                              I feel like that is a theme in my life.

                                                                               I accidentally upgraded my iPhone 7 to iOS 11 and now it’s mostly unusable. The level of lag opening a new application is nuts. Lyft, as an example, takes 60 to 90 seconds from when I open it to it being usable.

                                                                              Earlier today I opened the Messages app and wanted to take a picture and text it. It took almost 2 minutes for the message app to open, for me to be able to select the person I wanted to message, for that to open and then for the camera to come up. By the time it was ready, the thing I wanted to take a photo of was gone.

                                                                               It feels like when I stopped using Apple products in the 90s again, except now they have a lot more market share and they aren’t dealing with a signature laptop bursting into flames.

                                                                            2. 1

                                                                               I had a 2011 MacBook Pro with the 15” screen that I thought could never be topped. I couldn’t justify the increased price for the 2016/2017 model, so I went for a ThinkPad T460p and loaded Kubuntu on it. I’m very happy with the machine in general, but there are a few irritations, such as photos no longer being synced between my laptop and iPhone; it’s just not as integrated, which I do miss.

                                                                              1. 1

                                                                                Had a 2012 non-Retina (but the higher resolution variant) MBP until earlier this year. Had replaced the HDD with an SSD years ago, and upgraded even that to 1TB. Replaced the WiFi/Bluetooth card once it died. Took out the combo drive. Heavy, thick, but still worked great.

                                                                                I ended up finally swapping out for the T470s. Slim, higher resolution, NVMe, Linux-compatible hardware.

                                                                              1. 3

                                                                                I’m working on a little Cocoa/Swift app in my spare time, coming from mostly web and server dev. It’s a simple speedrunning timer app, where a run can be split up into named ‘splits’, and some history is kept.

                                                                                It feels a lot like unlearning a decade of techniques learned as a web dev: declarative ui, state management, etc. My first attempts were to try and fit that in Cocoa, and looking around for tools that may help. But macOS is a barren wasteland, with everyone focusing on iOS apparently.

                                                                                So I’m trying to learn it more or less properly and the hard way. I’m not using Interface Builder, because I find it helps to learn how things actually work. (And xibs seem more a convenience any way.)

                                                                                 I’m still figuring out structure, splitting up classes that were implementing too many protocols, etc. I mostly have a document-based app up with working models and views, but need to start hooking up behaviour.

                                                                                1. 2

                                                                                   There’s some of that declarative reactiveness alive within the Swift community, in the ReactiveCocoa and RxSwift communities. Each time I’ve tried to get into ReactiveCocoa (I’ve tried for each major version number) the lack of beginner documentation does me in. React has nailed this with a quick example app that introduces all the major concepts; I’m not sure why this doesn’t exist for ReactiveCocoa.

                                                                                  You can get pretty far with code driven UIs, but there’s definitely a large segment of developers that swear by Interface Builder and Storyboards. I’ve never been able to get into them myself.

                                                                                  1. 1

                                                                                    ReactiveSwift seems equal parts awesome and daunting. I think it’d be very interesting to take a deep dive, but not sure if I’ll ever take the time. :)

                                                                                1. 3

                                                                                  Last week

                                                                                  I ended up working on my hardware projects. I put together the Monarch, and started working on a Game Boy art project (shameless plug).

                                                                                  I’ve got a couple of tasks this week:

                                                                                  • Replacing my Mikrotik router with a Ubiquiti Security Gateway. I’ve been unable to convince the Mikrotik developers that they have a bug in their IPv6 Prefix Delegation support that prevents me from getting a v6 pool from my ISP.
                                                                                  • I ordered a Jarvis adjustable frame to replace my IKEA hacked “desk”. I’ll continue to use the butcher block desk top that I have from the IKEA desk, since I’ve already got it, and it’s pretty awesome. This should arrive Thursday.
                                                                                  • I need to replace all the capacitors on my Game Boy, as mentioned in the aforementioned blog post.
                                                                                  • I need to update my Chrome extension to use newer APIs, since some I’m using are deprecated. I’ll probably use this opportunity to finally fully support Firefox Quantum and setup CI.

                                                                                  Probably won’t happen this week, but I’d like to get Joyent’s Triton running in KVM so I can see if I can’t shim the pieces needed to run Kubernetes natively, since I think the environment has much of what’s needed (with Crossbow for networking and Manta for storage). The official guides for Kubernetes on Triton are just running it in KVM.

                                                                                  1. 2

                                                                                    Last week

                                                                                    Keeping things simple this week: primarily working on my soldering skills with a backlog of Boldport projects.

                                                                                    I’m continuing to work on my Insteon Go library: I’m struggling with the best API to expose responses from the serial connection. The hardware I have also seems to have issues receiving a new command while still processing the last one, so I’ll need to ensure I don’t send commands too quickly.

                                                                                    1. 7

                                                                                      Who is this aimed at?

                                                                                      Is the author suggesting Node.JS shouldn’t provide an LTS? Linux?

                                                                                      I don’t understand.

                                                                                      1. 6

                                                                                        I interpreted this as being directed at smaller projects within the Node.js community, like Gulp, which requires 0.10 compatibility for changes. Gulp is a task runner primarily used for build pipelines in the frontend communities.

                                                                                         Despite having a very small core team, Gulp versions 3 and 4 continue to support the Node.js 0.10 lineage. This means the project deeply cares about how its dependencies are written and what features they use, and those maintainers feel the burn when dependencies change to use the latest and greatest. This, naturally, makes it more difficult for contributors to develop new features and provide new contributions.

                                                                                        Why Node.js 0.10? This is a lineage that long pre-dates the Node Foundation and io.js. What makes it still relevant? It’s the version that’s still supported by the Debian LTS team in Wheezy, and soon by the LTS team in Jessie. It will presumably be the lineage shipped until April 2020 when the LTS expires.

                                                                                        It was important to the developers of Gulp that they supported the versions a user trying to replace ad-hoc shell scripts would have available to them.

                                                                                        1. 4

                                                                                          Debian will use an old version of Gulp because this is a contract they have with their users. If Debian randomly and regularly upgraded programs to new versions of things that behave differently (different/incompatible command-line arguments, etc), then many people would probably not use Debian.

                                                                                          If the Gulp developers don’t backport security and bug fixes, then either the Debian package maintainers will do it, the security/bug fix will be in Debian making Gulp developers look stupid, or Debian won’t ship with Gulp.

                                                                                          So I get why Gulp developers will backport as long as it’s easy enough, but I don’t get why the author cares what Gulp does.

                                                                                          1. 3

                                                                                            In general I agree, but consider that the release notes for the most recent couple Debian versions have had great big “NO SECURITY FIXES FOR NODE/V8” on them; I don’t know that it helps that much to have Gulp doing the right thing when Node itself is such a mess that the Debian team gave up and labeled it a lost cause due to the high-volume torrent of CVEs they produce.

                                                                                            1. 1

                                                                                               Interesting. I’ve not noticed that small paragraph in Chapter 5 of the release notes before. I downloaded the referenced debian-security-support, but I didn’t see anything inside mentioning the lack of security or LTS releases for nodejs, libv8, or node-* packages.

                                                                                              It’s possible I’ve just not found the relevant bits.

                                                                                              1. 2

                                                                                                I re-read them and they are definitely not as emphatic as I remember. They are, however, extremely sarcastic:

                                                                                                Unfortunately, this means that libv8-3.14, nodejs, and the associated node-* package ecosystem should not currently be used with untrusted content, such as unsanitized data from the Internet.

                                                                                          2. 2

                                                                                            I never thought Gulp would be the shining beacon of how to do things right, but here we are. What a great example to follow!

                                                                                            1. 1

                                                                                              Yea I agree to some extent. Long term support is important, and it’s actually not that difficult so long as you have a good set of automated unit and/or integration tests that you run from your CI system.

                                                                                              The last company I worked at had over 100 unit tests per microservice and that made it really easy to quickly update dependencies or move to entirely new platforms. If we do a big update and something breaks, we can just add a new test to prevent it from happening in the future. Is something not relevant anymore? Make sure it’s covered in the integration tests and discard the old unit tests.

                                                                                              There’s nothing wrong with long term support, so long as you’re not supporting legacy stuff that isn’t maintained anymore or you have dependencies you haven’t updated in forever that are rotting. (That being said, you shouldn’t update jars unless you need to for features and security, but it’s good to keep things as up to date as possible because if package A depends on X 0.12 and B depends on X 1.13, something like sbt will pull in the later one which could break everything. We had this problem with json4s … also never use json4s for anything ever).

                                                                                          3. 4

                                                                                            I think the author explicitly says that it is aimed at relatively small projects. The recommendation is to discontinue previous releases as long as no contributor is actually using them (or paid by someone else to maintain them).

                                                                                            The author mentions that Node.JS long-term releases are maintained separately from the main development by people on enterprise-Node-users payroll.

                                                                                            1. 2

                                                                                              … okay. What small projects are they thinking about?

                                                                                              1. 2

                                                                                                No idea.

                                                                                                Maybe I am wrong and the sibling comment is right that this is about stopping the support for older releases of dependencies more than about older releases of the project per se.

                                                                                          1. 2

                                                                                            Last week

                                                                                            I’ve got a couple of tasks to work on this week:

                                                                                            • Continue working on conference talks.
                                                                                             • Find a decent chassis for drives for the home server. Really looking for a 1U JBOD without a RAID controller, but haven’t found anything good so far.
                                                                                            • Returning to iOS development for the first time in half a decade. Last week I was working on an old MacBook Pro, but over the weekend I installed High Sierra in KVM, which is working out quite well (and my ThinkPad is much thinner and lighter).
                                                                                            1. 4

                                                                                              Last week

                                                                                               I didn’t end up having time last week to work on my Insteon controller library; I’m planning on continuing work on that this week. That’s the last time I promise something in this thread. :)

                                                                                               This week I’m also working on putting together talk proposals for upcoming conferences: one Go talk and one JavaScript talk. My conference acceptance rate is currently 0%; feel free to poke me if you’re good at these things and have useful advice.

                                                                                              1. 1

                                                                                                One thing brought up in the comments, but not addressed in the article is what you should do for the ambiguous cases where the URL parameter could be a name or id.

                                                                                                /shelf/{id}/book/{id}
                                                                                                /shelf/{name}/book/{name}
                                                                                                

                                                                                                While you could restrict the space of name to avoid ambiguities, your route handler will still have the two responsibilities of looking up by id and searching by name.

                                                                                                If you are following the advice in this article, how are you dealing with this?
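
                                                                 For what it’s worth, one resolution (a sketch in Flask, my choice of framework; not something the article prescribes) is to centralize the shape-based dispatch in one helper, so each lookup path keeps a single responsibility:

                                                                   from flask import Flask, abort

                                                                   app = Flask(__name__)

                                                                   def lookup_key(segment):
                                                                       """All-digit segments are ids; anything else is a name.
                                                                       This assumes names are forbidden from being purely numeric."""
                                                                       return ("id", int(segment)) if segment.isdigit() else ("name", segment)

                                                                   @app.route("/shelf/<shelf>/book/<book>")
                                                                   def get_book(shelf, book):
                                                                       shelf_key = lookup_key(shelf)  # e.g. ("id", 42) or ("name", "scifi")
                                                                       book_key = lookup_key(book)
                                                                       # A single repository query accepting either key kind goes here.
                                                                       abort(404)  # placeholder: no storage layer in this sketch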