1. 25

    This is a really weird list, with a couple things that feel like borderline errors to me.

    When discussing Dato, the author writes:

    So, when someone asks you what date is it today you have to open a calendar app. Seriously?

    Uh, no? You just click on the clock, same as you would with Dato. (And that’s assuming you didn’t turn on showing the date by default, which is an option right in the Date and Time preferences, and has been since I think Mac OS 9.) Dato also gives you a full calendar in the dropdown, but that’s not what you asked. And since Dato is brand-new, you might instead/also want to look at Fantastical 2, which has been out for some time and has the same feature.

    Should I really mention Flux? Because everyone should already have one installed.

    Agreed—but as it happens, it’s been built into macOS (as Night Shift) since at least 10.13, and if memory serves somewhere in 10.12. Maybe there’s a reason for installing Flux anyway, but you didn’t specify one.

    Spectacle is solid (I prefer Magnet, but that’s closed-source and not free), and the preview plugins are fine, but I’m not convinced they need their own blog posts at this point, nor that they “fit everyone.” Overall, this just felt like a weird listicle to me.

    1. 3

      Uh, no? You just click on the clock, same as you would with Dato.

      Big fan of iStat Menus as a similar improvement over Dato.

      https://i.imgur.com/1bwzIfo.png

      1. 1

        You just click on the clock,

        Apple+space, type “terminal”, hit enter, type “ cal”, hit enter? :)

        You give the sensible way of doing that but the stupid one is in my muscle memory too ;)

        1. 10

          It’s “Cmd” not “Apple” you heathen :)

        2. 1

          In this vein, I also found it mildly annoying that the first couple of items in the list weren’t linked to, and I had to search for them, even though the later ones were.

        1. 14

          The main issue that IRC faces, imo, is the lack of connection persistence: if you get thrown into a room without knowing whether you’re spamming by interrupting, how active it is, etc., you’ll have a bad experience. If you can’t turn off your laptop because you’re still expecting a response, you’ll have a bad experience. If the suggested fix is to try to set up one of the who-knows-how-many broken bouncer servers, you’ll have a bad experience.

          And if you have a bad experience, you’ll lose users.

          1. 3

            But isn’t this more natural? Consider entering a room in real life where people may be having a conversation. Do you barge right in and immediately start talking at everybody? Of course not, you take time to see who’s present and what the feel of the situation is.

            1. 11

              Of course, that might be the case, but now imagine a room with people sitting around staring into the void, and seeing no reaction when you ask a question. Since you just suddenly appeared in this room and have no ability to look back into its history (one unrealistic fact for another), you’ll have to wait to see if anyone is even alive – so the value of being “natural” in this situation is clearly questionable.

              1. 3

                IRC has addressed this problem mostly through norms rather than technically (i.e., it’s sort of impolite to just join a channel, ask a question, and leave if you don’t get an answer: proper netiquette is to idle in every channel you think you’re liable to be interested in using, more or less forever, which produces local logs & also raises the likelihood that conversation will happen & relationships will be developed). IRC is not stackoverflow, in other words, & this makes IRC great for developing long-term relationships in a community but terrible as a mechanism for newbies to get help.

                I think the problem here is not that IRC fails to be stackoverflow, but that folks who do a lot of their dev communication on IRC have made the mistake of suggesting non-developers use IRC for tech support, filling channels with users who don’t know or care about the norms or about developing long-term relationships as regulars. (Or, said in the hyperbolic & slightly acid way I used to put it back in my heavy IRC days, “people who shut off their computers shouldn’t be on IRC”.) Mailing lists (unless they are announcement-based) have basically the same issue.

                There are common norms around hosting public logs, as well. As Drew says, pretty much everything that slack tries to build in is already supported on IRC as an add-on (or as expectations around behavior), & this allows the system to be accessible to a wider variety of people – it’s just not accessible to people who are unwilling to learn the norms or use the tools (i.e., people who aren’t going to buy into the community).

                1. 1

                  people who shut off their computers shouldn’t be on IRC

                  I like this, well said!

                2. 3

                  Yeah, the in-person experience usually has existing conversations you overhear, people physically situated in a way that tells you stuff, and even their gestures or clothing might indicate some interests. Whereas, IRC is much more like a void at the start unless it’s really active.

                3. 2

                  If I wanted something as shitty as real life I’d go outside. I want my tools to do better than I could do without them.

                4. 1

                  That’s why there are IRC bouncers/persistent clients. The problem is that you either pay a monthly fee or you have to figure out how to set it up yourself.

                  That said, all of the alternatives to IRC offer worse experiences.

                  1. 6

                    you either pay a monthly fee or you have to figure out how to set it up yourself.

                    That’s not even the primary issue; it’s that in practice most bouncers are unmaintained, have very specific and peculiar settings, too many moving components, bad documentation (*cough*, ZNC), etc. They are generally a mess and don’t integrate all too well into IRC as a protocol in general.

                    That said, all of the alternatives to IRC offer worse experiences.

                    If I’m quite honest, and I don’t like saying this, most IM networks like WhatsApp or Facebook Messenger offer a far more stable and predictable experience, which is why people use these kinds of clients/networks. The network effect only determines which one from this category becomes popular.

                    1. 2

                      most bouncers are unmaintained, have very specific and peculiar settings, too many moving components, bad documentation (cough, ZNC) etc

                      Weechat is really excellent. It still has some upfront ‘costs’ in terms of setting it up, but it’s really easy and pleasant, and I’ve had no real issues with IRC via Weechat.

                      If I’m quite honest, and I don’t like saying this, most IM networks like WhatsApp or Facebook Messenger offer a far more stable and predictable experience,

                      I also end up using WhatsApp from time to time, and it’s a far, far, far worse experience than IRC. The web client sucks, and has all sorts of connectivity issues. If your phone isn’t on the same network, it simply doesn’t work. It expects to be used on mobile. I can’t easily adjust how things are displayed to me in the app. I can’t connect to it from, say, Emacs.

                      So in practice I much more frequently check IRC than I do WhatsApp, despite having family members etc. on WhatsApp.

                      Thus, I think your example of WhatsApp is excellent. As an example of a far worse, very miserable user experience.

                      1. 1

                        (You might find sms-irc quite useful… :p)

                    2. 2

                      Have you given Matrix a fair shake?

                      1. 1

                        Eventually I’ll probably try to figure out how to set up a Matrix bridge or whatever via Weechat.

                        1. 1

                          No need for a bridge if you’re an end user and just want Weechat to speak Matrix:

                          https://matrix.org/docs/projects/client/weechat-matrix

                  1. 1

                    What are some techniques that can be applied early in a codebase’s life to prevent this kind of situation in the future?

                    1. 2

                      I used to work with Drupal, a php/sql cms/framework, and as a result I still use it years later (funny how that works). While professionally I dealt with CI/CD around this subject, I didn’t want that much overhead for my personal blog, so it’s effectively a docker container with my site’s code mounted from the host’s filesystem. I don’t run a machine or vm dedicated to it, so there are other virtualhosts at play too. I have a few containers with the following:

                      • Nginx - internet-facing, ssl termination, proxies different domains to different machines.
                      • Varnish - cache server - for domains that use caching, nginx proxies the request here where it can be again proxied to the actual app on cache miss. I use this separate instance instead of plain nginx caching because Varnish is far more flexible - the configuration file uses a DSL that is translated to a C program and compiled at runtime.
                      • Nginx again - actually, nginx and php in one docker container, along with the actual app code. It talks to mysql but I don’t exactly remember how I built that. It’s probably in docker as well.

                      What I like about this setup is that it gives me both the flexibility of docker for dependency management and the simplicity of “classic” web hosting techniques. My git repo for the site’s assets also has a scripts folder with things like pull_prod_db.sh, pull_prod_files.sh, push_prod_code.sh, run_nginx_local.sh etc.

                      I of course use Let’s Encrypt, and using that centralized nginx instance let me have a little fun with how I manage that. Since it’s centralized, I was able to route all requests concerning the /.well-known/ path prefix used by Let’s Encrypt to one instance of a custom LE client. Obviously, this is all rather pointless now that LE supports DNS challenges too, but this setup predates that.

                      Now if only I had as much patience for writing content for the blog as I did building the infrastructure - https://dpedu.io/

                      Edit: OP asked about deployment, and I suppose I didn’t answer that directly. The custom docker images flow through a self-hosted registry. The machine running them is configured by puppet. And the site’s code is manually uploaded with rsync.
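
                      Since the scripts folder came up: the push_prod_code.sh there is essentially just that rsync step. A sketch of what such a script might look like – the paths and hostname are placeholders, not my real setup:

                        #!/bin/sh
                        # sketch of push_prod_code.sh: rsync the site's code to the host
                        set -eu
                        rsync -avz --delete \
                            --exclude 'sites/default/files/' \
                            ./web/ deploy@blog.example.com:/srv/blog/web/
                        # the exclude keeps user uploads (Drupal stores them under sites/default/files) out of the code push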

                      1. 4

                        I’d love to see TempleOS runnable in the browser. Without thinking I went ahead and tried v86, an in-browser virtualization codebase.

                        So close yet so far.

                        1. 3

                          I know, right?

                        1. 2

                          While it’s pretty obvious why they have them, I’m a little disappointed that the screen resolutions haven’t improved on this class of device. Around 2009 I picked up a Dingoo a320, a similar type of device that sported the same 320×240 screen many of these still have. While sufficient for emulating retro games, it really limits these devices’ possibilities, especially when displaying just text.

                          Going back to my 2009 device, it wasn’t far off from high-end devices such as the Blackberry Tour’s 2.4” 480×360 screen, released the same year. Even such a screen today would more than double the pixel count over 320x240 and I’d expect it to be dirt cheap these days.

                          1. 6

                            I know a lot of these are riding on the manufacturing capability of existing mass-market devices. The screen on the pocketchip, for instance, is the exact same screen used by the PSP, which means it’s super cheap and easily available since Sony optimized the hell out of that supply chain.

                          1. 7

                            but most Docker images that use it are badly configured.

                            Man, this has just been the trend lately hasn’t it? Official Java images using “mystery meat” builds, root users improperly configured in Alpine for years.

                            This isn’t meant to be an off-topic rant, but I think it simply backs up the central point of the article. Containers are subtly different than a VM, but on the other hand, they’re also not “just a process” as some would like to believe. The old rules of system administration still apply, and pretending that containers will fix all your problems puts you in situations like this.

                            I’m a huge advocate of containers, but they’re definitely easy to learn and difficult to master. If you want to run containers in production reliably, you need to have a solid understanding of all the components which support them.

                            1. 6

                              I’m a huge advocate of containers, but they’re definitely easy to learn and difficult to master. If you want to run containers in production reliably, you need to have a solid understanding of all the components which support them.

                              Yes, it’s a massive amount of details. I’m working on a prepackaged template for Dockerizing Python applications (https://pythonspeed.com/products/pythoncontainer/), and to build good images you need to understand:

                              1. Dockerfile format, including wacky stuff about “use syntax A not syntax B if you want signals handling to work”. (see https://hynek.me/articles/docker-signals/ for that and all the other tiny details involved).
                              2. Docker layer-image format and its impact on image size
                              3. Docker’s caching model, as it interacts with 1 and 2.
                              4. The way CI will break caching (wrote about this bit in earlier post: https://pythonspeed.com/articles/faster-multi-stage-builds/)
                              5. The details of the particular base image you’re choosing, and why. E.g. people choose Alpine when it’s full of jagged broken edges for Python (and probably other languages)—https://pythonspeed.com/articles/base-image-python-docker-images/
                              6. Operational processes you need to keep system packages and Python packages up-to-date.
                              7. Enough Python packaging to ensure you get reproducible builds instead of random updates every time.
                              8. Random stuff like the gunicorn config in this post (or if you’re using uWSGI, the 7 knobs you have to turn off to make uWSGI not broken-out-of-the-box).
                              9. Be wary enough of bash to either avoid it, or know about bash strict mode and shellcheck and when to switch to Python.
                              10. Not to run container as root.

                              And then there’s nice to haves like enabling faulthandler so when you segfault you can actually debug it.

                              And then there’s continuous attention to details even if you do know all the above, like “oh wait why is caching not happening suddenly… Oh! I’m adding timestamp as metadata at top of the Dockerfile and that invalidates the whole cache cause it changes every time.”

                              Some of this is Dockerfiles being no good, but the majority is just the nature of ops work.
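
                              To make one of those concrete, the “bash strict mode” preamble from point 9 is only a few lines – worth pasting into any entrypoint script you do keep in bash (a sketch of the usual unofficial-strict-mode recipe):

                                #!/usr/bin/env bash
                                # "unofficial strict mode": die on errors, unset variables, and failures inside pipes
                                set -euo pipefail
                                # saner word splitting for the odd unquoted expansion
                                IFS=$'\n\t'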

                              1. 2

                                At the risk of going even further off-topic: do you have any recommendations for properly applying the old rules of system administration to containers? For example, frequent updating of docker containers could be a cron job that stops the container, then rebuilds with the dockerfile (unless there’s a better way to do live-updating?), but how do you handle things like setting a strong root password or enabling ssh key auth only (with pre-configured accepted keys) when the container configuration is under public source control?

                                1. 2

                                  Typically you don’t run ssh in a container, so that’s less of an issue. For rebuilds: the Dockerfile is more of an input, so the update process is usually:

                                  1. Build new image from Dockerfile.
                                  2. When that’s done, kill old container.
                                  3. Start new container.

                                  And you really want to rebuild the image from scratch (completely, no caching) once a week at least to get system package updates.
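
                                  In shell terms the loop looks roughly like this (image and container names are placeholders); --pull --no-cache is what gives you the full weekly rebuild:

                                    # full rebuild, ignoring the cache, so base-image and system package updates land
                                    docker build --pull --no-cache -t myapp:latest .
                                    # kill the old container, then start the new one from the fresh image
                                    docker stop myapp && docker rm myapp
                                    docker run -d --name myapp myapp:latest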

                                  1. 1

                                    There are legitimate cases where running ssh in a container is desired (e.g. securely transferring data).

                                    Anyways, what about the root password bit of my question?

                                    1. 2

                                      First thing that comes to mind: you can copy in a sshd config that allows public key auth only, and pass in the actual key with a build arg (https://docs.docker.com/engine/reference/commandline/build/#set-build-time-variables---build-arg)
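
                                      Roughly like this (the ARG name and key path are made up, and keep in mind build args end up visible in the image history):

                                        docker build -t sshbox \
                                            --build-arg AUTHORIZED_KEY="$(cat ~/.ssh/id_ed25519.pub)" .
                                        # ...with the Dockerfile declaring ARG AUTHORIZED_KEY, writing it into the
                                        # user's authorized_keys, and copying in an sshd_config that sets
                                        # PasswordAuthentication no / PubkeyAuthentication yes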

                                2. 2

                                  Containers are subtly different than a VM, but on the other hand, they’re also not “just a process” as some would like to believe.

                                  I might be one of the “some” that “would like to believe”, as I said containers are isolated processes in a recent blog post.

                                  I still think containers are processes. Let me explain and please correct me if I’m wrong:

                                  A container might not end up as a single process, but it definitely begins as one, execing into the container’s entrypoint and maybe forking to create children. Therefore they might not be a single process but a tree, with the one that execs into the image’s entrypoint as the root of the tree.

                                  And while by my logic of container = process you could call everything a process (even the operating system, since it all begins with PID 1), that’s not all that wrong, and I said so in my post so that people realised containers are more like processes than VMs.

                                  Therefore, is it right then, to define “container” as “a heavily isolated process tree”?
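
                                  (You can see that framing without Docker at all – util-linux’s unshare gives you a fresh PID namespace where your shell is PID 1 and everything it forks is the whole visible tree:)

                                    # new PID + mount namespaces; the ps output shows only this sh and its children
                                    sudo unshare --pid --fork --mount-proc sh -c 'ps -ef'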

                                  1. 1

                                    That definition makes sense if you use the terms “docker container” and “containers” interchangeably, but that is not the case (as you point out in your article!). Containers *are* a collection of namespaces and cgroups; “are” is emphasized because a container literally consists of these underlying linux-provided components.

                                    Seeing that you can create all of the items that make up a “container” without ever having a process running within it, I think that’s a good reason the “heavily isolated process tree” definition is not accurate.
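
                                    (A concrete example of that, assuming iproute2: a network namespace happily exists with nothing running in it at all.)

                                      ip netns add demo     # namespace created, no process involved
                                      ip netns list
                                      ip netns delete demo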

                                    1. 1

                                      Ooh… Alright, thanks!

                                1. 3

                                  This reminds me of Corkscrew, a tool for tunneling ssh through http/s proxies. I needed to use this in a similar environment where the only egress allowed was through an http proxy.

                                  https://github.com/bryanpkc/corkscrew/
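
                                  Typical usage is a one-liner in ProxyCommand (proxy host and port here are placeholders):

                                    ssh -o ProxyCommand="corkscrew proxy.example.com 3128 %h %p" user@remote.example.com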

                                  1. 1

                                    I picked up a Protectli FW4B for use on my home network, which includes VPN use. They seem to be a lesser known brand but it’s been great for me. Ships with no OS and I installed pfSense on it.

                                    1. 1

                                      Looks like a nice product but it’s out of my budget.

                                    1. 8

                                      While Dwarf Fortress looks and plays like an old game, its resource footprint is far from it. Get your fortress’s population up to around 70 and fps drops are unavoidable on even the newest and fastest processors. Note that in Dwarf Fortress fps is actually the rate at which the game’s logical ticks are performed, not graphical updates.

                                      Many mods have been created to cope with this problem, such as fastdwarf or teledwarf, which cause your dwarves to walk faster or teleport to their destinations instead of computing a path and walking down it.

                                      1. 4

                                        The universe of Dwarf Fortress attempts to self-correct localised time dilation by throwing goblin hordes and mythical creatures at the problem.

                                        Fastdwarf and teledwarf seem like interesting workarounds. They effectively add ‘frameskip’ to the entity logic, if you want to think of it that way.

                                        1. 3

                                          Thank you for the explanation, I had no idea why this might be challenging.

                                        1. 1

                                          I used to keep documentation in Markdown files alongside related source and tools - this was an operations type role. It was a nice way to keep the docs easily locatable (using foo_tool.py? there’s a foo_tool.md right next to it), as well as the obvious benefits that git’s history provides.

                                          I’ve since moved to a different company that doesn’t have a standardized practice, and as a result docs have ended up in various wiki tools and microsites. It’s really awful.

                                          1. 6

                                            I’ve been learning C for the past couple months by writing a web-based IRC client. The web-facing bits are python, but some of the underlying services (the IRC connection itself and the message cache) are C. The stack is C + ZeroMQ + Cap’n Proto + Python - it’s been a blast!

                                            1. 3

                                              Napkin calculation: it’s 624,197,820,790 bytes, spread across 4,073,468 files. That’s about 153,235 bytes per file. Assuming the average line is around 40 characters, and knowing that Windows takes two bytes to store each character (because they use UTF-16 for legacy reasons) plus a bit extra for the two-character line ending, we can assume about 100 bytes per line. That would mean the average code file has ~1,532 lines, and the whole monorepo has about 6,241,978,207 lines (that’s 6 billion lines of code).

                                              4 million files is a lot. The linux kernel supports far more architectures and possibly has more drivers built in, and it only has 61,725 files with an average length of ~414 lines. So I think there’s a chance something is wrong with the way they counted the files – for example, if it’s a git repository, maybe they counted files in .git/ as well? Not quite sure.
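
                                              (The napkin math itself, for anyone who wants to poke at the assumptions:)

                                                echo $(( 624197820790 / 4073468 ))   # average bytes per file
                                                echo $(( 153235 / 100 ))             # lines per file at ~100 bytes/line
                                                echo $(( 624197820790 / 100 ))       # total lines at ~100 bytes/line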

                                              1. 14

                                                [this includes] source code, test files, build tools, everything

                                                The build tools and test fixtures probably make up a sizable portion, so that would definitely skew the total

                                                1. 1

                                                  Does the repo include graphical assets too?

                                                2. 6

                                                  more than a half million folders containing the code for every component making up the OS workstation and server products and all their editions, tools, and associated development kits

                                                  You cannot compare their entire stack with another family’s vanilla kernel alone. Pull in all of GNU userland, GCC, an IDE, some browser, and so on… maybe then the numbers will start to be comparable.

                                                1. 1

                                                  Overlaps quite strongly with what systemd does….

                                                  What would you say are the pros and cons of Orderly vs systemd?

                                                  1. 2

                                                    I use runit instead of systemd at backupbox.io for faster boot times and smaller virtual machine images. The big problem is runit doesn’t support ordered setup/teardown without writing ad hoc scripts.

                                                    Orderly is one layer I am going to use in conjunction with runit to provide these things in a more modular way than systemd.*

                                                    Another strength of orderly is that it is quite good for development environments when you are coding multiple servers and want to restart/stop them in groups while testing.*

                                                    *In theory, but I literally just wrote it, so we will see what needs to change.

                                                    1. 1

                                                      backupbox.io

                                                      Hmm. Took a brief poke around your site…. vaguely reminds me of my favourite backup on to usb pen drive tool… http://zbackup.org/

                                                      I presume you’ve met up with zbackup?

                                                      1. 1

                                                        Haven’t tried zbackup, but I have played with quite a few tools. Will need to look into it, thank you.

                                                      2. 1

                                                        Ok, so if I understand you correctly, you’re adding a bunch of systemd-like functionality for alternative init systems.

                                                        faster boot times

                                                        Interesting.

                                                        My experience has been systemd has done a lot to speed up boot times because it parallelizes everything that can be and hence runs a lot faster on multicores.

                                                        Are you using single core machines?

                                                        when you are coding multiple servers and want to restart/stop them in groups while testing.*

                                                        Systemd is quite handy for doing exactly that, plus it can force’em to “play nice” about using resources. https://www.freedesktop.org/wiki/Software/systemd/ControlGroupInterface/

                                                        However cgroups are not a systemd thing but a kernel thing which you could probably also use.
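
                                                        (For example, with a cgroup v2 hierarchy mounted and the memory controller enabled, the resource-limiting half is just a few writes to sysfs – the group name and limit below are made up:)

                                                          mkdir /sys/fs/cgroup/buildbox
                                                          echo 512M > /sys/fs/cgroup/buildbox/memory.max   # cap the group's memory
                                                          echo "$$" > /sys/fs/cgroup/buildbox/cgroup.procs # move this shell (and its children) in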

                                                        1. 6

                                                          My experience has been systemd has done a lot to speed up boot times because it parallelizes everything that can be and hence runs a lot faster on multicores.

                                                          Well, I am using a custom linux image inside single-core VMs. Each VM only runs ssh, a fuse filesystem mount, the audit daemon and getty. The extra work involved in initializing systemd is actually more than just starting those directly.

                                                          Actually, after removing systemd I was able to get the boot down from ~5 seconds to 0.5 seconds. Though it is difficult to measure precisely, because I also switched away from a ‘buildroot’ based filesystem.

                                                          The main problem is some ordering requirements between the audit daemon, ssh and the filesystem, especially if the file system fuse daemon crashes.

                                                          However cgroups are not a systemd thing but a kernel thing which you could probably also use.

                                                          Yeah, I just use them directly from runit.

                                                          1. 1

                                                            My experience has been systemd has done a lot to speed up boot times because it parallelizes everything that can be and hence runs a lot faster on multicores.

                                                            Compared to what? Init? To be clear, systemd vs $other has been beaten to death and I’m not trying to rehash it. Upstart has parallelized job starts since its inception so I’m curious what systemd is doing differently.

                                                            1. 2

                                                              Compared to whatever ubuntu had on the LTS prior to systemd… I forget exactly what.

                                                              I just noticed, wow… this is booting noticeably faster, to the point a human can tell. I suspect it’s a combo of parallelizing and accurately tracking dependencies and starting them as/when needed, but I never rolled forward and back and measured to track down what exactly.

                                                              If I remember correctly the sneaky ureadahead trick pre-dated systemd so I don’t think it was that.

                                                              Although we also shifted a single-core OpenEmbedded system over to systemd from init.d and it seems to boot faster as well, probably because some of the I/O could happen while the CPU was doing useful stuff.

                                                      1. 6

                                                        I loved the look of the recently-posted Endlessh project so much that I’m going to integrate it with fail2ban and iptables to redirect banned attackers into the tarpit.
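
                                                        The iptables half of that can be a single rule in a custom fail2ban action (the tarpit port and banned address below are placeholders):

                                                          # shove a banned source's port-22 traffic into the tarpit listening on 2222
                                                          iptables -t nat -A PREROUTING -s 192.0.2.10 -p tcp --dport 22 -j REDIRECT --to-ports 2222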

                                                        1. 1

                                                          Apparently we have similar ideas. I just threw an SSH tarpit together earlier and was looking at how to integrate it.

                                                        1. 15

                                                          The easiest way to solve the problem is to increase the size of your gp2 volume.

                                                          While this is true, there’s another way that will give you even more iops for less (or zero!) additional cost. Many EC2 instances come with a local SSD. For example the i3.large instance type - which is fairly small, just 2 cores and 16GB ram - includes a 475GB NVMe SSD. You can perform tens of thousands of iops on this disk easily.

                                                          Obviously, since this SSD is local, its contents are lost if your instance is stopped for any reason, like hardware failure.
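
                                                          (Putting the instance store to work is a couple of commands; the device name varies by instance type, so check lsblk first:)

                                                            lsblk                               # find the ephemeral NVMe device, e.g. /dev/nvme1n1
                                                            mkfs.ext4 /dev/nvme1n1
                                                            mkdir -p /mnt/scratch
                                                            mount /dev/nvme1n1 /mnt/scratch     # fast local scratch; gone if the instance stops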

                                                          1. 3

                                                            Also worth noting there are more options like this since the introduction of the new generation instances with “d” designators, like c5d and m5d, which have local nvme storage and might be a good balance – general purpose compute while still having local storage. The i-type hosts are “I/O optimised”, which solves the storage problem but might leave you without much for the actual build tasks.

                                                            1. 2

                                                              Thanks for the idea, noted in the article.

                                                          1. 6

                                                            This is definitely a problem with custom allocators, but a custom allocator can also be leveraged to detect memory bugs. AFL does this:

                                                            Libdislocator is an abusive allocator that can be loaded as a drop-in replacement for the libc implementation via LD_PRELOAD or AFL_LD_PRELOAD.

                                                            It’s in no way AFL-specific, but it should play pretty well with the fuzzer. Basically, when loaded alongside with any dynamically linked binary (source not needed, but static binaries won’t work), it behaves in a way that maximizes the odds of triggering heap corruption issues in the targeted code:

                                                            1. It allocates all buffers so that they are immediately adjacent to a subsequent PROT_NONE page, causing most off-by-one reads and writes to immediately segfault,
                                                            2. It adds a canary immediately below the allocated buffer, to catch writes to negative offsets (won’t catch reads, though),
                                                            3. It sets the memory returned by malloc() to garbage values, improving the odds of crashing when the target accesses uninitialized data,
                                                            4. It sets freed memory to PROT_NONE and does not actually reuse it, causing most use-after-free bugs to segfault right away,
                                                            5. It forces all realloc() calls to return a new address - and sets PROT_NONE on the original block. This catches use-after-realloc bugs,
                                                            6. It checks for calloc() overflows and can cause soft or hard failures of alloc requests past a configurable memory limit (AFL_LD_LIMIT_MB, AFL_LD_HARD_LIMIT).

                                                            https://groups.google.com/forum/#!topic/afl-users/RW4RF6x9aBc
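
                                                            Using it outside of a fuzzing run is just an LD_PRELOAD away (the .so path depends on where your AFL checkout built it):

                                                              LD_PRELOAD=/path/to/afl/libdislocator/libdislocator.so ./target_binary some_input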

                                                            1. 4

                                                              This idea seems like a different way of expressing cyclomatic complexity.

                                                              1. 1
                                                                memset(ptr, sizeof(*ptr), 0);
                                                                

                                                                Shouldn’t this (the first snippet) be this instead?

                                                                memset(ptr, 0, sizeof(*ptr));
                                                                
                                                                1. 6

                                                                  (that’s the point)

                                                                  1. 4

                                                                    I believe that is indeed the basis of the motivating example.

                                                                    1. 1

                                                                      One of the *sans (maybe ASan with replace_intrin?) will catch this error.

                                                                    1. 3

                                                                      I’m sinking time into some personal projects to improve “developer life” in my vm lab - a local apt/rpm/pypi mirror and some automation around that.

                                                                      1. 1

                                                                        I’d love a container for proxy/mirrors of each. I have never had great results with apt mirrors. I set them up in anger, but then they go sour.