1. 17

    I keep my vote on QEMU (KVM) + qcow2 as the non-distribution-sanctioned packaging and sandboxing runtime. Once ironically, now as faint hope. The thing is, packaging nuances matter little in the grand scheme of things; heck, Android got away with a bastardised form of .zip and an XML file.

    The runtime data interchange interfaces are much more important, and Linux is in such a bad place that even the gargantuan effort Android applied from 5.0 onwards to harden theirs to a sufficient standard wouldn’t be enough. No desktop project has the budget for it.

    You have Wayland as the most COM-like thing in use (tiptoeing around the ‘merits’ of having an asynchronous object-oriented IPC system) – but that can only carry some meta-IPC (data-device) and graphics buffers in one direction, and it snaps like a twig under moderate load. Then you attach PipeWire for bidirectional audio and video, but that has no mechanism for synchronisation, interactive input, or other ‘meta’ aspects. Enter xdg-desktop-portal as a set of D-Bus interfaces to plug the remaining holes.
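
    To make the division of labour concrete: even a simple “open a file” round-trips over D-Bus. A minimal sketch of poking the portal by hand – the FileChooser interface is real, but the exact invocation and output here are from memory and may vary by version:

    $ gdbus call --session \
        --dest org.freedesktop.portal.Desktop \
        --object-path /org/freedesktop/portal/desktop \
        --method org.freedesktop.portal.FileChooser.OpenFile \
        "" "Pick a file" "{}"
    (objectpath '/org/freedesktop/portal/desktop/request/…',)

    Note that the reply is merely a Request object path; the chosen file arrives later as a Response signal on that object – the asynchronous object-oriented IPC in action.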

    Compared to any of the aforementioned, Binder is a work of art.

    1. 9

      Indeed, it seems to me that a lot of companies that jumped on the sandboxing bandwagon have missed a critical point that the sandboxing systems used in the mobile world didn’t. Sandboxing is a great idea, but it’s not going to fly without a good way to talk to the world outside the sandbox. Without one, you either get a myriad of incompatible and extremely baroque solutions which so far are secure by obscurity and more often than not through ineffectiveness (Wayland), or a sandboxing system that everyone works around in order to keep it useful (Flatpak – an uncomfortable number of real-life programs are run with full access to the home dir, rendering the sandbox practically useless as far as user applications are concerned).

      Our industry has a history of not getting these second-order points. E.g. to this day I’m convinced that the primary reason why microkernels are mostly history is that everyone who jumped on the microkernel bandwagon in the 1990s poured a lot of thought into the message passing and ignored the part where the performance will always be shit if the message passing system and the scheduler aren’t integrated. QNX got it and is the only one that enjoyed some modicum of long-term success.

      I’m not convinced we’re going to get the sandboxing part right, either. While I’m secretly hoping that a well-adapted version of KVM + QCow2 is going to win, what I think is more likely to happen is that the two prevailing operating systems today will just retrofit a sandboxing system from iOS, or a clone of it, and dump all “legacy apps” into a shared “legacy sandbox” that’s shielded from everything except a “Legacy Document” folder or whatever.

      1. 2

        or a sandboxing system that everyone works around in order to keep it useful

        That’s the next tier: say you have nice and working primitives, now you just need to design user interfaces that fit these ergonomically. Mobile didn’t even try, but rather just assumed you have the threat modelling capacity of a bag of sand and proposed “just send it to us and we can give it back to you later” – it worked. I am no stranger to picking uphill battles, but designing scenarios for conditioning users to adopt interaction patterns that play well with threat compartmentation is a big no.

        Our industry has a history of not getting these second-order points.

        There is much behind that, especially so in open source. Rewrite ‘established tech’ as ‘hyped platform’. If you’re a well-funded actor, throw money at marketing the thing and repeat until something sticks. People new to the game might even think that whatever is being sockpuppeted this time around is actually a new thing and not a tired rehash.

        I’m not convinced we’re going to get the sandboxing part right, either. While I’m secretly hoping that a well-adapted version of KVM + QCow2 is going to win

        Heavens no. While I believe in whole-system virtualisation for compatibility or performance, the security/safety angle is dead even before decades of hardware lies are uncovered.

        What boggles my mind is that we’re almost dipping into triple-digit counts of well-analysed, big-budget product sandbox escapes, and people still go for that as the default strategy. Post facto bubble wrapping? That’s a parrot pining for the fjords. Design and build for least-privilege separation? Perhaps, but that hardly applies everywhere, and the building blocks in POSIX/Win32/… are … “not ideal”.

        There are some big blockers for the KVM/QCow2 angle: getting good ‘guest additions’ and surrounding tooling for interchange and discovery (search) is a major one. The container-generation solution of ssh and/or web comes to mind as the opposite of good here.

        what I think is more likely to happen is that the two prevailing operating systems today will just retrofit a sandboxing system from iOS, or a clone of it, and dump all “legacy apps” into a shared “legacy sandbox” that’s shielded from everything except a “Legacy Document” folder or whatever.

        That would be the pinnacle of tragedy (until the next one) – the prospect of having all the ergonomics of data sharing between domains on a smartphone with the management and licensing overhead of a desktop.

        1. 1

          I know you’re kidding about the KVM + QCow2 sandboxing but I really think it’s the least bad option that can be built with what we have now and has a chance at industry traction. Guest additions are only a problem if you’re trying to run a kernel built for real hardware, in which case you need “special” drivers. But if one were to devise and implement an ACME Virtual Sandbox QEMU machine, a qemu-based sandboxing engine could just use a kernel with the guest additions baked in. Fine-grained access control is then a matter of mounting the correct devices over a virtual network.
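
          For illustration, a rough sketch of what such an engine’s invocation could look like – the kernel image and paths are hypothetical, but -virtfs/9p and user-mode networking with restrict=on are existing QEMU mechanisms for selectively exposing host resources:

          # Guest kernel with 'guest additions' baked in; only the shared
          # directory, and nothing else from the host, is visible.
          $ qemu-system-x86_64 -enable-kvm -m 2G \
              -kernel sandbox-bzImage -append "console=ttyS0 root=/dev/vda" \
              -drive file=app.qcow2,if=virtio \
              -virtfs local,path=$HOME/Shared,mount_tag=shared,security_model=mapped-xattr \
              -nic user,restrict=on \
              -nographic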

          It’s not a good solution but it does have the potential to provide satisfactory answers to a bunch of thorny problems, not the least of which is dealing with legacy applications that nobody’s going to update for some fancy new sandboxing system. At least in the desktop space, neither of the two major players has any interest in solving much simpler problems. I doubt either of them wants to throw money at solving this problem properly, especially when they both have perfectly good walled gardens that they can sell as security oil. This one’s clunky but at least it’s not Windows Subsystem for Android.

          I’m still waiting for the day when we’ll just sell software along with the computer that runs them, and every computer will be the size of an SD card and we’ll just plug the thing into the deckstation and our sandboxing solution is going to be real, physical segregation :-P.

          (I also just know someone’s gonna figure out how to break that but hey, that’s the kind of fun that got me into computers in the first place!)

          1. 1

            I know you’re kidding about the KVM + QCow2 sandboxing …

            Yes and no. I compartment some things, particularly browsers, by hardware. There’s a cluster. It netboots, gets a ramdisk-friendly image, boots into Chrome, and forwards to my desktop. When I close a ‘tab’, the connection is severed and RST is pulled (unless there’s a browser crash, in which case I collect the dump and some state history – more than a few in-the-wild 0-days have been found that way). That puts the price point for smash-and-grabbing me well beyond what little I am worth, and it opens up a whole lot of offensive privacy. I think this can be packaged and made easy enough for a large set of users.

            Guest additions are only a problem if you’re trying to run a kernel built for real hardware, in which case you need “special” drivers.

            They are used for some things that are most readily available in user-space: integration with indexing services, clipboard, drag and drop. I didn’t do any requirements engineering for arcan shmif. I picked a set of most valuable applications and wrote backends to see what I was missing, then iteratively added that.

            The first round was emulators, because games and speedruns are awesome free tight-timing test sets. The second round was QEMU, for basically the reasons we’re talking about: Linux won’t ever fix its rain forest of broken ABIs, and important legacy applications will break for someone. Using it as a security boundary is another matter – I don’t agree with that. My belief is compatibility only, but if someone thinks it fits their threat model, I won’t judge (openly).

            I’m still waiting for the day when we’ll just sell software along with the computer that runs them, and every computer will be the size of an SD card and we’ll just plug the thing into the deckstation and our sandboxing solution is going to be real, physical segregation :-P.

            Cartridges are coming back in style. One project I have in the sinister pile with an investor pitch deck comes distributed on SD cards targeting certain SBCs.

        2. 1

          Indeed, it seems to me that a lot of companies that jumped on the sandboxing bandwagon have missed a critical point that the sandboxing systems used in the mobile world didn’t.

          I mean, doing any work that involves multiple distinct pieces of software on an Android phone – or, even worse, on iOS – is completely and utterly impractical because of this, so I’m not sure about the “didn’t”. Those are good for consuming content, but for producing it? They only show that even the best sandboxing systems mankind has been able to come up with so far are full-blown failures for doing actual work.

        3. 4

          Sandboxing is a pipe dream; it’ll never work. Reserve hope for packaging (though not very much). And w.r.t. packaging, the primary issue is the stability, not the quality, of the associated APIs, which is why the steam/flibit approach works decently well in practice. It also has the advantage of not being completely opaque, such that it is easier and more sensible to, say, swap in a patched libSDL2.

          I think the reason Android is in better shape than Linux is that it has more clearly defined goals. ‘What do we want to package, and why, and for whom?’ – ‘Apps written in Java, to collect ad revenue, for clueless smartphone users.’

          1. 5

            It depends on how you define sandbox and what you expect from it. VMs are sandboxes, Docker is a sandbox, systemd’s dynamic users are a sandbox, most of my system utilities run in an SELinux sandbox, your browser has at least two sandboxes, etc. The underspecified “sandbox” label is a problem. We don’t lack sandboxes which actually work.
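
            To make one of those concrete: a few of the directives behind systemd’s dynamic-user sandboxes, as a minimal sketch (the unit name and binary are placeholders):

            # example.service (hypothetical unit)
            [Service]
            ExecStart=/usr/bin/example-daemon
            DynamicUser=yes             # throwaway UID, never in /etc/passwd
            ProtectSystem=strict        # read-only /usr, /boot, /etc
            ProtectHome=yes             # no peeking at user data
            PrivateTmp=yes              # private /tmp namespace
            NoNewPrivileges=yes         # no setuid re-escalation
            RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6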

            1. 3

              depends on how you define sandbox and what you expect from it

              A sandbox is something I can use to run untrusted code and limit the scope of harm it can deal. No sandbox implemented in software has this property.

              1. 11

                A sandbox is something I can use to run untrusted code and limit the scope of harm it can deal. No sandbox implemented in software has this property.

                Nor does any implemented in hardware, for that matter… Spectre/Meltdown, enclave escapes, etc…

                1. 5

                  No sandbox implemented in software has this property.

                  A web browser? DOSBOX? QEMU?

                    1. 33

                      There’s a difference between “we have a system doing X and the implementations have bugs” and “we don’t have a system doing X”. If any past or future problem disqualifies an approach, then we don’t have lightbulbs, cars, agriculture, … They all have failure modes. I don’t think that hard stance is practical.

                      Modern sandboxes moved the bar from “just put a long string somewhere” (1990s) to “you need a lot of skills or $100k-s to get a temporary exploit”. And we’re not slowing down on improvements.

                      1. 3

                        There’s a difference between “we have a system doing X and the implementations have bugs” and “we don’t have a system doing X”. If any past or future problem disqualifies an approach, then we don’t have lightbulbs, cars, agriculture, … They all have failure modes. I don’t think that hard stance is practical.

                        A container’s fundamental purpose is to prevent infection of the system and any other systems that can possibly be linked, and thus to prevent exfiltration of secrets or abuse / damage to the machine. The fact is that this is fundamentally impossible to guarantee and very, very costly to assume. While cars have failure modes, they still mostly get you from A to B – they at least mostly do the job you intended – whereas containers do not and cannot prevent malicious attacks. At best this is like everyone having a car that will only take you halfway to where you are driving, and at worst it’s like having half a dam – it completely and utterly defeats the point of having the dam in the first place. Sure, it “sometimes works”, but I’m not very sure I would want to use it, and I certainly wouldn’t sell it to anyone else as a good thing.

                        Modern sandboxes moved the bar from “just put a long string somewhere” (1990s) to “you need a lot of skills or $100k-s to get a temporary exploit”.

                        Citation needed. All/Most of the following attacks are fundamentally “long string attacks”, i.e. buffer overflows.

                        Attacking containers is not very different from traditional servers or virtual machines. You can use well-known attacks to exploit vulnerabilities found in a container, for example: Buffer Overflows, SQL Injections or even default passwords. The point here is that you can initially get remote code execution (RCE) in containers using traditional techniques.

                        https://morphuslabs.com/attacking-docker-environments-a703fcad2a39

                        The exploit makes use of specially crafted image files that bypass the parsing functionality of a delegates feature in the ImageMagick library. This capability of ImageMagick executes system commands that are associated with instructions inside the image file. Escaping from the expected input context allows an attacker to inject system commands.

                        https://hackerone.com/reports/1332433

                        https://portswigger.net/daily-swig/vulnerabilities-in-kata-containers-could-be-chained-to-achieve-rce-on-host

                        https://www.trendmicro.com/en_us/research/21/b/threat-actors-now-target-docker-via-container-escape-features.html

                        1. 7

                          A container’s fundamental purpose is to prevent infection of the system and any other systems that can possibly be linked, and thus to prevent exfiltration of secrets or abuse / damage to the machine.

                          Security is not a binary state. You work with a given threat model, then work to prevent specific classes of attacks/vulnerabilities. There’s no tool that “provides security”. Containers remove some classes, and add some new issues to think about. You want to isolate the filesystem and network between processes on the same host - containers will help you. You want to mitigate kernel exploits, SQL injections, or people storming your datacenter - containers won’t help you.

                          Secure / not secure are not real states of the system. You need to define what kind of secure we’re talking about and what abuse is. What you define as abuse may be my business model.

                          Which leads to:

                          whereas containers do not and cannot prevent malicious attacks

                          What I’m trying to say is: what you mentioned is not the fundamental purpose of containers, and trying to discuss things in those terms is a mistake at step 1. If you’re interested in learning more about these issues, I recommend reading about threat modelling.

                          Citation needed. All/Most of the following attacks are fundamentally “long string attacks”, i.e. buffer overflows.

                          I don’t know of anything that summarises the whole few decades of changes, but in short: stack overflows are largely dead thanks to stack protectors, shadow stacks and many compiler improvements; heap overflows are much harder due to (K)ASLR and various layout mitigations, and almost dead due to W^X; ROP is still around, but CET is getting popular and control-flow integrity in general is a thing we talk about now. The most popular breakouts from V8 these days are double-frees / dangling pointers and type confusion, as far as I remember from browsing CVEs. These are multi-step exploits which are often still not reliable or immediate, since they require address leaks first and there’s some luck involved. (And then you need a separate browser sandbox escape.) Either way, basic overflows are stone-age tools at this point and rarely work.

                          https://github.com/microsoft/MSRC-Security-Research/blob/master/presentations/2018_02_OffensiveCon/The%20Evolution%20of%20CFI%20Attacks%20and%20Defenses.pdf describes a part of that in better detail.

                          As for the effort/price, Zerodium was advertising a couple of years ago that it wanted to buy a Windows Chrome RCE + sandbox escape for $250k.
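
                          To make it concrete, most of that mitigation list is a compiler/linker switch away these days – a sketch with current GCC/Clang spellings (availability varies by toolchain and target):

                          # stack canaries, fortified libc calls, ASLR-friendly PIE,
                          # full RELRO, non-executable stack, CET (shadow stack + IBT):
                          $ gcc -O2 -fstack-protector-strong -D_FORTIFY_SOURCE=2 \
                                -fPIE -pie -Wl,-z,relro,-z,now -Wl,-z,noexecstack \
                                -fcf-protection=full -o hardened app.c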

                          1. 5

                            This is wholly antithetical to defense in depth. I don’t like Linux containers, and Docker in particular has had a long history of bugs, common misconfigurations and footguns; but sandboxing turns the attacker’s job from just exploiting the application into exploiting the application and then pivoting to a container escape, if one is even possible.

                            One thing to mention in the case of sandboxing an application in a container is that you don’t necessarily get a whole system like you would if the app were running on an OS installed on bare metal. Commonly at my company, and I’m sure at many others, we use distroless images and run the applications in these containers as non-root. This severely limits what an attacker has in their toolkit to exploit a container escape.
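
                            As a sketch of what that looks like in practice (the image tags and binary are placeholders; distroless images do ship a nonroot user):

                            # Final stage has no shell, no package manager, no coreutils -
                            # precious little for an attacker to live off the land with.
                            FROM golang:1.21 AS build
                            WORKDIR /src
                            COPY . .
                            RUN CGO_ENABLED=0 go build -o /app .

                            FROM gcr.io/distroless/static-debian12:nonroot
                            COPY --from=build /app /app
                            USER nonroot:nonroot
                            ENTRYPOINT ["/app"]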

                            Attacking containers is not very different from traditional servers or virtual machines. You can use well-known attacks to exploit vulnerabilities found in a container, for example: Buffer Overflows, SQL Injections or even default passwords. The point here is that you can initially get remote code execution (RCE) in containers using traditional techniques.

                            The next paragraph

                            For me, what differentiates containers from others technologies during a pentest engagement is the Post-Exploitation phase. Docker environments can be very dynamic (containers may be created and destroyed at any time). This can be challenging for attackers as gaining persistence may be difficult. Some of the containers might also not have any services exposed (so how can we access them?).

                            The post-exploitation phase is important, since as I said, the goalpost has now shifted. The question becomes how do I pivot from the application I exploited to one of these other containers (that as the article mentions, may not have an exposed service).

                            https://hackerone.com/reports/1332433

                            idk what this has to do with container weaknesses. They left a container management app exposed to the world unconfigured. The application has to have access to the docker runtime to do its job. If we wanna talk about a bare metal equivalent, this would be like leaving cPanel open to the world. Same shit.

                            https://portswigger.net/daily-swig/vulnerabilities-in-kata-containers-could-be-chained-to-achieve-rce-on-host

                            Quoting the researcher that found the exploits:

                            “Containers are only as secure as their configuration, and a simple way to improve their security is to drop unused privileges.”

                            What they’re encouraging as a solution is to aggressively drop privileges that the sandbox doesn’t need. However, this is a problem with Kata Containers and not with the concept of sandboxing itself. The researcher here is arguing for more aggressive sandboxing, not for doing away with the idea.

                            However, we’re currently seeing something completely different — a payload specifically crafted to be able to escape privileged containers with all of the root capabilities of a host machine. It’s important to note that being on Docker doesn’t automatically mean that a user’s containers are all privileged. In fact, the vast majority of Docker users do not use privileged containers. However, this is further proof that using privileged containers without knowing how to properly secure them is a bad idea.

                            The attack here is on a specific Docker configuration, and that configuration is generally quite rare afaik (very few times have I ever run privileged docker containers, and we don’t run them in prod). Moreover, the researcher again argues here that more aggressive sandboxing is needed, not less.
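
                            And dropping those privileges is cheap. A sketch of the deny-by-default flags involved (current docker CLI options, to the best of my knowledge):

                            $ docker run --cap-drop=ALL \
                                --security-opt no-new-privileges \
                                --read-only \
                                --pids-limit 256 \
                                --user 65534:65534 \
                                myimage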

                            I’m not sure how any of these points to doing away with sandboxing altogether. I’d argue that instead they point to problems with specific implementations of Linux containers. And I’d agree! I think the piecemeal way in which you construct Linux containers from various primitives makes it very easy to build insecure containers, and many implementers (Docker obviously, and I guess Kata Containers here as well) have fallen into that trap. I don’t think that this is damning of sandboxing as a concept, though.

                2. 1

                  ‘Apps written in java, to collect ad revenue, for clueless smartphone users.’

                  Basically the only apps I use on my work-issued Android phone are 2FA apps (Okta, Google etc). These at least are considered ok to run on the platform.

              1. 3

                Unfortunately, I believe this domain runs afoul of the rules for the .cat domain

                1. 10

                  The non-user-generated content is translated into Catalan if your browser’s language is set to “ca”.

                  1. 1

                    so does https://www.nyan.cat/ and that has been around for 10 years. I guess nobody really cares?

                    1. 2

                      https://twitter.com/huy/status/373161206317477888?t=3pap76DQ2Dk7n4tHMZz69Q&s=19

                      They do seem to audit periodically (this is a tweet from the creator of nyan.cat).

                      1. 1

                        https://www.nyan.cat is available in Catalan, if you use the menu in the upper left corner of the page. But I bet nobody really cares :)

                    1. 4

                      Time to break out UTF-EBCDIC

                      1. 2

                        Putting my EBCDIC hat back on: at least on i, no one uses UTF-EBCDIC. It’s much easier to use UCS-2 or UTF-8, considering tagging a column as such is trivial and RPG can use it nowadays. (Maybe it’s different on z, or god forbid, something like BS2000.)

                      1. 19

                        Unfortunately, being nice won’t get you a lot of money in the modern corporate workplace.

                        Being nice means some people will think you are a weakling.

                        It’s really important to know how and when to be nice and how and when to be assertive (or whatever the opposite of nice is in this context…).

                        For every “guide to being nice” there’s a career article in the vein of “how to get what you want: step 1. stop saying yes all the time”

                        Just as different programming languages can be suited for different jobs, different personalities are suited to dealing with different incarnations of the corporation/society. A friendly personality is great for making friends. But that should not be your primary goal in a workplace.

                        Developers who don’t have social skills and usually seem upset or angry.

                        This has nothing to do with being nice. Telling people who don’t have social skills to ‘just be nice’ is like telling starving people to ‘just be rich’

                        Developers who undermine each other at every turn.

                        Necessary in many modern workplaces in order to compete for limited upward potential.

                        Generally defensive developers.

                        This has more to do with culture around mistakes. Not the fault of the individual.

                        Developers who think that other departments of the company are stupid, that they don’t know what they want.

                        They are stupid. At least for this limited domain. If you are a knowledge worker you rely on other people being stupid in your domain. So to assume that they are not would just not make any sense.

                        1. 38

                          I think a point of this blog post was to be polite when talking to and about your colleagues. Doing that does not imply in any way that you are a weakling. It makes the conversation better and you are more likely to come up with good solutions, in my experience.

                          1. 9

                            My experience in the corporate workplace matches @LibertarianLlama’s post very much, albeit in a somewhat nuanced way (which I suspect is just a matter of how you present your ideas at the end of the day?).

                            For example, being polite when talking to and about your colleagues is important, primarily for reasons of basic human decency. (Also because “being professional” is very much a high-strung nerve game, where everyone tries to get the other to blink first and lose it but that, and how “professional” now means pretty much anything you want it to mean, is a whole other story.)

                            However, there are plenty of people who, for various reasons, will be able to be jerks without any kind of consequences (usually anyone who’s a manager and on good terms with either their manager, or someone who wants to undercut their manager). These people will take any good-faith attempt to keep things nice even in the face of unwarranted assholeness as a license to bring on the abuse. Being polite only makes things worse, and at that point you either step up your own asshole game, or – if that’s an option – you scramble for another job (which may or may not be worse, you never know…).

                            Also, all sorts of things, including this, can be taken as a sign of weakness under some circumstances. Promotions aren’t particularly affected by that. While plenty of incompetent people benefit from favoritism, everyone who’s in an authority position (and therefore also depends on other people’s work to move further up) needs at least some competent underlings in order to keep their job. So they will promote people they perceive as “smart”, whether those people are also perceived as “weak” or not. But it does affect how your ideas are treated and how much influence you can have. I’ve seen plenty of projects where “product owners” (their actual titles varied, but you get the point) trusted the advice of certain developers – some of them in different teams, or different departments altogether – to the point where they took various decisions against the advice of the lead developers on said projects, sometimes with disastrous consequences. It’s remarkably easy to have the boat drift against your commands, and then get stuck with the bill when it starts taking on water.

                            Basically, I think all this stuff the blog post mentions works only in organisations where politeness, common sense etc. are the modus operandi throughout the hierarchy, or at least through enough layers of the hierarchy to matter. In most modern corporate workplaces, applying this advice will just make you look like that one weirdo who talks like a nerd.

                            (inb4: yes yes, I know your experience at {Microsoft|Google|Apple|Facebook|Amazon|Netflix|whatever} wasn’t like that at all. My experience at exactly one corporate workplace wasn’t like that either, but I knew plenty of people in other departments who were racking up therapy bills working for the same company. Also, my experience at pretty much all other corporate workplaces was exactly like that, and the only reason I didn’t rack up therapy bills is that, while I hate playing office politics because it gets in the way of my programming time, if someone messes with me just to play office politics or to take it out on someone, I will absolutely leave that job and fuck them up and torch their corporate carcass in the process just for the fun of it).

                            Edit: I guess what I’m saying is, we all have a limited ability to be nice while working under pressure and all, and you shouldn’t waste it on people who will make a point of weaponizing it against you, even if it looks like the decent thing to do. Be nice but don’t be the office punching bag, that doesn’t do you any good.

                            1. 3

                              I mean, in the case you’re describing, I think it’s still valuable to act nice, like this post describes. You definitely gain more support and generate valuable rapport by being nice, rather than being an asshole. Oftentimes, being able to do something large that cuts across many orgs requires that you have contacts in those other orgs, and people are much more willing to work with you if you’re nice.

                              Nice should be the default. However, when you have to work with an asshole, I think it’s important to understand that the dynamic has changed and that you may need to interact with them differently from other coworkers. Maybe this means starting nice, seeing that they will exploit that, and then engaging far more firmly in the future. Maybe you start with trying to empathize with their position (I don’t mean saying something like “I see where you’re coming from and I feel blah blah,” but by speaking their language, “Yeah dude, this shit sucks, but we have to play ball” or whatever).

                              In general, the default should always be nice, but nice does not necessarily mean not being firm when required (someone wants to explore a new technology, but your team is not staffed for it and you have other priorities the team needs to meet), and nice does not mean you should put on social blinders and interact with everyone the same way. Part of social interaction is meeting people where they are.

                              1. 4

                                Nice should be the default.

                                Oh, yeah, no disagreement here. We have a word to describe people who aren’t nice by default and that word is “asshole”. You shouldn’t be an asshole. Some people are, and deserve to be treated as such. Whether they’re assholes because they just have a shit soul or they’re pre-emptively being nasty to everyone defensively, or for, um, libertarian reasons, makes very little difference IMHO.

                            2. 14

                              The benefits of being nice find no purchase in the libertarian’s mentality. Keep this in mind when you encounter them. Adjust your approach and expectations accordingly. More generally, try to practice what I call “impedance matching” with them (and with all people). What I mean by that is (1) understand their personality’s API and (2) customize your interface accordingly. Meet them where they are. Then there will be fewer internal reflections in your signaling. Of course, if they proudly undermine you, don’t think you can change them. You’ll have to just keep your chin up and route around that damage somehow.

                              1. 1

                                This corresponds to a very personal and painful lesson that I have recently learned. I would caution against stereotypes, but I’m a bit beaten down by the experience.

                            3. 30

                              Hard no. I’ve tried to be nice in my 35-year career (at least, I’ve never tried to undermine or hurt others) and have nevertheless accumulated what many would see as “a lot of money”. (And I’d have a lot more if I hadn’t kept selling Apple stock options as soon as they vested, in the ‘00s…) Plenty of “nice” co-workers have made out well too.

                              Telling people who don’t have social skills to ‘just be nice’ is like telling starving people to ‘just be rich’

                              The advice in that article is directly teaching social skills.

                              Necessary in many modern workplaces in order to compete for limited upward potential.

                              Funny, I’ve always used productivity, intelligence and social skills to compete. If one has to use nastiness, then either one is lacking in more positive attributes, or at least is in a seriously f’ed up workplace that should be escaped from ASAP.

                              1. 19

                                Unfortunately, being nice won’t get you a lot of money in the modern corporate workplace.

                                I’ve been at a workplace like yours but at my current one most of the most-senior and presumably best-paid folk are incredibly nice and I aspire and struggle to be like them. I’ve learned a lot trying to do so and frankly not being nicer is one of the things holding me back. Consider changing yourself or workplaces, I think you’ll be surprised. I’m disappointed by the “but I have to be an asshole” discourse here, part of growing up professionally for me was leaving that behind.

                                Unfortunately, that version of me also wouldn’t have listened to this advice and would fall into this “what’s with all the unnecessary verbosity?” trap, so I don’t know that this will actually land for anybody.

                                1. 9

                                  I did not expect you to be a fan of the modern corporate workplace.

                                  I recall one time at a former employer where I pissed off my managers by pointing out to an upper executive that it was illegal, under USA labor laws, to instruct employees to not discuss salaries. I was polite and nice, but I’m sure you can imagine that my managers did not consider my politeness to be beneficial given that I had caught them giving unlawful advice.

                                  If you want to be assertive, learn when employers cannot retaliate against employees. I have written confident letters to CEOs, asking them to dissolve their PACs and stop interfering in democracy. This is safe because it is illegal for employers to directly retaliate; federal election laws protect such opinions.

                                  It is true that such employers can find pretexts for dismissal later, but the truth is that I don’t want to be employed by folks who routinely break labor or election laws.

                                  1. 12

                                    It is true that such employers can find pretexts for dismissal later, but the truth is that I don’t want to be employed by folks who routinely break labor or election laws.

                                    This is one of the best pieces of advice that a young tech worker can receive, and I want to second this a million times, and not just with regard to PACs and federal election laws. Just a few other examples:

                                    • Don’t cope with a toxic workplace, leave and find a place where you won’t have to sacrifice 16 hours a day to make it through the other 8.
                                    • Don’t “cope with difficult managers”, to quote one of the worst LinkedIn posts I’ve seen. Help them get their shit together (that’s basic human decency, yes, if they’re going through a tough patch and unwittingly taking it out on others, by all means lend a hand if you can) but if they don’t, leave the team, or leave the company and don’t sugar coat it in the exit interview (edit: but obviously be nice about it!). Let the higher-ups figure out how they’ll meet their quarterly objectives with nobody other than the “difficult managers” that nobody wants to work with and the developers who can’t find another job.
                                    • Don’t tolerate shabby workplace health and safety conditions any more than companies tolerate shabby employee work.
                                    • Don’t tolerate illegal workplace regulations and actions (including things like not discussing your salary) any more than companies tolerate employees’ illegal behaviour.

                                    Everyone who drank the recruiting/HR Kool-Aid blabbers about missing opportunities when they hear this but it’s all bullshit, there are no opportunities worth taking in companies like these. Do you really think you’ll have a successful career building amazing things and get rich working in a company that can’t even get its people to not throw tantrums like a ten year-old – or, worse, rewards people who do? In a company that’s so screwed up that even people who don’t work there anymore have difficulty concentrating at work? In a company that will go to any lengths – including breaking the law! – to prevent you from negotiating a fair deal?

                                    I mean yes, some people do get rich working for companies like these, but if you’re a smart, passionate programmer, why not get rich doing that instead of playing office politics? The sheer fact that there are more people getting treatment for anxiety and PTSD than people with senior management titles at these companies should be enough to realize that success in these places is a statistical anomaly, not something baked in their DNA.

                                    Obviously, there are exceptions. By all means put up with all that if it pays your spouse’s cancer treatment or your mortgage or whatever. But don’t believe the people who tell you there’s no other way to success. It’s no coincidence that most of the people who tell you that are recruiters – people whose jobs literally depend on convincing other people to join their company, but have no means to enact substantial change so as to make that company more attractive.

                                1. 8

                                  User beware. The hand-rolled recipes showcased here are better than the average hand-rolled recipes I see on many projects, but they still leave a lot to be desired. For example, mkdir dir; cp foo dir is not nearly as robust as install -D -t dir foo. But that’s trivial compared to the bash-specific syntax in an sh file that should be plain POSIX, or the fact that the simulated argument handling doesn’t work quite the way the real ./configure script generated by autotools does.
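
                                  The difference sounds cosmetic but isn’t; a small sketch of what install(1) buys you over the naive pair (paths are examples):

                                  # Naive: fails if dir already exists, creates no parents,
                                  # and the copy inherits whatever mode foo happens to have.
                                  mkdir dir; cp foo dir

                                  # install: creates leading directories (-D), sets an explicit
                                  # mode, and replaces the destination instead of writing into it.
                                  install -D -m 0755 -t dir foo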

                                  As ugly as it can be sometimes, using autotools (autoconf, automake) is far more robust than hand-rolled scripts will ever be, precisely because it has such a long history and such wide usage that all the edge cases not considered by any one person are already taken care of. And in the end, the amount of code you actually have to write and maintain would be less than for the examples given.

                                  1. 7

                                    IME many autotools-using projects have failed to build for me, while a 12-line makefile written by hand could build the same project fine.
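
                                    For scale, the kind of thing I mean – a hypothetical 12-line Makefile for a small C project (names are placeholders; recipe lines are tabs):

                                    CC     ?= cc
                                    CFLAGS ?= -O2 -Wall
                                    PREFIX ?= /usr/local
                                    OBJS   := main.o util.o

                                    prog: $(OBJS)
                                    	$(CC) $(CFLAGS) -o $@ $(OBJS)

                                    install: prog
                                    	install -D -m 0755 -t $(DESTDIR)$(PREFIX)/bin prog

                                    clean:
                                    	rm -f prog $(OBJS)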

                                    1. 6

                                      2 points.

                                      1. That is almost certainly not autotools’ fault, but it’s so confusing to get started with that it doesn’t always get set up right, and that can certainly make it break for users.

                                      2. Your 12 line makefile may work for you, but it likely doesn’t work for systems that are not like yours. It probably leaves distro packagers writing their own build routines and different OS layouts hacking around the over-simplified model shown in 12 lines. And I say this as someone who wrote THOUSANDS of lines of hand-brewed makefiles before finally turning to autotools and finding that it solved so many of the edge cases I had been working around that it really was useful.

                                      1. 7

                                        That is almost certainly not autotools’ fault, but it’s so confusing to get started with that it doesn’t always get set up right, and that can certainly make it break for users.

                                        I’d argue that if it is confusing/hard to use, it is the fault of the tool.

                                        1. 3

                                          Your 12 line makefile may work for you, but it likely doesn’t work for systems that are not like yours.

                                          How important is it that a given piece of software be buildable on the maximum number of architectures?

                                          Where is the line beyond which complexity to achieve portability is more cost than benefit? Is there such a line?

                                          1. 5

                                            How important is it that a given piece of software be buildable on the maximum number of architectures?

                                            Just as important as it is to have access to more systems than x86_64, and to run more operating systems than Windows (or, whatever, Aarch64 and macOS). Portability isn’t just about being able to build libpng for OpenVMS in 2021; it’s also about being able to build whatever is going to be important, under whatever operating system is going to be important, in 2025.

                                            You could presumably drop support for the former without causing much of a fuss, but autotools & friends allow us to support any number of architectures, whether the maximum or not. If we start to package everything with 12-line makefiles, we might as well all buy x86_64 laptops running whatever Linux distro the OP is running and give it up. Then in 2031 we can all gather here and whine about how the next best operating system supports a direct-to-brain interface but is held back by the fact that most software only works with Ubuntu and Docker (or, realistically, Windows and macOS…).

                                            1. 1

                                              Just as important as it is to have access to more systems than x86_64, and run more operating systems than Windows (or, whatever, Aarch64 and macOS).

                                              I don’t think I understand your point. I agree that it’s important to support more than a single platform. But my claim is that there are diminishing returns beyond a certain point. Do you dispute that? If not, where is that point? I guess I’m claiming it’s somewhere around the top 5-10 platforms in terms of userbase. Do you think it’s more than that?

                                              1. 2

                                                My point is two-fold (sorry, it may have been useful to be a little more clear).

                                                First, “the maximum number of architectures” is just another way of saying “any architectures one needs”. Any system, no matter how useful, including Linux, started by not being even close to the top 5-10 in terms of userbase. If you restrict your build system’s capability to supporting only the leading two or three platforms of the day (realistically, a short Makefile will at most get you Linux, BSDs, and maybe Cygwin; mingw & friends need quite a few hacks for non-trivial programs, so 5-10 is already way more than you can realistically manage for non-trivial software), you’re dooming it to obsolescence as soon as the next different-enough thing hits the market.

                                                I bet that most of the programs that use autotools don’t actually run (at all, let alone well) on all the platforms that autotools can build them on. That’s only as big a problem as the users of said software make it to be, though; presumably nobody is losing too much sleep over not being able to run qemu on 4.4 BSD. However, if and when you need to, you’re at least going to be able to build it, and you can go right ahead and do the substantial porting work you need. Clunky though they may be, programs built using autotools are gonna compile on Fuchsia 3.1 for AR Workgroups or whatever pretty much out of the box (barring any platform-specific things that need code, not build-system incantations, of course). Beautiful, minimalistic, hand-rolled, organic Makefiles are gonna be a curse as soon as Ubuntu 20.04 LTS is a distant memory instead of something that was pretty cool five years ago.

                                                It’s not like we haven’t seen this already: autotools became a thing because, terrible though it may have been even then (like, m4, seriously?), it was still better than managing Makefiles for even two or three commercial Unices (which meant way more than just the OS back then – also different libcs, different C compilers and so on). It didn’t come up in a vacuum, either; it was based on (or rather tried to improve on) a bunch of prior art (Metaconfig, imake).

                                                Second, and strictly regarding the number of platforms: in my experience, while the tooling required to build on 1-2 platforms vs. 5-10 platforms is massively different, past that point, the complexity of the tooling itself is pretty much the same, whether you support 5-10 platforms or 50-100. The number of platform-specific quirks obviously increases with the number of platforms, but extending a tool that already supports 10 platforms so as to allow it to support 100 is largely an exercise in implementing platform-specific hacks, whereas extending the tooling (“12-line makefile”) to support 10 platforms instead of 2 basically amounts to writing a new build system.

                                                So while it takes a lot of effort to go from supporting one platform to 5-10, there’s not that much effort in ensuring your software can be built on more than that. If you’ve done that, you might as well go all the way and use a system that has the traction and popularity required to work everywhere for the foreseeable future. autotools isn’t the only one; CMake is another, for example. It’s terrible, but lots of popular software is built with it. Anyone who wants their platform to be relevant will have to ensure CMake-built software runs on it, otherwise they’re gonna be stuck without some leading DBMSs, web browsers and so on.

                                                1. 1

                                                  I think I understand your point here. But I think the difference between our positions is that you’re judging autotools by its asserted capabilities, and I’m judging it by its practical drawbacks. In my experience autotools fails more often than it succeeds, and it delivers net-negative value to projects that use it. I’m sure your experience is different!

                                                  If you restrict your build system’s capability to supporting only the leading two or three platforms of the day (realistically, a short Makefile will at most get you Linux, BSDs, and maybe Cygwin, mingw & friends need quite a few hacks for non-trivial programs – 5-10 is already way more than you can realistically manage for non-trivial software), you’re dooming it to obsolescence as soon as the next different enough thing hits the market.

                                                  I don’t think I buy this. Software is a living thing, and if my project is important enough to stand the test of time, it will grow build capabilities for new architectures as they become en vogue.

                                                  1. 2

                                                    I’m sure your experience is different!

                                                    I think so. While it’s definitely far from my favourite build system, I can’t remember the last time I’ve seen it fail. However, it’s so widespread that it’s very easy to be lucky and only encounter it in scenarios where it works fine.

                                          2. 3

                                            Your 12 line makefile may work for you, but it likely doesn’t work for systems that are not like yours.

                                            When someone else’s 12 line makefile doesn’t work, it’s clear how to fix it. The same isn’t true for autotools, cmake, or many of the other more complicated alternatives.

                                            1. 1

                                              tbh I don’t actually agree with this. make, unfortunately, is quite difficult to debug without turning to external tools. Honestly, I find both cmake and autotools a lot easier to debug than a raw Makefile, since they have debugging facilities.

                                              That said, I don’t turn to cmake or autotools until I hit the level of complexity or reach where that matters, and most of the time just use simple Makefiles.

                                            2. 2

                                              If it is really 12 lines, it is probably less annoying for distros to maintain their own version than to maintain a command line of ./configure .... If something breaks, it’s much easier to fix a Makefile with 12 lines than the whole auto* setup.

                                              1. 1

                                                Not to mention default rules which are good and can change to fit your platform without you trying or caring.

                                              2. 1

                                                Your 12 line makefile may work for you, but it likely doesn’t work for systems that are not like yours.

                                                Absolutely – I always assume pkg-config and other such things are available, which of course autotools will try to paper over for you.

                                                1. 2

                                                  Can you build a proper shared library on AIX without using libtool? I’d like to see your makefile try.

                                                  1. 4

                                                    No idea, and also no interest from me :) If someone wants to build something on AIX that’s cool I guess, and they probably should be using autotools.

                                                    1. 1

                                                      You seem to be implying that using -G and ar isn’t enough. Care to elaborate on why not?

                                                      1. 4

                                                        This is worthy of a blog post, but while technically you CAN just go gcc -shared -o libhober.so libhober.c and call it a day, there’s a lot of subtleties. Specifically:

                                                        • Your binary will export everything by default because symbol visibility sucks.

                                                        • Your binary is one architecture only - AIX actually has fat libraries, but not binaries. It works by making libraries .a archives, and you can import based on member.

                                                        • To circle back to a point, you can also put a file in the archive (shr.imp/shr_64.imp – it is very tempting to call them shrimp files) indicating what should be imported, and its name to link against (we’ll come back to that later).

                                                        • Want versioning? GNU fucked it up by coming up with two schemes: libhober.a(libhober.so.69) and libhober.so.69 (with an implicit shr.o/shr_64.o depending on arch). The former scheme is insane to manage via a package manager, so everyone sane went for the latter.

                                                        • Problem: if you do cc -lhober, it’ll link against libhober.so, not libhober.so.69. This works but is subtly wrong (what if you don’t have the devel package installed, or the soname bumps for a reason?), and it’s easy not to catch because it works in development.

                                                        • To work around that, your shrimp file can declare the real name of the library to link against.

                                                        This kinda makes sense but is unnecessarily brittle. It does make for a great smoke test of how committed to portability you are, though. libtool is like violence: both the cause of and the solution to all of life’s problems.
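
                                                        For the morbidly curious, the sane-scheme version of the dance looks roughly like this (hedged: flag spellings from memory of AIX’s ld/ar, and libhober is made up):

                                                        # 64-bit libhober using the libhober.so.69 scheme;
                                                        # hober.exp is the curated export list.
                                                        $ xlc -q64 -c hober.c
                                                        $ ld -b64 -G -bnoentry -bE:hober.exp \
                                                            -o libhober.so.69 hober.o -lc
                                                        $ ln -s libhober.so.69 libhober.so   # dev-time convenience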

                                                        1. 1

                                                          Your binary is one architecture only - AIX actually has fat libraries, but not binaries. It works by making libraries .a archives, and you can import based on member.

                                                          I mentioned using “-G” and “ar” to specifically create AIX-like shared libraries, which doesn’t suffer from this issue.

                                                          Some of the others may be issues if you need some of those features, but I don’t see how that really precludes you from building a “proper shared library” that doesn’t use those features with just the compiler and archive tools and a really simple makefile.

                                                          1. 1

                                                            I think most people using the really simple makefiles probably aren’t using -G and ar to make a shared library (do you think Makefile curators give a shit about AIX?), and now you have to curate your shrimp file for exports (careful: it’s REALLY easy to accidentally end up exporting strcpy if you export everything, because strcpy is only a static object with AIX libc :) ). Hell, stuff like CMake barely does it.

                                                            1. 1

                                                              The original position was that shared libraries on AIX are hard or impossible without libtool - but my position is, a simple Makefile can handle that just fine (if the author wants to support AIX explicitly, of course). Now, using libtool may give you different platforms like AIX automatically, but that doesn’t support the claim that a simple Makefile couldn’t build “proper shared libraries” on AIX, if desired.

                                                              strcpy is only a static object with AIX libc

                                                              Are you sure?

                                                              $ dump -Tv test.a
                                                              
                                                              test.a[test.o]:
                                                              
                                                                                      ***Loader Section***
                                                              
                                                                                      ***Loader Symbol Table Information***
                                                              [Index]      Value      Scn     IMEX Sclass   Type           IMPid Name
                                                              
                                                              [0]     0x0000fc00    undef      IMP     XO EXTref   libc.a(shr.o) ___strcpy
                                                              [1]     0x20000244    .data      EXP     DS SECdef        [noIMid] f
                                                              
                                                              1. 1

                                                                Perhaps it’s changed in 7.2; I remember plenty of versions where symbols like strcpy ended up exported by shared libraries that naively exported everything, instead of using a curated export list or generating one with nm on the objects.

                                            1. 3

                                              At first I was in love with the idea of pattern matching in Python, but this article has put me off a bit, in seeing that it’s not as useful as it is in other languages that I write in (Rust, namely).

                                               In thinking about it and discussing this article with an ex-coworker, it began to remind me of the smart match operator and given/when in Perl, introduced in 5.10 (see https://metacpan.org/release/DAPM/perl-5.10.1/view/pod/perlsyn.pod#Switch-statements and the following section “Smart matching in detail” for how it worked); four short years after it was introduced, it was marked as experimental in 5.18.

                                              Obviously, Perl’s smart match is a lot more surprising than Python’s pattern matching, but this article does highlight that it’s a complex feature with some surprising syntax that breaks expectations. This makes me wonder if it’ll go the way of Perl’s smart match/given/when as a result.
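
                                               To illustrate the kind of expectation-breaking syntax in question (a hypothetical sketch, not necessarily the article’s example): in a match statement, a bare name is a capture pattern, not a comparison against a constant.

                                               NOT_FOUND = 404

                                               def describe(status):
                                                   match status:
                                                       case NOT_FOUND:  # capture pattern: rebinds NOT_FOUND and matches anything
                                                           return f"not found? NOT_FOUND is now {NOT_FOUND}"

                                               print(describe(200))  # prints: not found? NOT_FOUND is now 200

                                               The usual fix is to match against a literal or a dotted name (e.g. case HTTPStatus.NOT_FOUND), since dotted names are compared as values rather than captured.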

                                              1. 11

                                                Gross, right? m4 is great for macros and includes. Not super fun for general programming. But, like immigrants, it gets the job done.

                                                Uh… what?

                                                1. 22

                                                  I think it’s a reference to a lyric from Hamilton?

                                                  [LAFAYETTE] Immigrants:

                                                  [HAMILTON/LAFAYETTE] We get the job done

                                                  https://genius.com/7862398

                                                  1. 2

                                                    Huh. Cool!

                                                  2. 9

                                                    I don’t know what’s more offensive: making light of the exploitation of immigrant labor, or referencing Hamilton.

                                                    1. 3
                                                      1. 1

                                                        Immigrants work hard?

                                                        1. 1

                                                          I think he is referencing the fact that immigrants are in many places known to be very hard workers and will “get the job done”.

                                                        1. 2

                                                          A small note: UUID structure is version-dependent. That is, if you use UUIDv1, v3 or v5, there’s inherent structure that helps ensure that there aren’t collisions between machines.

                                                           UUIDv4 is almost completely random, with barely any structure. Functionally, there’s no difference from just using randomblob(16). The structure is described in RFC 4122. So essentially, all bits but bits 4, 6 and 12-15 are randomly set, and the others just indicate the UUID version. Some implementations (like Python’s) just set the entire value randomly, ignoring the version bits.

                                                           If you want to use a UUID that does have some structure, you can use one of the uuid1(), uuid3() or uuid5() functions listed in the Python uuid documentation.
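
                                                           For instance (a minimal sketch): v3/v5 UUIDs hash a namespace plus a name, so they’re deterministic rather than random:

                                                           import uuid

                                                           # uuid5 is a SHA-1 hash of namespace + name: the same inputs
                                                           # always produce the same UUID, unlike the random uuid4()
                                                           a = uuid.uuid5(uuid.NAMESPACE_DNS, "example.org")
                                                           b = uuid.uuid5(uuid.NAMESPACE_DNS, "example.org")
                                                           assert a == b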

                                                          1. 1

                                                            Somewhat confusing comment, because you say there’s no functional difference and then describe said difference. Python’s uuid4 uses a random value but sets the version bits in the UUID constructor. From a quick skim of the RFC I didn’t see it mentioning it’s optional (correct me if I’m wrong).
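
                                                             A quick check of that constructor behaviour (minimal sketch):

                                                             import os, uuid

                                                             u = uuid.uuid4()
                                                             assert u.version == 4            # version nibble forced to 4
                                                             assert u.bytes[8] >> 6 == 0b10   # RFC 4122 variant bits set

                                                             # passing version= is what overlays those bits onto the random bytes
                                                             v = uuid.UUID(bytes=os.urandom(16), version=4)
                                                             assert v.version == 4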

                                                            1. 2

                                                              Thanks for pointing out what I missed in python’s implementation. I was wrong about that.

                                                               The version bits are not optional afaik; however, they add no additional collision-avoidance since they’re constants. A v4 UUID is equivalent to a 122-bit random number, so I don’t know what benefit the added complexity in the article brings vs. just using randomblob(16). By “functional difference” I should’ve specified that I meant in terms of collision-avoidance, since that’s usually the property people want when reaching for UUIDs.

                                                              1. 2

                                                                I added the note to be informative, it’s not that using randomblob is wrong or less helpful in this case. As said elsewhere you can have rowids or INTEGER AUTOINCREMENT or any other identifier.

                                                          1. 4

                                                             I like how systemd brings all these features, but I don’t like how it makes software that depends on it non-portable to other operating systems, as systemd only supports Linux. I know that not all operating systems support all the underlying features systemd needs, but I believe it is a shame to be Linux-centric.

                                                             I am not a user of non-Linux-based operating systems myself, but I prefer having common standards.

                                                            1. 22

                                                              Personally, I’m completely fine that Systemd-the-init-system is Linux-only. It’s essentially built around cgroups, and I can imagine reimplementing everything cgroups-like on top of whatever FreeBSD offers would be extremely challenging if at all possible. FreeBSD can build its own init system.

                                                              …However, I would prefer if systemd didn’t work to get other software to depend on systemd. It definitely sucks that systemd has moved most desktop environments from being truly cross platform to being Linux-only with a hack to make them run on the BSDs. That’s not an issue with the init system being Linux-only though, it’s an issue with the scope and political power of the systemd project.

                                                              1. 11

                                                                The issue is that it’s expensive to maintain things like login managers and device notification subsystems, so if the systemds of the world are doing it for free, that’s a huge argument to take advantage of it. No political power involved.

                                                                1. 6

                                                                 With political power I just meant that Red Hat and Poettering have a lot of leverage. If I, for example, made a login manager that’s just as high quality as logind, I can’t imagine GNOME would switch to supporting my login manager, especially as the only login manager option. (I suppose we’ll get to test that hypothesis, though, by seeing whether GNOME will ever adopt seatd/libseat as an option.)

                                                                  It’s great that systemd is providing a good login manager for free, but I can’t shake the feeling that, maybe, it would be possible to provide an equally high quality login daemon without a dependency on a particular Linux-only init system.

                                                                  I don’t think the “political power” (call it leverage if you disagree with that term) of the systemd project is inherently an issue, but it becomes an issue when projects add a hard dependency on systemd tools which depend on the systemd init system where OS-agnostic alternatives exist and are possible.

                                                                  1. 5

                                                                     Everybody loves code that hasn’t been written yet. I think we need to learn to look realistically at what we have now (for free, btw) instead of insisting on perfect, platform-agnostic software. https://lobste.rs/s/xxyjxl/avoiding_complexity_with_systemd#c_xviza7

                                                              2. 18

                                                                 Systemd is built on Linux’s capabilities, so this is really a question of: should people not try to take advantage of platform-specific capabilities? Should they always stay stuck on the lowest common denominator? This attitude reminds me of people who insist on treating powerful relational databases like dumb key-value stores in the name of portability.

                                                                1. 5

                                                                  I believe the BSDs can do many of the things listed in the article, but also in their very own ways. A cross-platform system manager would be some sort of a miracle, I believe.

                                                                  1. 9

                                                                    The big difference is that systemd (as well as runit, s6, etc.) stay attached to the process, whereas the BSD systems (and OpenRC, traditional Linux init scripts) expect the program to “daemonize”.

                                                                    Aside from whatever problems systemd may or may not have, I feel this model is vastly superior in pretty much every single way. It simplifies almost everything, especially for application authors, but also for the init implementation and system as a whole.

                                                                    A cross-platform system manager would be some sort of a miracle, I believe.

                                                                    daemontools already ran on many different platforms around 2001. I believe many of its spiritual successors do too.

                                                                    It’s not that hard; like many programs it’s essentially a glorified for loop:

                                                                    for service in get_services():
                                                                        start_process(service)
                                                                    

                                                                    Of course, it’s much more involved with restarts, logging, etc. etc. but you can write a very simple cross-platform proof-of-concept service manager in a day.
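
                                                                    As a minimal sketch of that proof-of-concept (in Python; the service names and paths are made up, and a real manager would also handle signals, logging and restart backoff):

                                                                    import subprocess, time

                                                                    # hypothetical services; a real manager would read these from config
                                                                    SERVICES = {
                                                                        "webapp": ["/usr/local/bin/webapp"],
                                                                        "worker": ["/usr/local/bin/worker"],
                                                                    }

                                                                    procs = {}
                                                                    while True:
                                                                        for name, argv in SERVICES.items():
                                                                            p = procs.get(name)
                                                                            if p is None or p.poll() is not None:     # never started, or exited
                                                                                procs[name] = subprocess.Popen(argv)  # stay attached; no daemonizing
                                                                        time.sleep(1)                                 # crude restart throttle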

                                                                    1. 4

                                                                      Yes and no. Socket activation can be done with inetd(8), and on OpenBSD you can at least limit what filesystem paths are available with unveil(2), although that requires system-specific changes to your code. As far as dynamic users, I don’t think there’s a solution for that.
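
                                                                      For what it’s worth, an inetd(8)-activated service is just a program that talks to the accepted connection on stdin/stdout. A minimal sketch of an echo service (the inetd.conf line is hypothetical):

                                                                      # inetd.conf: echo stream tcp nowait nobody /usr/local/bin/echod.py echod.py
                                                                      import sys

                                                                      # in "nowait" mode inetd accept()s each connection and hands it
                                                                      # to us as fd 0/1, so echoing is a plain stdin-to-stdout loop
                                                                      for line in sys.stdin:
                                                                          sys.stdout.write(line)
                                                                          sys.stdout.flush()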

                                                                      Edit: Also, there’s no real substitute for LoadCredentials, other than privdropping and unveil(2). I guess you could use relayd(8) to do TLS termination and hand off to inetd(8). If you’re doing strictly HTTP, you could probably use a combo of httpd(8) and slowcgi(8) to accomplish something similar.

                                                                      1. 3

                                                                        Then I’m imagining a modular system with different features that can be plugged together, with specifications and different implementations depending on the OS. Somehow a way to go back to having a single piece of software for each feature, but at another level. The issue is how you write these specifications while keeping things implementable on any operating system where they make sense.

                                                                        1. 2

                                                                          Hell, a Docker API implementation for BSD would be a miracle. The last FreeBSD Docker attempt is ancient and has fallen way out of date. Having a daemon that could take OCI containers and run them with ZFS layers in a BSD jail with BSD virtual networks would be a huge advantage for BSD in production environments.

                                                                          1. 3

                                                                            There is an exciting project for an OCI-compatible runtime for FreeBSD: https://github.com/samuelkarp/runj. containerd has burgeoning FreeBSD support as well.

                                                                        2. 2

                                                                          But, are FreeBSD rc.d scripts usable verbatim on, say, OpenBSD or SMF?

                                                                          1. 8

                                                                            SMF is a lot more like systemd than the others.

                                                                              In fact, aside from the XML, I’d say SMF has the kind of footprint I’d prefer systemd to have: it points to (and reads from) log files instead of subsuming that functionality, handles socket activation, supervises processes/services, and drops privileges. (It can even run zones/jails/containers.)

                                                                            But to answer the question: yes any of the scripts can be used essentially* verbatim on any other platform.

                                                                            (There might be differences in pathing, FreeBSD installs everything to /usr/local by default)

                                                                            1. 2

                                                                              I wish SMF was more portable. I actually like it a lot.

                                                                            2. 6

                                                                              Absolutely not. Even though they’re just shell scripts, there are a ton of different concerns that make them non-portable.

                                                                              I’m gonna ignore the typical non-portable problems with shell scripts (depending on system utils that function differently on different systems (yes, even within the BSDs), different shells) and just focus on the biggest problem: both are written depending on their own shell libraries.

                                                                              If we look at a typical OpenBSD rc.d script, you’ll notice that all the heavy-lifting is done by /etc/rc.d/rc.subr. FreeBSD has an /etc/rc.subr that fulfills the same purpose.

                                                                              These have incredibly different interfaces for configuration, you can just take a look at the manpages: OpenBSD rc.subr(8), FreeBSD rc.subr(8). I don’t have personal experience here, but NetBSD appears to have a differing rc.subr(8) as well.

                                                                                It’s also important to note that trying to wholesale port rc.subr(8) into your init script to make it compatible across platforms will be quite the task, since they’re written for different shells (OpenBSD ksh vs whatever /bin/sh is on FreeBSD). Moreover, the rc.subr(8) implementations use OS-specific features, so porting them wholesale will definitely not work (just eyeballing the OpenBSD /etc/rc.d/rc.subr, I see getcap(1) and some invocations of route(8) that only work on OpenBSD. FreeBSD’s /etc/rc.subr uses some FreeBSD-specific sysctl(8) MIBs.)

                                                                              If you’re writing an rc script for a BSD, it’s best to just write them from scratch for each OS, since the respective rc.subr(8) framework gives you a lot of tools to make this easy.

                                                                                This is notably way better than how I remember the situation on sysvinit Linux, since iirc there weren’t such complete helper libraries, and writing such a script could take a lot of time and be very error-prone.

                                                                              1. 5

                                                                                  Yeah, exactly. The rc scripts aren’t actually portable, so why do people (even in this very thread) expect the systemd unit files (which FWIW are easier to parse programmatically than shell scripts; parsing shell in general runs into the halting problem) to be?

                                                                                Also, thank you for the detailed reply.

                                                                                1. 3

                                                                                  I’m completely in agreement with you. I want rc scripts/unit files/SMF manifests to take advantage of system-specific features. It’s nice that an rc script in OpenBSD can allow me to take advantage of having different rtables or that it’s jail-aware in FreeBSD.

                                                                                    I think there are unfortunate parts of this, since I think it’d be non-trivial to adapt the program provided in this example to socket activation under inetd(8) (tbh, maybe I should try when I get a chance). What would be nice is a consistent set of expectations for daemons about socket-activation behavior/features, so it’d be easier to write portable programs and then ship system-specific configs for the various management tools (systemd, SMF, rc/inetd). Wouldn’t be surprised if that ship has sailed, though.

                                                                              2. 2

                                                                                I don’t see why not? They’re just POSIX sh scripts.

                                                                            1. 9

                                                                              I use Go a lot at work, and honestly while the MVS makes me incredibly uncomfortable, I haven’t had a ton of problems with it. I’m not aware of any CVEs/security issues for our deps, but it’s very possible that some are lurking in our products and will stay there. I also haven’t run into bugs in transitive deps that have caused issues, but again, it’s something I’ve worried about.

                                                                               However, because of these fears, I have tried using go get -u -t to update all my deps, and that almost always breaks. Between subtle backwards-incompatible changes that modules have snuck in and repository renames, trying to upgrade all my deps usually fails in some way.

                                                                              1. 9

                                                                                 I wish Rust had a bigger standard library (“batteries included”, like Python to some degree).

                                                                                 See sort, for example. I realise all of us download and run programs with lots of dependencies most days, but I feel like core utils should not pull in non-standard dependencies.

                                                                                1. 13

                                                                                   Note that among those 12 direct dependencies, Python’s stdlib has direct equivalents to only 4: clap, itertools, rand, tempfile. Things like unicode-width, rayon, semver, binary-heap-plus are not provided by Python. compare, fnv, memchr and ouroboros are Rust-isms that are somewhat hard to classify.

                                                                                  1. 2

                                                                                     In addition, it’s worth noting that a lot of projects eschew argparse (which is what the alternative to clap would be) for click. If a similar project were done in Python, I’d almost bet money that they’d use click.

                                                                                    rand being separate has some advantages, largely that it is able to move at a pace that’s not tied to language releases. I look at this as a similar situation that golang’s syscall package has (had? the current situation is unclear to me rn). If an OS introduces a new random primitive (getrandom(2), getentropy(2)), a separate package is a lot easier to update than the stdlib, which is tied to language releases.

                                                                                     As mentioned, Golang’s syscall package had this problem, which led to big changes being locked down and the recommended package becoming golang.org/x/sys. There’s a lot more agility to be had in leveraging features of the underlying OS if you don’t tie certain core features to the same cadence as language releases. (This is not to say that this is the only problem with the syscall package being in the stdlib, but it’s definitely one of them. More info on the move here: https://docs.google.com/document/d/1QXzI9I1pOfZPujQzxhyRy6EeHYTQitKKjHfpq0zpxZs/edit)

                                                                                    1. 1

                                                                                      argparse

                                                                                       I’d use getopt over argparse. argparse just has really abysmal parsing that differs from other shell tools, especially when dealing with subcommands.

                                                                                    2. 1

                                                                                       True. They could just be rust-lang crates, like futures-rs or cargo, instead of being in the stdlib.

                                                                                    3. 12

                                                                                     This problem is Software Engineering Complete (like NP-Complete: transformable into any other SW-Eng-Complete thing). As just one other example, Nim also struggles with what should be in the stdlib vs. external packages. Rust has just about 1000x the resources of Nim for a much more spartan core stdlib, but of course the Nim stdlib almost surely has more bugs than that spartan Rust core. So a lot of this boils down to A) tolerance for bugs, B) resources to maintain going forward, and C) the cost of complexity/generality in the first place, plus probably a factor or two I’m forgetting/neglecting. A, B and C relate far more to community & project management/attitudes than to language details themselves. Also, presence in the stdlib is no panacea for discoverability, because as the stdlib grows more and more giant, discoverability crashes.

                                                                                      Note this is neither attack nor defense but elaboration on why this is not an easy problem.

                                                                                      1. 2

                                                                                       I wonder if the reason Nim has to have a big standard library is to attract people. Rust already has the following, as you said, and people are sure to create all kinds of things. Whereas if one were to try Nim without a stdlib like that, they would have to do everything on their own.

                                                                                      2. 3

                                                                                        Same. A big part of the learning curve for me was discovering modules like serde, tokio, anyhow/thiserror, and so on that seem necessary in just about every Rust program I write.

                                                                                        1. 3

                                                                                         Not providing a standard executor was the only complaint I had about async Rust.

                                                                                          1. 2

                                                                                            I like that there is no built-in blessed executor - it keeps Rust runtime-free.

                                                                                            I’ve worked on projects where using an in-house executor was a necessity.

                                                                                            Also gtk-rs supports using GTK’s event loop as the executor and it’s very cool to await button clicks :)

                                                                                            1. 1

                                                                                             Yeah, I’ve used it and it felt refreshing :) But for other small tools, perhaps having a reference, minimal implementation would be good. I like the smol crate and think it would be perfect for this.

                                                                                          2. 3

                                                                                           All of them developed over time and became de-facto standards. But it was always the intention that the std wouldn’t try to develop these tools, as you need some iterations, which won’t work under a stability guarantee. tokio just went 1.0 this year(?); I’ve got code lying around using 0.1, 0.2 and some 0.3 (and don’t forget futures etc.).

                                                                                           anyhow/thiserror? Well, there is failure, error-chain, quick-error, snafu, eyre (stable-eyre, color-eyre), simple-error… And yes, some of them are still active, as they solve different problems (I specifically had to move away from thiserror), and some are long deprecated. So there was a big amount of iteration (and some changes to the std Error trait as a result).

                                                                                           You don’t want to end up like C++ (video), where everybody treats the std implementation of regex as something you don’t ever want to use.

                                                                                          3. 3

                                                                                             There is a problem with this “batteries included” approach to the standard library: development of those libraries slows down or stagnates. I actually prefer using a set of well-behaved external libraries over having to replace “built-ins” that turn out to be too simplistic for any reasonable usage.

                                                                                            1. 3

                                                                                               The thing is that, as a user, I “trust” official rust-lang crates at first glance; with external libraries, I only know they are well behaved and performant if I check them out or already recognize them. Trust, and trusting trust, is such a big problem in software in general.

                                                                                              1. 5

                                                                                                 Yes, that’s why there are some “blessed” crates, as well as the Crev project to expand the web of trust.

                                                                                              2. 1

                                                                                                just version the standard library interface, though

                                                                                                1. 4

                                                                                                   And point me to one, just one, example where it worked? If something is merged into the core, then it will die there. Python has examples of such code, Ruby has examples of such code, etc. How often is, for example, the built-in HTTP client good enough to be used in any serious case? How often do you instead pull in a dependency to handle timeouts, headers, additional HTTP versions, etc. better/faster/easier?

                                                                                              3. 2

                                                                                               The Rust ecosystem, IMO, is far too eager to pull in third-party dependencies. I haven’t looked deep into this tool, but a quick glance leads me to believe that many of these dependencies could be replaced with the standard library and/or slimmed-down alternative libraries and a little extra effort.

                                                                                                1. 2

                                                                                                  Unfortunately it’s not always that simple. Let’s see the third party dependencies I pulled for meli, an email client, which was a project I started with the intention of implementing as much as possible myself, for fun.

                                                                                                  xdg = "2.1.0"
                                                                                                  crossbeam = "0.7.2"
                                                                                                  signal-hook = "0.1.12"
                                                                                                  signal-hook-registry = "1.2.0"
                                                                                                  nix = "0.17.0"
                                                                                                  serde = "1.0.71"
                                                                                                  serde_derive = "1.0.71"
                                                                                                  serde_json = "1.0"
                                                                                                  toml = { version = "0.5.6", features = ["preserve_order", ] }
                                                                                                  indexmap = { version = "^1.6", features = ["serde-1", ] }
                                                                                                  linkify = "0.4.0"
                                                                                                  notify = "4.0.1"
                                                                                                  termion = "1.5.1"
                                                                                                  bincode = "^1.3.0"
                                                                                                  uuid = { version = "0.8.1", features = ["serde", "v4"] }
                                                                                                  unicode-segmentation = "1.2.1"
                                                                                                  smallvec = { version = "^1.5.0", features = ["serde", ] }
                                                                                                  bitflags = "1.0"
                                                                                                  pcre2 = { version = "0.2.3", optional = true }
                                                                                                  structopt = { version = "0.3.14", default-features = false }
                                                                                                  futures = "0.3.5"
                                                                                                  async-task = "3.0.0"
                                                                                                  num_cpus = "1.12.0"
                                                                                                  flate2 = { version = "1.0.16", optional = true }
                                                                                                  

                                                                                                   From a quick glance, only nix, linkify, notify, uuid and bitflags could easily be replaced by invented-here code, because the parts of those crates I use are small.

                                                                                                  I cannot reasonably rewrite:

                                                                                                  • serde
                                                                                                  • flate2
                                                                                                  • crossbeam
                                                                                                  • structopt
                                                                                                  • pcre2
                                                                                                  1. 1

                                                                                                     You could reduce transitive dependencies with:

                                                                                                    serde -> nanoserde

                                                                                                    structopt -> pico-args

                                                                                                    Definitely agree that it isn’t that simple, and each project is different (and often it’s not worth the energy, esp for applications, not libraries), but it’s something I notice in the Rust ecosystem in general.

                                                                                                    1. 4

                                                                                                      But then you’re getting less popular deps, with fewer eyeballs on them, from less-known authors.

                                                                                                      Using bare-bones pico-args is a poor deal here — for these CLI tools the args are their primary user interface. The fancy polished features of clap make a difference.

                                                                                                2. 1

                                                                                                   Why do you think an external merge sort should be part of the Rust stdlib? I don’t think it’s part of the Python stdlib either. Rust already has sort() and sort_unstable() in its stdlib (unstable sort should have been the default, but that ship has sailed).

                                                                                                1. 10

                                                                                                  Comments can lead you astray, because they get out of date. The code is the only source of truth.

                                                                                                  The reason I’ve never liked this sentiment (and the OP is only quoting it) is that I consider comments part of the code.

                                                                                                  1. 5

                                                                                                    You can also forget to update a piece of code when you’re making a large swooping change, but that will most likely just crash or produce incorrect results, which makes you aware that you “missed a spot”. Comments don’t have this property, so they will more easily be missed as part of an update.

                                                                                                    1. 6

                                                                                                      Large swooping changes are exactly the scenario where extra time spent on documentation is warranted. Plus, if the comments are co-located with the code, it’s easy to catch missing/incorrect docs in a code review (which again, should be performed for any large swooping changes!)

                                                                                                      1. 4

                                                                                                        On the other hand, words-in-sentences-in-comments are better than words-in-variable-names at describing requirements, and assumptions, and what direction the code should evolve in. Incidentally all things that are more likely to start correct and/or remain correct across updates, while code is more likely to contain or introduce defects.

                                                                                                       Secondly, if I’m going to figure out the intent and assumptions of a piece of code, I’m basically reconstructing what a comment would say, and I’d rather have the comment to start with.

                                                                                                       Thirdly, even if the code is updated and the comment is not, having the ‘previous’ comment there is scaffolding that’ll help me figure out the new code quicker. The diff to the previous intent is likely to be small, after all. Of course I’m going to need to figure out whether it’s the new code or the old comment that’s correct; but that’s better than figuring out whether the new code is correct without any comment.

                                                                                                      2. 2

                                                                                                        My tests can’t check my comments, though.

                                                                                                        Granted, they can’t check that my function/method/procedure names match what they do either.

                                                                                                        1. 1

                                                                                                          Rust can! Sort of. They’re called doctests and they make sure your code samples compile and assert correctly, just like regular tests!

                                                                                                          I like them because examples doubling as simple tests also encourages you to write more examples.

                                                                                                          1. 1

                                                                                                            I almost feel like that misses the point of this article though? If I’m writing an explanation of the happy path of a function, there’s likely not really anything I can put in a comment that doctest would be useful for. It’s most likely just prose.

                                                                                                            Where doctest does work very well is if I’m writing comments that get turned into API documentation, but that’s an entirely different set of comments than what Hillel is describing in the article.

                                                                                                            1. 1

                                                                                                              That’s true, hence “sort of” in my comment. But doctests are better than nothing, even if they can’t check prose comments. You don’t need to refactor a crate into a separate library to document some strange behavior of a particular function with examples.

                                                                                                              The author does mention two cases where examples could help:

                                                                                                              • Tips: “This function is currently idempotent, but that’s an implementation detail we intentionally don’t rely on.”
                                                                                                              • Warnings: “This routine is really delicate and fails in subtle ways, make sure you talk with XYZ if you want to change it.”

                                                                                                              But otherwise I agree. The “tests” for comments are rigorous code review, which unfortunately falls under the “try harder” class of solutions.

                                                                                                              Also, even if it’s only tangentially related, I think doctests are pretty cool and worth mentioning.

                                                                                                            2. 1

                                                                                                           I’m aware of doctests, but those are code again, even if they are written “in comments” – the “comment less, code better” crowd absolutely treats all tests (doctest or no) as part of the solution.

                                                                                                        1. 67

                                                                                                          I don’t understand how heat maps are used as a measuring tool, it seems pretty useless on its own. If something is little clicked, does it mean people don’t need the feature or people don’t like how it’s implemented? Or how do you know if people would really like something that’s not there to begin with?

                                                                                                           It reminds me of the Feed icon debacle: it was neglected for years and fell out of active use, which led Mozilla to say “oh look, people don’t need the Feed icon, let’s move it off the toolbar”. And then after a couple of versions they said “oh look, even fewer people use the Feed functionality, let’s remove it altogether”. Every time I see a click heatmap used to drive UI decisions, I can’t shake the feeling that it’s only there to rationalize arbitrary product choices already made.

                                                                                                          (P.S. I’ve been using Firefox since it was called Netscape and never understood why so many people left for Chrome, so no, I’m not just a random hater.)

                                                                                                          1. 11

                                                                                                            Yeah, reminds me of some old Spiderman game where you could “charge” your jump to jump higher. They removed the visible charge meter in a sequel but kept the functionality, then removed the functionality in the sequel after that because nobody was using it (because newcomers didn’t know it was there, because there was no visible indication of it!).

                                                                                                            1. 8

                                                                                                              It’s particularly annoying that the really cool things, which might actually have a positive impact for everyone – if not now, at least in a later release – are buried at the end of the announcement. Meanwhile, some of the things gathered through metrics would be hilarious were it not for the pretentious marketing language:

                                                                                                              There are many ways to get to your preferences and settings, and we found that the two most popular ways were: 1) the hamburger menu – button on the far right with three equal horizontal lines – and 2) the right-click menu.

                                                                                                              Okay, first off, this is why you should proofread/fact-check even the PR and marketing boilerplate: there’s no way to get to your preferences and settings through the right-click menu. Not in a default state at least, maybe you can customize the menu to include these items but somehow I doubt that’s what’s happening here…

                                                                                                               Anyway, assuming “get to your preferences and settings” should’ve actually been “do things with the browser”: the “meatball” menu icon has no indication that it’s a menu, and a fourth way – the old-style menu bar – is hidden by default on two of the three desktop platforms Firefox supports, and isn’t even available on mobile. If you rule out the menu bar through sheer common sense, you can skip the metrics altogether: a fair dice throw gets you 66% accuracy.

                                                                                                              People love or intuitively believe what they need is in the right click menu.

                                                                                                              I bet they’ll get the answer to this dilemma if they:

                                                                                                              • Look at the frequency of use for the “Copy” item in the right-click menu, and
                                                                                                              • For a second-order feature, if they break down right-click menu use by input device type and screen size

                                                                                                              And I bet the answer has nothing to do with love or intuition ;-).

                                                                                                              I have also divined in the data that the frequency of use for the right-click menu will further increase. The advanced machine learning algorithms I have employed to make this prediction consist of the realisation that one menu is gone, and (at least the screenshots show) that the Copy item is now only available in the right-click menu.

                                                                                                              Out of those 17 billion clicks, there were three major areas within the browser they visited:

                                                                                                              A fourth is mentioned in addition to the three in the list and, as one would expect, these four (out of… five?) areas are: the three areas with the most clickable widgets, plus the one you have to click in order to get to a new website (i.e. the navigation bar).

                                                                                                              1. 12

                                                                                                                 They use their UX experts & measurements to rationalize decisions made to make Firefox more attractive to (new) users, as they claim, but … when do we actually see the results?

                                                                                                                The market share has kept falling for years, whatever they claim to be doing, it is exceedingly obvious that they are unable to deliver.

                                                                                                                 Looking back, the only things I remember Mozilla doing in the last 10 years are

                                                                                                                • a constant erosion of trust
                                                                                                                • making people’s lives miserable
                                                                                                                • running promising projects into the ground at full speed

                                                                                                                 I would be less bitter if Mozilla peeps weren’t so obnoxiously arrogant about it.


                                                                                                                Isn’t this article pretty off-topic, considering how many stories are removed for being “business analysis”?

                                                                                                                This is pretty much “company losing users posts this quarter’s effort to attract new users by pissing off existing ones”.

                                                                                                                1. 14

                                                                                                                   The whole UI development strategy seems to be upside down: Firefox has been hemorrhaging users for years, at a rate that the UI “improvements” have, at best, not influenced much, to the point where a good chunk of the browser “market” consists of former Firefox users.

                                                                                                                  Instead of trying to get the old users back, Firefox is trying to appeal to a hypothetical “new user” who is technically illiterate to the point of being confused by too many buttons, but somehow cares about tracking through 3rd-party cookies and has hundreds of tabs open.

                                                                                                                  The result is a cheap Chrome knock-off that’s not appealing to anyone who is already using Chrome, alienates a good part of their remaining user base who specifically want a browser that’s not like Chrome, and pushes the few remaining Firefox users who don’t specifically care about a particular browser further towards Chrome (tl;dr if I’m gonna use a Chrome-like thing, I might as well use the real deal). It’s not getting anyone back, and it keeps pushing people away at the same time.

                                                                                                                  1. 16

                                                                                                                    The fallacy of Firefox, and quite a few other projects and products, seems to be:

                                                                                                                    1. Project X is more popular than us.
                                                                                                                    2. Project X does Y.
                                                                                                                    3. Therefore, we must do Y.

                                                                                                                       The fallacy is that a lot of people are using your software exactly because it’s not X and does Z instead of Y.

                                                                                                                       It also assumes that the popularity is because of Y, which may or may not be the case.

                                                                                                                    1. 3

                                                                                                                         You’re not gonna win current users away from X by doing what X does, unless you do it much cheaper (not an option) or 10x better (and it’s hard to see how you could do Chrome better than Chrome does).

                                                                                                                      1. 1

                                                                                                                           You might, however, stop users from switching to X by doing what X does, even if you don’t do it quite as well.

                                                                                                                    2. 4

                                                                                                                       The fundamental problem with Firefox is: it’s just slow. Slower than Chrome at almost everything. Slower at games (seriously, its canvas performance is really bad), slower at interacting with big apps like Google Docs, less smooth when scrolling, with more latency between when you hit a key on the keyboard and when the letter shows up in the URL bar. This stuff can’t be solved with UI design changes.

                                                                                                                      1. 3

                                                                                                                        Well, but there are reasons why it’s slow - and at least one good one.

                                                                                                                        Most notably, because Firefox makes an intentionally different implementation trade-off than Chrome. Mozilla prioritizes lower memory usage in FF, while Google prioritizes lower latency/greater speed.

                                                                                                                        (I don’t have a citation on me at the moment, but I can dig one up later if anyone doesn’t believe me)

                                                                                                                        That’s partially why you see so many Linux users complaining about Chrome’s memory usage.

                                                                                                                        These people are getting exactly what they asked for, and in an age where low CPU usage is king (slow mobile processors, limited battery life, more junk shoved into web applications, and plentiful RAM for people who exercise discipline and only do one thing at once), Chrome’s tradeoff appears to be the better one. (yes, obviously that’s not the only reason that people use Chrome, but I do see people noticing it and citing it as a reason)

                                                                                                                        1. 2

                                                                                                                          I rarely use Google Docs; basically just when someone sends me some Office or Spreadsheet that I really need to read. It’s easiest to just import that in Google Docs; I never use this kind of software myself and this happens so infrequently that I can’t be bothered to install LibreOffice (my internet isn’t too fast, and downloading all updates for it takes a while and not worth it for the one time a year I need it). But every time it’s a frustrating experience as it’s just so darn slow. Actually, maybe it would be faster to just install LibreOffice.

                                                                                                                          I haven’t used Slack in almost two years, but before this it was sometimes so slow in Firefox it was ridiculous. Latency when typing could be in the hundreds or thousands of ms. It felt like typing over a slow ssh connection with packet loss.

CPU vs. memory is a real trade-off with many possible approaches, and it’s a hard problem. But that doesn’t change that the end result, for me as a user, is that Firefox is sometimes slow to the point of being unusable. If I had a job where they used Slack, this would be a problem, as I wouldn’t be able to use Firefox (unless it’s fixed now; I don’t know if it is), and I don’t really fancy running multiple browsers.

                                                                                                                          That being said, I still feel Firefox gives a better experience overall. In most regular use it’s more than fast enough; it’s just a few exceptions where it’s so slow.

                                                                                                                          1. 1

                                                                                                                            That being said, I still feel Firefox gives a better experience overall. In most regular use it’s more than fast enough; it’s just a few exceptions where it’s so slow.

I agree. I absolutely prefer Firefox to Chrome - it’s generally a better browser with a much better add-on ecosystem (Tree Style Tabs, Container Tabs, non-crippled uBlock Origin) and isn’t designed to let Google advertise to you. My experience with it is significantly better than with Chrome.

                                                                                                                            It’s because I like Firefox so much that I’m so furious about this poor design tradeoff.

(Also, while it contributes, I don’t blame all of my slowdowns on Firefox’s design - there are many cases where it’s crippled by Google introducing some new web “standard” that sites started using before Firefox could catch up; most famously, the Shadow DOM v0 scandal with YouTube.)

                                                                                                                          2. 1

                                                                                                                            I don’t have a citation on me at the moment, but I can dig one up later if anyone doesn’t believe me

                                                                                                                            I’m interested in your citations :)

                                                                                                                            1. 1

                                                                                                                              Here’s one about Google explicitly trading off memory for CPU that I found on the spot: https://tech.slashdot.org/story/20/07/20/0355210/google-will-disable-microsofts-ram-saving-feature-for-chrome-in-windows-10

                                                                                                                      2. 4

                                                                                                                        I remember more things from Mozilla. One is also a negative (integration of a proprietary application, Pocket, into the browser; it may be included in your “constant erosion of trust” point), but the others are more positive.

Mozilla is the organization that let Rust emerge. I’m not a Rust programmer myself, but I think it’s clear that the language is having a huge impact on the programming ecosystem, and I think that overall this impact is very positive (due to some new features of its own, popularizing some great features from other languages, and a rather impressive approach to building a vibrant community). Yes, Mozilla is also the organization that let go of all their Rust people, and I think it was a completely stupid idea (Rust is making it big, and they could be at the center of it), but somehow they managed to wait until the project was mature enough to make this stupid decision, and the project is doing okay. (Compare with many exciting technologies that were completely destroyed by being shut down too early.) So I think that the balance is very positive: they grew an extremely positive technology, and then they messed up in a not-as-harmful-as-it-could-be way.

Also, I suspect that Mozilla is doing a lot of good work participating in the web standards ecosystem. This is mostly a guess, as I’m not part of this community myself, so it could have changed in the last decade and I wouldn’t know. But this stuff matters a lot to everyone; we need technical people from several browsers actively participating, it’s a lot of work, and (despite the erosion of trust you mentioned) I still trust the Mozilla standards engineers to defend the web better than Google (surveillance incentives) or Apple (locking-down-stuff incentives). (Defend, in the sense that I suspect I like their values and their view of the web, and I guess that sometimes this makes a difference during standardization discussions.) Unfortunately, this part of Mozilla’s work gets weaker as their market share shrinks.

                                                                                                                        1. 3

                                                                                                                          Agreed. I consider Rust a positive thing in general (though some of the behavioral community issues there seem to clearly originate from the Mozilla org), but it’s a one-off – an unexpected, pleasant surprise that Rust didn’t end in the premature death-spiral that Mozilla projects usually end up in.

                                                                                                                          Negative things I remember most are Persona, FirefoxOS and the VPN scam they are currently running.

                                                                                                                          1. 4

                                                                                                                            I consider Rust a positive thing in general (though some of the behavioral community issues there seem to clearly originate from the Mozilla org), but it’s a one-off

Hard disagree there. Pernosco is a revolution in debugging technology (a much, much bigger revolution than Rust is to programming languages) and wouldn’t exist without Mozilla spending engineering resources on rr. I don’t know much about TTS/STT, but the DeepSpeech work Mozilla has done also worked quite nicely and seemed to make quite an impact in the field. I think I also recall them having some involvement in building a formally verified crypto stack? Not sure about this one, though.

                                                                                                                            Mozilla has built quite a lot of very popular and impressive projects.

Negative things I remember most are Persona, FirefoxOS and the VPN scam they are currently running.

None of these make me as angry as the Mr. Robot extension debacle they caused a few years ago.

                                                                                                                            1. 2

                                                                                                                              To clarify, I didn’t mean it’s a one-off that it was popular, but that it’s a one-off that it didn’t get mismanaged into the ground. Totally agree otherwise.

                                                                                                                            2. 4

the VPN scam they are currently running

                                                                                                                              Where have you found evidence that Mozilla is not delivering what they promise - a VPN in exchange for money?

                                                                                                                              1. 0

They are trying to use the reputation of their brand to sell a service to a group of “customers” that has no actual need for it and barely an understanding of what it does or for which purposes it would be useful.

                                                                                                                                What they do is pretty much the definition of selling snake oil.

                                                                                                                                1. 7

                                                                                                                                  I am a Firefox user and I’m interested in their VPN. I have a need for it, too - to prevent my ISP from selling information about me. I know how it works and what it’s useful for. I can’t see how they’re possibly “selling snake oil” unless they’re advertising something that doesn’t work or that they won’t actually deliver…

                                                                                                                                  …which was my original question, which you sidestepped. Your words seem more like an opinion disguised as fact than actual fact.

                                                                                                                        2. 2

It’s a tool like a lot of other things. Sure, you can abuse it in many ways, but unless we know how the results are used, we can’t tell if it’s a good or bad scenario. A good use for a heatmap could be, for example, looking at where people like to click in a menu and how far down the “expand” button should go.

                                                                                                                          As an event counter, they’re not great - they can get that info in better/cheaper ways.

                                                                                                                          1. 2

This is tricky, and the same goes for surveys. I am often in a situation where one asks me “What do you have the hardest time with?” or “What prevents you from using language X on your current project?”, and when the answer essentially boils down to “I am doing scripting and not systems programming” or something similar, I don’t intend to tell them that they should make a scripting language out of a systems language or vice versa.

And I know these are often taken the wrong way when the results are read and interpreted. There is rarely an “I like it how it is” option, or a “Doesn’t need changes”, or even a “Please don’t change this!”.

                                                                                                                            I am sure this is true about other topics too, but programming language surveys seem to be a trend so that’s where I often see it.

                                                                                                                            1. 1

                                                                                                                              I feel like they’re easily gamed, too. I feel like this happened with Twitter and the “Moments” tab. When they introduced it, it was in the top bar to the right of the “Notifications” tab. Some time after introduction, they swapped the “Notifications” and “Moments” tab, and the meme on Twitter was how the swap broke people’s muscle memory.

                                                                                                                              I’m sure a heat map would’ve shown that after the swap, the Moments feature suddenly became a lot more popular. What that heat map wouldn’t show was user intent.

                                                                                                                              1. 1

From what I understand, the idea behind heat maps is not to decide which feature to kill, but to measure what should be visible by default. The more stuff you add to the screen, the more cluttered and noisy the browser becomes. Heat maps help Mozilla decide whether a feature should be moved from the main visible UI to some overflow menu.

Most things they moved around can be re-arranged using the customise-toolbar feature. In that sense, you do have enough bits to make your browser experience yours to some degree.

The killing of the feed icon was not decided with heat maps alone. From what I remember, that feature was seldom used (something they can get from telemetry and heat maps), but it was also legacy bit rot that added friction to maintenance and whatever else they wanted to do. Sometimes features that are loved by few are simply in the way of features that will benefit more people; it is sad, but it is true for codebases as old as Firefox’s.

                                                                                                                                Anyway, feed reading is one WebExtension away from any user, and those add-ons usually do a much better job than the original feature ever did.

                                                                                                                                1. 1

                                                                                                                                  I’m wondering how this whole heatmaps/metrics thing works for people who have customized their UI.

I’d assume that the data gained from e.g. this is useless at best and pollution at worst to Mozilla’s assumption of a perfectly spherical Firefox user.

                                                                                                                                  1. 1

@soc, I expect the browser to know its own UI and mark heat maps with context, so that clicking on a tab is flagged the same way regardless of whether tabs are on top or on the side. Also, IIRC the majority of Firefox users do not customise their UI. We live in a bubble of devs and power users who do, but that is a small fraction of the user base. Seeing what the larger base is doing is still beneficial.

pollution at worst to Mozilla’s assumption of a perfectly spherical Firefox user.

I’m pretty sure they can get meaningful results without assuming everyone is the same ideal user. Heat maps are just a useful way to visualise something, especially when you’re writing a blog post.

                                                                                                                                2. 1

                                                                                                                                  never understood why so many people left for Chrome,

                                                                                                                                  The speed difference is tangible.

                                                                                                                                  1. 2

I don’t find it that tangible. If I were into speed, I’d be using Safari, which is quite fast. There are lots of different reasons to choose a browser. A lot of people switched to Chrome because of the constant advertising in Google webapps, and also because Google has a tendency to break or degrade compatibility and performance for every other browser, thus making Google stuff work better on Chrome.

                                                                                                                                1. 1

                                                                                                                                  Honestly, I would bet that MRI releases the GIL when it does IO or similar. I know the behavior in CPython is similar, where when you call out to libc funcs to do IO, the GIL is released first. I don’t know anything specifically about the internals of MRI Ruby, but I wouldn’t be surprised if it was the case there as well. This reinforces the author’s conclusion that high IO workloads wouldn’t benefit from TruffleRuby/JRuby.
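For what it’s worth, CPython’s C API makes this pattern explicit. A minimal sketch of how an extension would release the GIL around a blocking read (the function itself is hypothetical; the macros are the real API):

    #include <Python.h>
    #include <unistd.h>

    /* Hypothetical extension function: read from a file descriptor
     * without holding the GIL for the duration of the syscall. */
    static PyObject *
    blocking_read(PyObject *self, PyObject *args)
    {
        int fd;
        char buf[4096];
        ssize_t n;

        if (!PyArg_ParseTuple(args, "i", &fd))
            return NULL;

        Py_BEGIN_ALLOW_THREADS          /* release the GIL...             */
        n = read(fd, buf, sizeof buf);  /* ...other threads run meanwhile */
        Py_END_ALLOW_THREADS            /* reacquire before touching any
                                           Python objects                 */

        if (n < 0)
            return PyErr_SetFromErrno(PyExc_OSError);
        return PyBytes_FromStringAndSize(buf, n);
    }

MRI’s C API has an analogous hook (rb_thread_call_without_gvl), so if its IO primitives use it, as I’d expect, threads blocked on IO never contend on the lock in the first place.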

                                                                                                                                  1. 1

                                                                                                                                    No, it’s not GIL here. As the article states, Jekyll is not multithreaded.

                                                                                                                                  1. 57

You don’t need to put random files somewhere; the ext filesystems have long had a “reserved percentage”, set to 5% by default, for exactly this reason. If you ever run out of space on a partition, just

    tune2fs -m 0 /dev/sdXN

clean up the fs,

    tune2fs -m 5 /dev/sdXN

and you’re done.

                                                                                                                                    • NB, this doesn’t work if the thing that is using all the disk space is run as root, but that is one of the many reasons why you don’t run services as root.
                                                                                                                                    1. 10

                                                                                                                                      I can already see myself forgetting to do the second part.

                                                                                                                                      1. 11

                                                                                                                                        Yeah, I prefer the “8GB empty file” approach more. If I’m in a panic trying to fix a server which fell over, I’m much more likely to remember how rm works than how tune2fs works.

                                                                                                                                        1. 15
                                                                                                                                          alias uhoh='tune2fs -m 0'
                                                                                                                                          alias phew='tune2fs -m 5'
                                                                                                                                          
                                                                                                                                          1. 8

You’re almost never going to use those aliases. If you’re lucky, you’ll still have them in your shell rc file in five years when you run out of space. You’ll certainly have forgotten about them by then.

                                                                                                                                            1. 3

I was being somewhat facetious :) My point is that, now you know about the trick, there are ways to make it easier to remember and use if you ever need it.

                                                                                                                                          2. 2

I think the trick is knowing that the fs saves 5% of the drive for exactly this situation. The exact command can be googled; I’ll admit I looked at the man page before typing the original comment. I know disk space is cheap these days, but losing 8 GB on top of the 5% you’ve already given up seems really wasteful to me.

                                                                                                                                          3. 7

                                                                                                                                            Wouldn’t you also be just as likely to forget to recreate the file if you were using the strategy proposed by the article?

                                                                                                                                            1. 2

                                                                                                                                              I’d probably have them set up in an ansible config. Don’t need to remember, the computer remembers for you.

                                                                                                                                              1. 4

You can set up tune2fs -m 5 in an Ansible config as well.

                                                                                                                                            2. 5

                                                                                                                                              You can tune a file system, but you can’t tune a fish.

                                                                                                                                              1. 2

                                                                                                                                                This is great information, but I can see it not finding its way to everybody who needs to know it, like someone spending their time mainly on building the application they then serve. When / how did you learn this?

                                                                                                                                                1. 7

This is something that’s been in UNIX since at least the ‘80s, so any basic intro-to-UNIX course ought to cover it; I came across it in an undergrad UNIX course. Per-user filesystem quotas have also been around for a long time; this is just a degenerate special case (it’s traditionally implemented by setting root’s quota to 105% and underreporting the ‘capacity’ of the disk).

                                                                                                                                                  Note that this is far more reliable than creating an empty file. Most *NIX filesystems support ‘holes’, so unless you actually write data everywhere, you’re not actually going to consume disk space. Worse, on CoW filesystems, it’s possible that deleting a file can require you to allocate space on the disk, which may not be possible if you let the filesystem get 100% full. I believe this is the case for ZFS if the zpool gets completely full.
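(Incidentally, this means that if you do want a ballast file, you have to force real allocation rather than just setting a size; a sketch, with a placeholder path and size:)

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Create an 8 GiB ballast file that actually occupies blocks.
     * ftruncate() alone would just leave a hole: the file would *look*
     * 8 GiB big while consuming ~0 blocks, so deleting it frees nothing. */
    int main(void)
    {
        int fd = open("/var/ballast", O_CREAT | O_WRONLY, 0600);
        if (fd < 0) { perror("open"); return 1; }

        /* posix_fallocate returns an errno value directly, not -1 + errno */
        int err = posix_fallocate(fd, 0, 8LL * 1024 * 1024 * 1024);
        if (err != 0) { fprintf(stderr, "posix_fallocate: %d\n", err); return 1; }

        return close(fd);
    }

(And per the CoW caveat above, even this is not a hard guarantee on ZFS or btrfs.)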

                                                                                                                                                  1. 2

                                                                                                                                                    Thanks! In the early years after university (think mid-2000s), I often wished my school had given some more practical knowledge. They were very much on the theory side, so I learned a lot of OS concepts and was ready when functional programming really landed, but only my internships stooped to discuss pragmatic things like version control. If you didn’t get this knowledge from academia or if you didn’t go through academia in the first place, where would you look for it?

                                                                                                                                                    1. 3

                                                                                                                                                      It’s been over 20 years since I read a Linux book, but I’m pretty sure that the last one I read covered it. It’s the sort of thing I’d expect to see in anything aimed at sysadmins.

                                                                                                                                                    2. 1

The output of df and fsck will tell you about this. I probably learned about it when I realized the filesystem was one size but df reported a slightly smaller one.
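(The reservation is also directly visible via statvfs(), which is roughly what df consults; a quick sketch:)

    #include <stdio.h>
    #include <sys/statvfs.h>

    /* Show free blocks vs. blocks available to unprivileged users;
     * the difference is the root-reserved area (5% by default on ext*). */
    int main(int argc, char **argv)
    {
        struct statvfs vfs;
        const char *path = argc > 1 ? argv[1] : "/";

        if (statvfs(path, &vfs) != 0) {
            perror("statvfs");
            return 1;
        }

        double mib = vfs.f_frsize / (1024.0 * 1024.0);
        printf("free:      %.1f MiB\n", vfs.f_bfree  * mib);
        printf("available: %.1f MiB\n", vfs.f_bavail * mib);
        printf("reserved:  %.1f MiB\n", (vfs.f_bfree - vfs.f_bavail) * mib);
        return 0;
    }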

                                                                                                                                                1. 2

                                                                                                                                                  I don’t think the ‘c’ end of the scale is quite fair. Null pointer dereference is ‘safe’ as long as you’re in userland. And overflow and bounds checks are implemented by some environments (asan, tcc). It is an implementation issue, not a language issue.

                                                                                                                                                  1. 13

Null pointer deref is not necessarily safe. E.g.:

    int *array = func_returning_null();  /* array == NULL */
    /* NULL plus a large offset can land past the guard page, in mapped memory */
    return array[a_very_big_number];
                                                                                                                                                    

Depending on the system I’m running on (is there an invalid page around 0, and how far does it extend?) and the size of a_very_big_number, this could well run over into valid memory and cause chaos. In practice, it’s normally fine though, I agree.

                                                                                                                                                    1. 4

Chrome’s approach is that null-pointer dereferences with a consistent, small, fixed offset (see their security FAQ) do not need to be treated as a security bug. So array[a_very_big_number] is clearly problematic, but e.g. accessing a struct field through a null pointer is mostly harmless and can be expected to result in a non-exploitable crash.
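(The distinction, roughly, in code; both accesses are undefined behavior in C, the difference is only in what the hardware ends up doing:)

    #include <stdio.h>

    /* Hypothetical struct, made large on purpose for illustration. */
    struct session {
        int  id;            /* offset 0          */
        char log[8 << 20];  /* ~8 MiB of payload */
    };

    int main(void)
    {
        struct session *s = NULL;  /* e.g. a failed lookup nobody checked */

        printf("%d\n", s->id);           /* small fixed offset: faults on the
                                            unmapped page at address 0, i.e. a
                                            clean, non-exploitable crash      */
        printf("%c\n", s->log[7 << 20]); /* huge offset: may sail past the
                                            guard region into mapped memory   */
        return 0;
    }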

                                                                                                                                                      1. 2

                                                                                                                                                        That’s pretty reasonable. A language feature that would ignore small fixed offsets like that, but insert null checks for large or dynamic offsets would be a nice thing to have.

                                                                                                                                                    2. 10

In fact, not only is it unsafe, it’s even worse than the article makes it seem: dereferencing a null pointer is undefined behavior, which allows the compiler to produce arbitrary code when it detects that a null pointer is being dereferenced. In particular, the compiler can safely assume that if a pointer is ever dereferenced, it can’t be null, so null checks can be removed altogether. An example: https://blog.kevinhu.me/2016/02/21/Avoid-Nasal-Damons/

                                                                                                                                                      EDIT: a more relevant example: https://software.intel.com/content/www/us/en/develop/blogs/null-pointer-dereferencing-causes-undefined-behavior.html
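The canonical shape of the bug, sketched (use() stands in for any downstream call):

    void use(int v);

    void f(int *p)
    {
        int v = *p;     /* dereference first: the compiler may now assume
                           p != NULL on every path below...               */
        if (p == NULL)  /* ...so this check is dead code, and an optimizer
                           is entitled to delete it                       */
            return;
        use(v);
    }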

                                                                                                                                                      1. 2

                                                                                                                                                        -fno-delete-null-pointer-checks. Yes, it’s dumb that you need it, but it’s there.

                                                                                                                                                        (Related: pick your poison from among -fwrapv and -ftrapv.)
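(The signed-overflow analogue, for the record; the annotations describe how gcc/clang commonly behave, not a guarantee:)

    /* With -fwrapv:  x + 1 wraps to INT_MIN, so this can return 0.
     * With -ftrapv:  the overflow aborts at runtime instead.
     * With neither:  signed overflow is UB, so the optimizer may assume
     *                x + 1 > x and fold the body to 'return 1'.         */
    int still_bigger(int x)
    {
        return x + 1 > x;
    }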

                                                                                                                                                        1. 3

                                                                                                                                                          Sure, but we are talking about C-the-language here, not C-with-tooling-specific-workarounds-for-nasty-language-issues; otherwise, this would be a very different discussion, as then we could also mention the plethora of analysis tools.

                                                                                                                                                          1. 0

My impression is that we are talking about programs written in C. Of course you can run analysis and verification tools on C, but most existing C programs will not pass muster regardless of how safe they are in practice. On the other hand, most code can be compiled by gcc or clang with the corresponding flags. (And for platforms not supported by those, compilers such as sdcc are unlikely to optimize in ways that make this an interesting consideration.)

                                                                                                                                                            1. 3

                                                                                                                                                              I see your point: it is true that adding -fno-delete-null-pointer-checks is free in the sense that it fixes bugs without requiring anything else from the programmer, whereas running additional checkers requires changing the program.

                                                                                                                                                              That said, how many C programmers don’t use verification tools but do know to use this flag? I’d been writing C++ (which also suffers from this problem) for three years before learning of the flag, and I did attempt at the time to learn the language deeply. The fact that the flag exists doesn’t make a difference to the plethora of programs written in C that don’t use it already.

                                                                                                                                                              So, it is absolutely a language issue that the barrier for writing in C-the-language is so low (just pick up K&R and hack away), but, comparatively, much, much more effort is required for learning the fractal of gotchas and the ways to mitigate them—before which one has to somehow learn that this fractal exists and reading K&R is simply the beginning of learning to write C without being dangerous.

                                                                                                                                                      2. 5

                                                                                                                                                        In addition to the earlier two rebuttals, may I point out that a userspace crash may turn into a DoS vulnerability. Even if the crashed process isn’t user-visible, and a parent process transparently restarts it, the rapid generation of crash logs and/or core dumps can take a toll.

                                                                                                                                                        1. 0

That’s a higher-level concern, though, and it affects Zig and Rust as well.

                                                                                                                                                        2. 2

I’ve clarified that the table refers to software as it is typically run in production. The vast majority of C code running today is not doing bounds checks. The vast majority of Rust code is.

Also, it’s not recommended to run ASan in production; see e.g. https://www.openwall.com/lists/oss-security/2016/02/17/9

                                                                                                                                                          1. 0

ASan should not be used in production, yes. tcc’s bounds-checking interfaces, on the other hand, are fine.

                                                                                                                                                          2. 1

Another reason why it’s not necessarily safe is that this definition of “safe” depends on the operating system. Newer operating systems allocate an unusable page at address 0, causing access violations and (usually) crashes. On older OSes, or in embedded contexts, this may not happen, and a null pointer may be treated as a normal pointer (after all, it’s undefined behavior, so compilers will happily allow it unless additional options are turned on).

                                                                                                                                                          1. 17

                                                                                                                                                            TL;DR: “minikube was slow so I swapped it out for an alternative called kind that runs in docker”

I expected some kind of investigation or profiling into why minikube was slow; a bit disappointing.

                                                                                                                                                            1. 5

                                                                                                                                                              Right? What happens if kind starts being slow one day before a demo? Does the author just abandon orchestration and find a new niche?

                                                                                                                                                              1. 5

This is DevOps.

                                                                                                                                                                1. 2

Agreed, especially because the author is on macOS, where using Docker relies on having a VM running. IIRC, on macOS Docker uses HyperKit by default, and minikube can be configured to use HyperKit instead of VirtualBox as well. A deep dive into the differences between running minikube on HyperKit vs. VirtualBox would’ve been far more interesting, since that was likely part of the problem affecting the article’s author.

Edit: Moreover, minikube can also run in Docker if configured to do so.

                                                                                                                                                                1. 5

                                                                                                                                                                  I was intrigued by this:

                                                                                                                                                                  According to the author of Kitty, tmux is a bad idea, apparently because it does not care about the arbitrary xterm-protocol extensions Kitty implements. Ostensibly, terminal multiplexing (providing persistence, sharing of sessions over several clients and windows, abstraction over underlying terminals and so on) are either unwarranted and seen as meddling by a middleman, or should be provided by the terminal emulator itself (Kitty being touted as the innovator here). A remarkable standpoint, to say the least.

Because this is something that I completely agree with. I have recently switched from tmux to abduco because I want my terminal to handle being a terminal, and the only thing that I wanted from tmux was connection persistence. There are a load of ‘features’ in tmux that really annoy me. It does not forward everything to my terminal and implements its own scrollback, which means I can’t cat a file, select it, copy it, and paste it into another terminal connected to a different machine (which is something I do far more often than I probably should).

                                                                                                                                                                  1. 2

Yeah, I do not like how some terminal emulators now leave everything to tmux/screen rather than implementing useful features for management, scrollback, etc. themselves. For 99% of my cases, I don’t need tmux in addition to my shell and a good terminal emulator, so I don’t know why I’d want to introduce more complexity.

                                                                                                                                                                    kitty honestly works very well for me, and has Unicode and font features that zutty does not seem to consider. Clearly some work needs to be done for conformance to the tests that the author raises, but for my needs, kitty works great for Unicode coverage and rendering.

                                                                                                                                                                    1. 1

Yeah, I do not like how some terminal emulators now leave everything to tmux/screen,

                                                                                                                                                                      So I think tmux and screen both suck since they don’t pass through to the terminal things like scrollback. Instead of the same mouse wheel or shift+page up, I have to shift gears to C-a [ or whatever it is.

                                                                                                                                                                      I actually decided to write my own terminal emulator… and my own attach/detach session thing that goes with it. With my custom pass-through features I can actually use them all the same way. If I attach a full screen thing, the shift pageup/down just pass through to the application, meaning it can nest. Among other things. I kinda wonder why the others don’t lobby for similar xterm extensions or something so they can do this too.

                                                                                                                                                                    2. 2

I also love how Kitty pretty easily lets you extend these features with other programs. Instead of Kitty’s default history, I have it enter neovim (with all of my configurations) so that I can navigate and copy my history the same way that I write my code. I have been using Kitty for a few years and absolutely love it. The only issue I run into on occasion is that SSHing into some servers can mess the terminal up a little.

                                                                                                                                                                      1. 2

                                                                                                                                                                        Same. I never warmed to the “tmux is all you need” approach, because, honestly, it’s just a totally unnecessary interloper in my terminal workflow. I like being able to detach/reattach sessions, but literally everything else about tmux drives me bananas.

                                                                                                                                                                      1. 2

                                                                                                                                                                        This just moves the risk from Cloudflare to Cloudflare’s partners. Will they sell query logs?

                                                                                                                                                                        1. 2

                                                                                                                                                                          I mean, that depends.

                                                                                                                                                                          Cloudflare isn’t the only DoH player in the game (https://dnscrypt.info/public-servers/ contains DoH and DNSCrypt supporting servers), and the source to their proxy is released. You could set up a community DoH proxy that proxies to Quad9 and offer it to a bunch of folks. Your queries wouldn’t touch Cloudflare in this case.