1. 1
    • The graph clearly shows a correlation.
    • I personally am happier by working less (currently, working 4 days per week, by choice)
    • Nevertheless, I highly doubt this proves anything. There could be some confounding factor, such as wealth per capita: what if people are happier because they’re wealthier, and work less because they can afford to (I don’t actually know)?

    That said, I do believe reducing the work week, at least in most OECD countries, would result in better overall happiness… except for much of the ruling class. See, reducing the work week mechanically makes labour scarcer, which increases demand for it and lowers unemployment. Unemployment is one of the biggest levers employers can use to lower salaries or worsen working conditions; scarcer labour could reverse this tendency, turn the job market into an employee’s market, and thus raise salaries and/or spur better working conditions.

    Such improvements could contribute to happiness just as much as working less might.

    1. 49

      Honestly I think that suckless page is a terrible criticism of systemd. It’s the kind of rant that is easy to dismiss.

      A much better – and shorter – criticism of systemd is that for most people, it does a lot of stuff they just don’t need, adding a lot of complexity. As a user, I use systemd, runit, and OpenRC, and I barely notice the difference: they all work well. Except when something goes wrong, in which case it’s so much harder to figure out systemd than runit or OpenRC.

      Things like “systemd does UNIX nice” are rather unimportant details.

      I’m a big suckless fan, but this is not suckless at their best.

      1. 10

        A much better – and shorter – criticism of systemd is that for most people, it does a lot of stuff they just don’t need, adding a lot of complexity.

        How many things does the Linux kernel support that you don’t use or need, and how many lines of code in the kernel exist to support those things?

        1. 3

          A lot, and it’s also a criticism of Linux. But sometimes people must use Linux, and now sometimes people must use systemd.

          Linux’s extra features are also much better modularized and can be left out, unlike systemd’s.

          1. 2

            Linux’s extra features are also much better modularized and can be left out, unlike systemd’s.

            But they can. The linked article explains that many of the features people wrongly claim PID 1 now handles are just modules. For example, you don’t have to use systemd-timesyncd, but you can, and it works way better on the desktop than the regular server-grade NTP implementations.

            1. 1

              I’m sorry but how does syncing time every once in a while get much improved by systemd-timesyncd? NTP is like the least of my worries.

              1. 2

                Somehow my computer insisted on being 2 minutes off, and even if I synced manually and wrote to my BIOS RTC clock, ntpd and chrony kept messing it up (and then possibly giving up, since the jump was 2 minutes). Both these daemons feel like they aren’t a good match for a system that’s not on 24/7.

                1. 1

                  Sounds like a configuration issue and nothing to do with the program itself. What distro did you use ntpd and chrony with? What distro are you using systemd-timesyncd with?

                  By default, Void Linux starts ntpd with the -g option, which allows the first time adjustment to be big.

          2. 2

            If we’re going there, we might as well mention that Linux supports a whole freaking lot of hardware I don’t need. Those drivers are most probably the biggest source of complexity in the kernel. Solving that alone would unlock many other things, but unfortunately, with the exception of CPUs, the interface to hardware isn’t an ISA, it’s an API.

            1. 5

              If we’re going there, we might as well mention that Linux supports a whole freaking lot of hardware I don’t need.

              While, simultaneously, not supporting all the hardware that you want.

              I think it’s a good example that the Linux model is culturally inclined to build monolithic software blocks.

              1. 7

                While, simultaneously, not supporting all the hardware that you want.

                Ah, that old bias:

                • Hardware does not work on Windows? It’s the hardware vendor’s fault.
                • Hardware does not work on Linux? It’s Linux’s fault.

                We could say the problem is Linux having a small market share. I think the deeper problem is the unbelievable, and now more and more unjustified, diversity in hardware interfaces. We should by now be able to specify sane, unified, yet efficient hardware interfaces for pretty much anything. We’ve done it for mice and keyboards, we can generalise. Even graphics cards, which are likely the hardest to deal with because of their unique performance constraints, are becoming uniform enough that standardising a hardware interface makes sense now.

                Imagine the result: one CPU ISA (x86-64, though far from ideal, is currently it), one graphics card ISA, one sound card ISA, one hard drive ISA, one webcam ISA… You get the idea. Do that, and suddenly writing an OS from scratch is easy instead of utterly intractable. Games could take over the hardware. Hypervisors would no longer be limited to bog-standard server racks. Performance wouldn’t be wasted on a humongous pile of subsystems most single applications don’t need. Programs could sit on a reliable bedrock again (expensive recalls made hardware vendors better at testing their stuff).

                But first, we need hardware vendors to actually come up with a reasonable and open hardware interface. Just give us buffers to write to, and a specification of the data format for those buffers. It should be no harder than writing an OpenGL driver with God knows how many game-specific fixes.

                1. 7

                  Nah, that’s not what I’m implying. It’s not Linux’s fault, but it’s still a major practical sore point from a user’s perspective. I’m well aware that this is mainly the hardware vendors’ fault in all cases.

                  Also, it should be noted that Linux kernel development is in huge part driven by exactly those vendors, so even if it were Linux’s fault, there’s a substantial overlap.

                  It’s still amazing how much hardware is supported in the kernel, at very varying quality, and committed to being maintained.

                  1. 3

                    It’s still amazing how much hardware is supported in the kernel, at very varying quality, and committed to being maintained.

                    One thing that amused me recently was the addition of SGI Octane support in Linux 5.5, hardware that’s been basically extinct for two decades and was never particularly popular to begin with. But the quixotism of this is oddly endearing.

                    1. 2

                      was never particularly popular to begin with

                      Hey, popular isn’t always the best metric. SGI’s systems were used by smart folks to produce a lot of interesting stuff. Their graphics and NUMA architecture were forward-thinking, and I still want their NUMAlink on the cheap. The Octane’s been behind a lot of movies. I think the plane scene in Fight Club was SGI, too. My favorite was SGI’s Onyx2 being used for Final Fantasy, given how visually groundbreaking it was at the time. First time I saw someone mistake a CG guy for a real person.

              2. 2

                Device drivers are the most modular part of the kernel. Don’t compile them if you don’t want them.

                1. 1

                  True, but (i) pick & choose isn’t really the default, and (ii) implementing all those drivers is a mandatory, unavoidable part of writing an OS.

                  I don’t really care that the drivers are there, actually. My real problem is the fact they need to be there. There’s no way to trim that fat without collaboration from hardware vendors.

                  1. 1

                    Well mainstream distribution kernels are still built modularly and the device driver modules are only loaded if you actually have hardware that needs them, at least as far as I understand it.

                    I don’t really care that the drivers are there, actually. My real problem is the fact they need to be there. There’s no way to trim that fat without collaboration from hardware vendors.

                    Yeah that is a big PITA. It’s getting worse, too. It used to be that every mouse would work with basically one mouse driver. Now you need special drivers for every mouse because they all have a pile of proprietary interfaces for specifying LED colours and colour patterns, different special keys, etc.

            2. 6

              And there are no real alternatives to a full system layer. I like runit and openrc and I use them both (on my Void laptop and Gentoo desktop). When I use Debian or Ubuntu at work, for the most part I don’t have to worry about systemd, until I try to remember how to pull up a startup log.

              systemctl/journalctl are poorly designed and I often feel like I’m fighting them to get the information I really need. I really just prefer a regular syslog + logrotate.

              It’d be different if D-Bus had distinct role endpoints, so you could assign a daemon to fulfill all network role messages and people could use NetworkManager or systemd-networkd or … same with systemd being another xinetd-type provider, with everything funneled through a communication layer.

              Systemd is everything, and when you start going down that route, it’s like AWS really. You get locked in and you can’t easily get out.

              1. 5

                As a note to those reading: there are murmurs about creating a slimmed-down systemd standard. I think it’d satisfy everyone. Look around and you’ll find the discussions.

                1. 4

                  I can’t really find anything about that at a moment’s notice other than this Rust rewrite; is that what you mean?

                  Personally, I think a lot of fundamental design decisions of systemd make it complex (e.g. unit files are fundamentally a lot more complex than the shell script approach in runit), and I’m not sure how much a “systemd-light” would be an improvement.

                  1. 16

                    As someone who just writes very basic unit files (for running IRC bots, etc.), I find them a lot simpler than shell scripts. Everything is handled for me, including automatically restarting the thing after a timeout, logging, etc., without having to write shell scripts with all the associated bugs and shortcomings.
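
                    For illustration, a minimal unit file of this kind might look like the following sketch (hypothetical names and paths; `Restart=` and `RestartSec=` provide the restart-after-timeout behaviour, and stdout/stderr go to the journal automatically):

                    ```ini
                    # /etc/systemd/system/ircbot.service (hypothetical example)
                    [Unit]
                    Description=IRC bot
                    After=network-online.target

                    [Service]
                    ExecStart=/usr/local/bin/ircbot
                    User=ircbot
                    Restart=on-failure
                    RestartSec=10

                    [Install]
                    WantedBy=multi-user.target
                    ```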

                    1. 12

                      Have you used runit? That does all of that as well. Don’t mistake “shell script approach” with “the SysV init system approach”. They both use shell scripts, but are fundamentally different in almost every other respect (in quite a few ways, runit is more similar to systemd than it is to SysV init).

                      As a simple example, here is the entire sshd script:

                      ssh-keygen -A >/dev/null 2>&1 # Will generate host keys if they don't already exist
                      [ -r conf ] && . ./conf
                      exec /usr/bin/sshd -D $OPTS

                      For your IRC bot, it would just be something like exec chpst -u user:group ircbot. Personally, I think it’s a lot easier than parsing and interpreting unit files (and more importantly, a lot easier to debug once things go wrong).

                      My aim here isn’t necessarily to convince anyone to use runit btw, just want to explain there are alternative approaches that bring many of the advantages that systemd gives, without all the complexity.
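
                      For comparison, the entire runit service for such a bot would be a directory containing a single `run` script, along these lines (hypothetical names again; `chpst` drops privileges, and runit restarts the process if it exits):

                      ```shell
                      #!/bin/sh
                      # /etc/sv/ircbot/run (hypothetical example)
                      exec chpst -u ircbot:ircbot /usr/local/bin/ircbot
                      ```

                      Logging works the same way, via an optional `log/run` script that receives the service’s stdout.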

                      1. 2

                        I have never tried it. But then, if it’s a toplevel command, not even in functions, how can you specify dependencies, restart after timeout, etc.? It seems suspiciously too simple :-)

                        1. 3

                          Most of the time I don’t bother with specifying dependencies, because if it fails then it will just try again and modern systems are so fast that it rarely fails in the first place.

                          But you can just wait for a service:

                          sv check dhcpcd || { sleep 5; exit 1; }  # braces, not a subshell, so the exit ends the script and runit retries it
                          sv check wpa_supplicant || { sleep 5; exit 1; }
                          exec what_i_want_to_run

                          It also exposes some interfaces via a supervise directory, where you can read the status, write to change the status, roughly similar to /proc. This provides a convenient platform-agnostic API in case you need to do advanced stuff or want to write your own tooling.

                          1. 23

                            No offense, but this snippet alone convinces me that I’m better off using systemd’s declarative unit files (as I am doing currently, for uses similar to @c-cube’s). I’ve never been comfortable with shell semantics generally speaking, and overall this feels rather fiddly and hackish. I’d rather just not have to think about it, and have systemd (or anything similar) do it for me.

                            1. 5

                              Well, the problem with unit files is that you have to rely on a huge parser and interpreter from systemd to do what you want, which is hugely opaque, has a unique syntax, etc. The documentation for systemd.unit(5) alone is almost 7,000 words. I don’t see how you can “not have to think” about it?

                              Whereas composition from small tools in a shell script is very transparent, easy to debug, and in essence much easier to use. I don’t know what’s “fiddly and hackish” about it? What does “hackish” even mean in this context? What exactly is “fiddly”?

                              Like I said before, systemd works great when it works; it’s when it doesn’t that the trouble starts. I’ve never been able to debug systemd issues without the help of The Internet, because it requires quite specific and deep knowledge, and you can never really be certain whether the behaviour is a bug or an error on your part.

                              1. 11

                                Well, the problem with Unit files is that you have to rely on a huge parser and interpreter from systemd to do what you want, which is hugely opaque, has a unique syntax, etc. The documentation for just systemd.unit(5) it is almost 7,000 words. I don’t see how you can “not have to think” about it?

                                To me it seems a bit weird to complain about the unit file parser but then let the oddly unique and terrible Unix shell syntax get a free pass. If I were to pick which is easier to parse, my money would be on unit files.

                                Plus, each shell command has its own flags and some have rather intricate internal DSLs (find, dd or jq come to mind).

                                1. 2

                                  The thing with shell scripts is that they’re a “universal” tool. I’d much rather learn one universal tool well instead of many superficially.

                                  I agree shell scripts aren’t perfect; I’m not sure what (mature) alternatives there are? People have talked about Oil here a few times, so perhaps that’s an option.

                                  1. 8

                                    The thing with shell scripts is that they’re a “universal” tool.

                                    But why do you even want that in an init system? The task is to launch processes, which is fairly mundane except having lots of rough edges. With shell scripts you end up reinventing half of it badly and hand-waving away the issues that remain because a nicer solution in shell would be thousands of lines and not readable at all.

                                    I would actually like my tools to be less Turing-complete and give me more things I can reason about. With unit files it is easier to reason about them and see that they are correct (since the majority of the functionality is implemented in the process launcher, and fixing a bug there fixes it for all unit files).

                                    I actually don’t get the sudden hate for configuration files: sendmail, Postfix, Apache, etc. all have their own configuration formats instead of launching scripts to handle HTTP, SMTP and whatnot. The only software in recent memory that you configure with code is xmonad.

                                    1. 1

                                      I wrote a somewhat lengthy reply to this this morning, but then my laptop ran out of battery (I’m stupid and forgot to plug it in) so I lost it :-(

                                      Briefly: to be honest, I think you’re thinking too much of SysV-init-style shell scripts. In systems like runit/daemontools, you rarely implement logic in shell scripts. In practice the shell scripts tend to be just a one-liner which runs a program. Almost all of the details are handled by runit, not the shell script, just like with systemd.

                                      In runit, launching an external process – which doesn’t even need to be a shell script per se, but can be anything – is just a way to have some barrier/separation of concerns. It’s interesting you mention Postfix, because it’s actually quite similar in how it calls a lot of external programs which you can replace with $anything (and in some complex setups, I have actually replaced them with some simple shell scripts!)

                                      I agree the SysV init system sucked for pretty much the same reasons as you said, and would generally prefer systemd over that, but runit is fundamentally different in almost every conceivable way.

                                    2. 4

                                      This is hardly unique to systemd unit files, though.

                                      /etc/fstab is a good example of something old. There’s nothing stopping it from being a shell script with a bunch of mount commands. Instead, it has its own file format that’s been ad-hoc extended multiple times, its own weird way of escaping spaces and tabs in filenames (I had to open the manpage to find this; it’s \040 for space and \011 for tab), and a bunch of things don’t wind up using it for various good reasons (you can’t use /etc/fstab to mount /etc, obviously).

                                      But the advantage? Since it doesn’t have things like variables and control flow, it’s easy to modify automatically, and basic parsing gives you plenty of useful information. You want to mount a bunch of different filesystems concurrently? Go ahead; there’s nothing stopping you (which is, of course, why systemd replaced all those shell scripts while leaving fstab as-is).
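
                                      To make the escaping concrete, a hypothetical entry for a mount point whose path contains a space would look like this (sketch, made-up device and path):

                                      ```
                                      # /etc/fstab: fields are whitespace-separated, so a space in a
                                      # path is written as \040 (a tab would be \011)
                                      /dev/sdb1  /mnt/backup\040disk  ext4  noatime  0  2
                                      ```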

                                      In other words: banal argument in favour of declarative file formats instead of scripts.

                                  2. 3

                                    I don’t know what’s “fiddly and hackish” about it?

                                    It’s fiddly, because you can’t use any automatic tool to parse the list of dependencies, and it’s hackish, because the init system doesn’t know what it’s doing: it just retries starting random services until it matches the proper order. It’s nondeterministic, so it’s impossible to debug in case of any problems.

                                    1. 3

                                      You can pipe it through grep, c’mon dude. And framing it as “starting random services” is just wrong, that’s the opposite of what’s happening.

                                      1. 4

                                        And framing it as “starting random services” is just wrong, that’s the opposite of what’s happening.

                                        This doesn’t look very convincing ;)

                                        Well, you can cat the startup script and see the list of dependencies if you’re only worried about one machine. But from the point of view of a developer, supporting automatic parsing of such startup scripts is impossible, because they’re defined in a Turing-complete language.

                                        Again, it’s still fine if you’re an administrator of just one machine (i.e. you’re the only user). But it’s not an optimized method when you have farms of servers (physical or VMs), and that’s the majority of cases where UNIX systems are used.

                                        Also it’s easier to install some rootkit inside shell scripts, because it’s impossible to reliably scan a bash script for undesirable command injections.

                                    2. 2

                                      While I agree that the systemd unit file syntax is sometimes weird, and I would much prefer it to use, for example, TOML instead, I do not think that shell syntax is any better. TBH it is even more confusing sometimes (as @Leonidas said).

                                    3. 6

                                      I’d rather just not have to think about it, and have systemd (or anything similar) do it for me.

                                      Don’t be surprised when you pay the price this thread speaks of for the privilege of thinking slightly less ;)

                                      1. 8

                                        Sometimes abstractions are, in fact, good. I am glad I don’t have to think about how my CPU actually works. And starting services is such a run-of-the-mill job that I don’t want to write a program that will start my service; I just want to configure how to do it.

                                      2. 3

                                        Dependencies in general are a mistake in init systems: Restarting services means that your code needs to handle unavailability anyways – so use that to simplify the init system. As a bonus, you ensure that the code paths to deal with dependencies restarting gets exercised.

                            2. 2

                              I really like systemd’s service files for the simple stuff I need to do with them (basically: execute daemon command, set user/group permission, working dir, dependencies, PID file location, that’s it). But there are other aspects of systemd I dislike. I wish someone would implement a service file parser for something like OpenRC that supports at least those basic systemd service files. It would ease cooperation among init systems quite a bit I think and make switching easier. It would also ease the life of alternative init system makers, because many upstream projects provide systemd service files already.

                            3. 3

                              A much better – and shorter – criticism of systemd is that for most people, it does a lot of stuff they just don’t need, adding to a lot of complexity.

                              This sort of computing minimalism confuses me. Should we say the exact same about the computing platforms themselves? x86 has a lot of things we don’t need so we should simply use a RISC until you need just the right parts of x86… That motherboard has too many PCI slots, I’m going to have to rule it out for one with precisely the right amount of PCI slots… If you can accomplish the task with exactly a stick and a rock why are you even using a hammer, you fool!

                              1. 3

                                It’s not ‘minimalism’ that makes me balk at systemd’s complexity. It’s that that complexity translates directly to security holes.

                                1. 2

                                  It’s really a long-standing principle in engineering to make things as simple as feasible and reduce the number of moving parts. It’s cheaper to produce, less likely to break, easier to fix, etc. There is nothing unique about software in this regard really.

                                  I never claimed to be in favour of absolute minimalism over anything else.

                              1. 9

                                That’s surprisingly neat. I’d still choose C in most situations, but I guess the Bash version could be useful in some extreme environment where you don’t have access to a compiler…

                                Oh I know: data exfiltration.

                                1. 5

                                  Sadly, real situations exist for some stuff: e.g. shared hosts. PHP, Perl and shell are fine; getting access to a compiler requires support requests and is timed. Not all hosts are like this, but my current site’s host (Zuver) is.

                                  EDIT: The host uses an old glibc, and I had issues compiling on my machine and uploading the binaries. I’ve since had luck with something along the lines of musl-gcc -static -libc-static

                                  At one point I added a C program into my (sh) website backend. It split up HTTP multipart submissions into files with controlled filenames, so shell (or any language) manipulation could be easy afterwards. Eg for when submitting both text and image files via one POST. Decided to cull it and stick to simpler methods of getting pictures up.

                                1. 1

                                  Nice article. I still think that if you can use 256 or even 512 bits without degrading the service in question, I’d personally go for that.

                                  1. 4

                                    There are indeed valid reasons to use 512-bit hashes even if you think 256 bits are plenty:

                                    • EdDSA takes advantage of the bigger hash to remove bias from the modulo-L operation, and to separate a 512-bit hash into two independent numbers (one to feed the nonce, one to make the public key).
                                    • My own Monokex protocol suite uses 512-bit hashes because Monocypher uses Blake2b to begin with. Earlier versions even cut the hash in two instead of using HKDF.
                                    • In general, 512-bit hashes have designs that run faster on modern 64-bit CPUs. There is often little point in cutting down the size of the digest when the hash was chosen for its speed to begin with.

                                    That said, if you’re using 512-bit hashes for security reasons, then you should also switch to bigger curves like Curve448. Otherwise the cryptanalyst will just attack the curve, which has now become the weakest link.

                                    1. -1

                                      to remove bias from the modulo L operation

                                      Why not just truncate the hash to |L| bits?

                                      1. 5

                                        Noooo, you will break everything!

                                        The thing is, signature schemes like ECDSA and EdDSA (of which Ed25519 is an instantiation) require some random nonce at some point. That nonce must be between 0 and the order of the curve L, without any bias. If the attacker can detect even a slight bias in the way this number is generated, they can exploit it and recover the private key. The absolute worst bias is outright reusing the nonce, and that reveals your private key instantly. That’s how Sony lost its own PS3 master keys.

                                        Bias is pretty bad.

                                        Now you can’t just truncate down to |L| bits, because you’d overflow some of the time, and that introduces the dreaded bias. The obvious defence would be to test whether your random number exceeds L, and reject it and try again if it does. Problem is, that’s not constant time, and now you have to worry about possible flows of information from secrets to timings. Workable, but not great.

                                        Ed25519 has two cleverer defenses:

                                        1. The order of the curve L is very very close to 2^252. So much so that numbers above 2^252 don’t even come up in random tests, or even in Google’s Wycheproof test vectors. If you just truncated your random number down to 252 bits, the bias would likely not even be noticeable.

                                        2. Since this defence is specific to Curve25519, EdDSA also takes the precaution of computing the modulo of a ludicrously large hash (512 bits), so that even if the bias of a crude modulo would have been noticeable, it no longer is. The likelihood of picking a number from the last, incomplete group (the one between the largest multiple of L below 2^512 and 2^512) is so overwhelmingly low that it can be considered flat out impossible.

                                        Oh, and EdDSA has a third precaution, which is to generate the random nonce deterministically, from the message and the private key. That way you can’t misuse the API and reuse a nonce like Sony did. That said, nothing prevents some clever fool from picking a random number by themselves. I personally would never write such an API (there are safer ways to get the same advantages), but some do take this approach.

                                        1. 0

                                          I see now. I was under the mistaken impression that |L| all-1 bits fit in L (as a number), but that obviously makes no sense unless L itself is all-1s, which it isn’t. I didn’t realize it would overflow, my bad.

                                          (Simple example for those still confused: Take L = 10, which is 0b1010. If you take the first 4 bits from a hash that starts with 0xff..., you’d take 0b1111. But 0b1111 > 0b1010, so it wouldn’t actually fit.)

                                          1. 2

                                            To expand on your example, we could avoid overflow by taking the first 3 bits instead. That way you never exceed 10. The problem is that you’ll never select 8 or 9 that way, and that kind of bias can be exploited.

                                            Similarly, you can take 4 bits, and do the modulo 10 to avoid overflow. But then numbers from 0 to 5 are selected twice as often as the numbers from 6 to 9.

                                            Or you can take 8 bits before you take the modulo. Now the numbers from 0 to 5 are each selected 26 times out of 256, and the numbers from 6 to 9 are each selected 25 times out of 256. The bias is much smaller, and possibly not exploitable at all.
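
                                            The counts above are easy to verify by brute force. A small sketch in plain POSIX shell, reducing every 4-bit input modulo 10 and counting how often each result appears:

                                            ```shell
                                            # Reduce every 4-bit value (0..15) modulo 10 and count the results.
                                            # 0..5 each appear twice (once from x, once from x+10); 6..9 appear
                                            # only once: that is the bias.
                                            i=0
                                            while [ "$i" -lt 16 ]; do
                                              echo $((i % 10))
                                              i=$((i + 1))
                                            done | sort -n | uniq -c
                                            ```

                                            Replacing 16 with 256 shows the 26-versus-25 split of the 8-bit version.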

                                  1. 5

                                    One thing that (I believe) is very difficult to measure when all you have is the code, is how much time and effort went into it.

                                    Without that data, I have this feeling that bug hunting simply stopped when the thing was deemed “good enough”. And I mean that at every stage of development. If something sorta makes sense, I fire up the compiler. If it sorta works, I test a few cases. If it looks like it works, I may write some unit tests or something to increase confidence. Then I just push the commit. And before publishing an official release, I simply make sure the code and tests are up to snuff, where “snuff” here is determined almost entirely by the problem domain: a simple one-off script is often usable even if it’s buggy, but a crypto library better be hardened up to 10-point steel.

                                    Assuming I work correctly, whether I use C or Haskell will have little bearing on the end result. It may, however, have a significant effect on how much time I spend on the whole project. (And that effect may depend on the domain.)

                                    I’m not sure how much evidence can be derived from GitHub repositories. My guess is not much.

                                    1. 9

                                      I’m not sure how much evidence can be derived from GitHub repositories. My guess is not much.

                                      Someone asked this in Jan Vitek’s talk on the paper. His response was that he thinks the entire project was doomed from the start, but nobody would believe them. They had to first prove the original paper was internally flawed, “beat them at their own game”, before people would accept the more extreme claim that “comparing language bugs by github repos is a bad idea.”

                                    1. 32

                                      To me the big deal is that Rust can credibly replace C, and offers enough benefits to make it worthwhile.

                                      There are many natively-compiled languages with garbage collection. They’re safer than C and easier to use than Rust, but by adding GC they’ve exited the C niche. 99% of programs may work just fine with a GC, but for the rest the only practical options were C and C++ until Rust showed up.

                                      There were a few esoteric systems languages or C extensions that fixed some warts of C, but leaving the C ecosystem has real costs, and I could never justify use of a “weird” language just for a small improvement. Rust offered major safety, usability and productivity improvements, and managed to break out of obscurity.

                                      1. 38

                                        Ada provided everything except ADTs and linear types, including seamless interoperability with C, 20 years before Rust. Cyclone was Rust before Rust, and it was abandoned in a similar state as Rust was when it took off. Cyclone is dead, but Ada got a built-in formal verification toolkit in its latest revision—for some, that stuff alone can be a reason to pick it over anything else for a new project.

                                        I have nothing against Rust, but the reason it’s popular is that it came at the right time, in the right place, from a sufficiently big name organization. It’s one of the many languages based on those ideas that, fortunately, happened to succeed. And no, when it first got popular it wasn’t really practical. None of these points makes Rust bad. One just should always see the bigger picture, especially when it comes to heavily hyped things. You need to know the other options to decide for yourself.

                                        Other statically-typed languages allow whole-program type inference. While convenient during initial development, this reduces the ability of the compiler to provide useful error information when types no longer match.

                                        Only in languages that cannot unambiguously infer the principal type. Whether to make a tradeoff between that and support for ad hoc polymorphism or not is subjective.

                                        1. 15

                                          I saw Cyclone when it came out, but at the time I dismissed it as “it’s C, but weird”. It had the same basic syntax as C, but added lots of pointer sigils. It still had the same C preprocessor and the same stdlib.

                                          Now I see it has a feature set much closer to Rust’s (tagged unions, patterns, generics), but Rust “sold” them better. Rust used these features for Result which is a simple yet powerful construct. Cyclone could do that, but didn’t. It kept nullable pointers and added Null_Exception.

                                          1. 12

                                            Ada provided everything except ADTs and linear types

                                            Unfortunately for this argument, ADTs, substructural types and lifetimes are more exciting than that “everything except”. Finally the stuff that is supposed to be easy in theory is actually easy in practice, like not using resources you have already cleaned up.

                                            Ada got a built-in formal verification toolkit in its latest revision

                                            How much of a usability improvement is using these tools compared to verifying things manually? What makes types attractive to many programmers is not that they are logically very powerful (they are usually not!), but rather that they give a super gigantic bang for the buck in terms of reduction of verification effort.

                                            1. 17

                                              I would personally not compare Ada and Rust directly as they don’t even remotely fulfill the same use-cases.

                                              Sure, there have been languages that have done X, Y, Z before Rust (the project itself does not lay false claim to inventing those parts of the language which may have been found elsewhere in the past), but the actual distinguishing factor for Rust that places it into an entirely different category from Ada is how accessible and enjoyable it is to interact with while providing those features.

                                              If you’re in health or aeronautics, you should probably be reaching for the serious, deep toolkit provided by Ada, and I’d probably be siding with you in saying those people probably should have been doing that for the last decade. But Ada is really not for the average engineer. It’s an amazing albeit complex language that not only represents a long history of incredible engineering, but also presents a very real barrier to entry that’s simply incomparable to Rust’s.

                                              If, for example, I wanted today to start writing from scratch a consumer operating system, a web browser, or a video game as a business venture, I would guarantee you Ada would not even be mentioned as an option to solve any of those problems, unless I wanted to sink my own ship by limiting myself to pick from ex-government contractors as engineers, whose salaries I’d likely be incapable of matching. Rust on the other hand actually provides a real contender to C/C++/D for people in these problem spaces, who don’t always need (or in some cases, even want) formal verification, but just a nice practical language with a systematic safety net from the memory footguns of C/C++/D. On top of that, it opens up these features, projects, and their problem spaces to many new engineers with a clear, enjoyable language free of confusing historical baggage.

                                              1. 6

                                                Have you ever used Ada? Which implementation?

                                                1. 15

                                                  I’ve never published production Ada of any sort and am definitely not an Ada regular (let alone pro) but I studied and had a fondness for Spark around the time I was reading “Type-Driven Development with Idris” and started getting interested in software proofs.

                                                  In my honest opinion the way the base Ada language is written (simple, and plain operator heavy) ends up lending really well to extension languages, but it can also make it difficult for beginners to distinguish the class of concept used at times, whereas Rust’s syntax has a clear and immediate distinction between blocks (the land of namespaces), types (the land of names), and values (the land of data). In terms of cognitive load then, it feels as though these two languages are communicating at different levels. Like Rust is communicating in the mode of raw values and their manipulation through borrows, while the lineage of Ada languages communicates at a level that, in my amateur Ada-er view, centers on the expression of properties of your program (and I don’t just mean the Spark stuff, obviously). I wasn’t even born when Ada was created, and so I can’t say for sure without becoming an Ada historian (not a bad idea…), but this sort of seems like a product of Ada’s heritage (just as Rust’s is so obviously written to look like C++).

                                                  To try and clarify this ramble of mine, in my schooling experience, many similarly young programmers of my age are almost exclusively taught to program at an elementary level of abstract instructions with the details of those instructions removed, and then after being taught a couple type-level incantations get a series of algorithms and their explanations thrown at their face. Learning to consider their programs specifically in terms of expressing properties of that program’s operations becomes a huge step out of that starting box (that some don’t leave long after graduation). I think something that Rust’s syntax does well (if possibly by mistake) is fool the amateur user into expressing properties of their programs on accident while that expression becomes part of what seems like just a routine to get to the meat of a program’s procedures. It feels to me that expressing those properties is intrinsic to the language of speaking Ada, and thus presents a barrier intrinsic to the programmer’s understanding of their work, which given a different popular curriculum could probably just be rendered as weak as paper to break through.

                                                  Excuse me if these thoughts are messy (and edited many times to improve that), but beyond the more popular issue of familiarity, they’re sort of how I view my own honest experience of feeling more quickly “at home” in moving from writing Rust to understanding Rust, compared to moving from just writing some form of Ada, and understanding the program I get.

                                              2. 5

                                                Other statically-typed languages allow whole-program type inference. While convenient during initial development, this reduces the ability of the compiler to provide useful error information when types no longer match.

                                                Only in languages that cannot unambiguously infer the principal type. Whether to make a tradeoff between that and support for ad hoc polymorphism or not is subjective.

                                                OCaml can unambiguously infer the principal type, and I still find myself writing the type of top level functions explicitly quite often. More than once have I been guided by a type error that only happened because I wrote the type of the function I was writing in advance.

                                                At the very least, I check that the type of my functions match my expectations, by running the type inference in the REPL. More than once have I been surprised. More than once that surprise was caused by a bug in my code. Had I not checked the type of my function, I would catch the bug only later, when using the function, and the error message would have made less sense to me.

                                                1. 2

                                                  At the very least, I check that the type of my functions match my expectations, by running the type inference in the REPL

                                                  Why not use Merlin instead? Saves quite a bit of time.

                                                  That’s a tooling issue too of course. Tracking down typing surprises in OCaml is easy because the compiler outputs type annotations in a machine-readable format and there’s a tool and editor integrations that allow me to see the type of every expression in a keystroke.

                                                  1. 2

                                                    Why not use Merlin instead? Saves quite a bit of time.

                                                    I’m a dinosaur, that didn’t take the time to learn even the existence of Merlin. I’m kinda stuck in Emacs’ Tuareg mode. Works for me for small projects (all my OCaml projects are small).

                                                    That said, my recent experience with C++ and QtCreator showed me that having warnings at edit time is even more powerful than a REPL (at least as long as I don’t have to check actual values). That makes Merlin look very attractive all of a sudden. I’ll take a look, thanks.

                                              3. 5

                                                Rust can definitely credibly replace C++. I don’t really see how it can credibly replace C. It’s just such a fundamentally different way of approaching programming that it doesn’t appeal to C programmers. Why would a C programmer switch to Rust if they hadn’t already switched to C++?

                                                1. 43

                                                  I’ve been a C programmer for over a decade. I’ve tried switching to C++ a couple of times, and couldn’t stand it. I’ve switched to Rust and love it.

                                                  My reasons are:

                                                  • Robust, automatic memory management. I have the same amount of control over memory, but I don’t need goto cleanup.
                                                  • Fearless multi-core support: if it compiles, it’s thread-safe! rayon is much nicer than OpenMP.
                                                  • Slices are awesome: no array to pointer decay. Work great with substrings.
                                                  • Safety is not just about CVEs. I don’t need to investigate memory murder mysteries in GDB or Valgrind.
                                                  • Dependencies aren’t painful.
                                                  • Everything builds without fuss, even when supporting Windows and cross-compiling to iOS.
                                                  • I can add two signed numbers without UB, and checking if they overflow isn’t a party trick.
                                                  • I get some good parts of C++ such as type-optimized sort and hash maps, but without the baggage C++ is infamous for.
                                                  • Rust is much easier than C++. Iterators are so much cleaner (just a next() method). I/O is a Read/Write trait, not a hierarchy of iostream classes.
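Two of those points, overflow checking and slices, can be seen in a tiny sketch using nothing but the standard library:

```rust
fn main() {
    // Signed overflow: checked and saturating variants are explicit
    // methods on the integer types, not UB and not a party trick.
    let a: i32 = i32::MAX;
    assert_eq!(a.checked_add(1), None);        // overflow detected
    assert_eq!(a.saturating_add(1), i32::MAX); // or clamped instead

    // Slices carry their length: no array-to-pointer decay, and a
    // substring is just a sub-slice borrowing the original data.
    let s = "hello world";
    let word: &str = &s[6..];
    assert_eq!(word, "world");
    assert_eq!(&s.as_bytes()[..5], b"hello");
}
```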
                                                  1. 6

                                                    I also like Rust and I agree with most of your points, but this one bit seems not entirely accurate:

                                                    Fearless multi-core support: if it compiles, it’s thread-safe! rayon is much nicer than OpenMP.

                                                    AFAIK Rust:

                                                    • doesn’t guarantee thread-safety — it guarantees the lack of data races, but doesn’t guarantee the lack of e.g. deadlocks;
                                                    • guarantees the lack of data races, but only if you didn’t write any unsafe code.
                                                    1. 20

                                                      That is correct, but this is still an incredible improvement. If I get a deadlock I’ll definitely notice it, and can dissect it in a debugger. That’s easy-peasy compared to data races.

                                                      Even unsafe code is subject to thread-safety checks, because “breaking” of Send/Sync guarantees needs separate opt-in. In practice I can reuse well-tested concurrency primitives (e.g. WebKit’s parking_lot) so I don’t need to write that unsafe code myself.

                                                      Here’s an anecdote: I wrote some single-threaded batch-processing spaghetti code. Since each item was processed separately, I decided to parallelize it. I changed iter() to par_iter() and the compiler immediately warned me that in one of my functions I’ve used a 3rd party library which used an HTTP client library which used an event loop library which stored some event loop data in a struct without synchronization. It pointed exactly where and why the code was unsafe, and after fixing it I had an assurance the fix worked.
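rayon is a third-party crate, but the same Send/Sync machinery is visible with the standard library alone; a minimal sketch (the function name is mine):

```rust
use std::thread;

// Sum a slice in parallel chunks using scoped threads. The compiler
// accepts this only because &[u64] is Send + Sync; a closure capturing
// a type with unsynchronized interior state (like the event-loop struct
// in the anecdote) would be rejected at the `spawn` call site.
fn parallel_sum(items: &[u64]) -> u64 {
    thread::scope(|s| {
        let handles: Vec<_> = items
            .chunks(4)
            .map(|chunk| s.spawn(move || chunk.iter().sum::<u64>()))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}

fn main() {
    assert_eq!(parallel_sum(&[1, 2, 3, 4, 5, 6, 7, 8]), 36);
}
```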

                                                      1. 6

                                                        I share your enthusiasm. Just wanted to prevent a common misconception from spreading.

                                                        Here’s an anecdote: I wrote some single-threaded batch-processing spaghetti code. Since each item was processed separately, I decided to parallelize it. I changed iter() to par_iter() and the compiler immediately warned me that in one of my functions I’ve used a 3rd party library which used an HTTP client library which used an event loop library which stored some event loop data in a struct without synchronization. It pointed exactly where and why the code was unsafe, and after fixing it I had an assurance the fix worked.

                                                        I did not know it could do that. That’s fantastic.

                                                      2. 9

                                                        Data races in multi-threaded code are about 100x harder to debug than deadlocks in my experience, so I am happy to have an imperfect guarantee.

                                                        guarantees the lack of data races, but only if you didn’t write any unsafe code.

                                                        Rust application code generally avoids unsafe.

                                                        1. 4

                                                          Data races in multi-threaded code are about 100x harder to debug than deadlocks in my experience, so I am happy to have an imperfect guarantee.

                                                          My comment was not a criticism of Rust. Just wanted to prevent a common misconception from spreading.

                                                          Rust application code generally avoids unsafe.

                                                          That depends on who wrote the code. And unsafe blocks can cause problems that show in places far from the unsafe code. Meanwhile, “written in Rust” is treated as a badge of quality.

                                                          Mind that I am a Rust enthusiast as well. I just think we shouldn’t oversell it.

                                                        2. 7

                                                          guarantees the lack of data races, but only if you didn’t write any unsafe code.

                                                          As long as your unsafe code is sound it still provides the guarantee. That’s the whole point, to limit the amount of code that needs to be carefully audited for correctness.

                                                          1. 2

                                                            I know what the point is. But proving things about code is generally not something that programmers are used to or good at. I’m not saying that the language is bad, only that we should understand its limitations.

                                                          2. 1

                                                            I find it funny that any critique of Rust needs to be prefixed with a disclaimer like “I also like Rust”, to fend off the Rust mob.

                                                        3. 11

                                                          This doesn’t really match what we see and our experience: a lot of organisations are investigating their replacement of C and Rust is on the table.

                                                          One advantage that Rust has is that it actually lands between C and C++. It’s pretty easy to move towards a more C-like programming style without having to ignore half of the language (this comes from the lack of classes, etc.).

                                                          Rust is much more “C with Generics” than C++ is.

                                                          We currently see a high interest in the embedded world, even in places that skipped adopting C++.

                                                          I don’t think the fundamental difference in approach is as large as you make it (sorry for the weak rebuttal, but that’s hard to quantify). But also: approaches are changing, so that’s less of a problem for us, as long as we are effective at arguing for our approach.

                                                          1. 2

                                                            It’s just such a fundamentally different way of approaching programming that it doesn’t appeal to C programmers. Why would a C programmer switch to Rust if they hadn’t already switched to C++?

                                                            Human minds are sometimes less flexible than rocks.

                                                            That’s why we still have that stupid Qwerty layout: popular once for mechanical (and historical) reasons, used forever since. As soon as the mechanical problems were fixed, Sholes himself devised a better layout, which went unused. Much later, Dvorak devised another better layout, and it is barely used today. People thinking in Qwerty simply can’t bring themselves to take the time to learn the superior layout. (I know: I’m in a similar situation, though my current layout is not Qwerty).

                                                            I mean, you make a good point here. And that’s precisely what makes me sad. I just hope this lack of flexibility won’t prevent C programmers from learning superior tools.

                                                            (By the way, I would choose C over C++ in many cases, I think C++ is crazy. But I also know ML (OCaml), a bit of Haskell, a bit of Lua… and that gives me perspective. Rust as I see it is a blend of C and ML, and though I have yet to write Rust code, the code I have read so far was very easy to understand. I believe I can pick up the language pretty much instantly. In my opinion, C programmers that only know C, awk and Bash are unreasonably specialised.)

                                                            1. 1

                                                              I tried to switch to Dvorak twice. Both times I started to get pretty quick after a couple of days but I cheated: if I needed to type something I’d switch back to QWERTY, so it never stuck.

                                                              The same is true of Rust, incidentally. Tried it out a few times, was fun, but then if I want to get anything useful done quickly it’s just been too much of a hassle for me personally. YMMV of course. I fully intend to try to build something that’s kind of ‘C with lifetimes’, a much simpler Rust (which I think of as ‘C++ with lifetimes’ analogously), in the future. Just have to, y’know, design it. :D

                                                              1. 3

                                                                I too was tempted at some point to design a “better C”. I need:

                                                                • Generics
                                                                • Algebraic data types
                                                                • Type classes
                                                                • Coroutines (for I/O and network code; I need a way out of raw poll(2))
                                                                • Memory safety

                                                                With the possible exception of lifetimes, I’d end up designing Rust, mostly.
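For what it’s worth, the first three items on that list map directly onto Rust constructs; a minimal sketch (all names mine):

```rust
// Generics + algebraic data type: a tree parameterized over its leaf type.
enum Tree<T> {
    Leaf(T),
    Node(Box<Tree<T>>, Box<Tree<T>>),
}

// Type class (trait in Rust terms): an interface implemented per type.
trait Size {
    fn size(&self) -> usize;
}

impl<T> Size for Tree<T> {
    fn size(&self) -> usize {
        // Pattern matching over the ADT; the compiler checks exhaustiveness.
        match self {
            Tree::Leaf(_) => 1,
            Tree::Node(l, r) => l.size() + r.size(),
        }
    }
}

fn main() {
    let t = Tree::Node(
        Box::new(Tree::Leaf(1)),
        Box::new(Tree::Node(Box::new(Tree::Leaf(2)), Box::new(Tree::Leaf(3)))),
    );
    assert_eq!(t.size(), 3);
}
```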

                                                                1. 2

                                                                  I agree that you need some way of handling async code, but I don’t think coroutines are it, at least not in the async/await form. I still feel like the ‘what colour is your function?’ stuff hasn’t been solved properly. Any function with a callback (sort with a key/cmp function, filter, map, etc.) needs an async_ version that takes a callback and calls it with await. Writing twice as much code that’s trivially different by adding await in some places sucks, but I do not have any clue what the solution is. Maybe it’s syntactic. Maybe everything should be async implicitly and you let the compiler figure out when it can optimise things down to ‘raw’ calls.


                                                                  Worth thinking about at least.

                                                                  1. 4

                                                                    Function colors are effects. There are two ways to solve this problem:

                                                                    1. To use polymorphism over effects. This is what Haskell does, but IMO it is too complex.
                                                                    2. To split large async functions into smaller non-async ones, and dispatch them using an event loop.

                                                                    The second approach got a bad reputation due to its association with “callback hell”, but IMO this reputation is undeserved. You do not need to represent the continuation as a callback. Instead, you can

                                                                    1. Define a gigantic sum type of all possible intermediate states of asynchronous processes.
                                                                    2. Implement each non-async step as an ordinary small function that maps intermediate states (not necessarily just one) to intermediate states (not necessarily just one).
                                                                    3. Implement the event loop as a function that, iteratively,
                                                                      • Takes states from an event queue.
                                                                      • Dispatches an appropriate non-async step.
                                                                      • Pushes the results, which are again states, back into the event queue.

                                                                    Forking can be implemented by returning multiple states from a single non-async step. Joining can be implemented by taking multiple states as inputs in a single non-async step. You are not restricted to joining processes that were forked from a common parent.

                                                                    In this approach, you must write the event loop yourself, rather than delegate it to a framework. For starters, no framework can anticipate your data type of intermediate states, let alone the data type of the whole event queue. But, most importantly, the logic for dispatching the next non-async step is very specific to your application.


                                                                    The advantages of this approach:

                                                                    1. Because the data type of intermediate states is fixed, and the event loop is implemented in a single centralized place, it is easier to verify that your code works “in all cases”, either manually or using tools that explicitly model concurrent processes using state machines (e.g., TLA+).

                                                                    2. Because intermediate states are first-order values, rather than first-class functions, the program is much easier to debug. Just stop the event loop at an early time and pretty-print the event queue. (ML can automatically pretty-print first-order values in full detail. Haskell requires you to define a Show instance first, but this definition can be generated automatically.)


                                                                    The drawbacks:

                                                                    1. If your implementation language does not provide sum types and/or pattern matching, you will have a hard time checking that every case has been covered, simply because there are so many cases.

                                                                    2. The resulting code is very much non-extensible. To add new asynchronous processes, you need to add constructors to the sum type of intermediate states. This will make the event loop fail to type check until you modify it accordingly. (IMO, this is not completely a drawback, because it forces you to think about how the new asynchronous processes interact with the old ones. This is something that you eventually have to do anyway, but some people might prefer to postpone it.)
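A minimal sketch of this architecture, with a made-up two-step “fetch then log” process (every name here is hypothetical):

```rust
use std::collections::VecDeque;

// One sum type covers every intermediate state of every async process.
#[derive(Debug)]
enum State {
    FetchRequested(u32),   // hypothetical "fetch" process, by request id
    FetchDone(u32, String),
    Logged(String),
}

// Each non-async step maps a state to zero or more successor states.
fn step(state: State, log: &mut Vec<String>) -> Vec<State> {
    match state {
        State::FetchRequested(id) => {
            // A real system would start I/O here and enqueue the
            // completion later; the sketch completes immediately.
            vec![State::FetchDone(id, format!("payload-{id}"))]
        }
        State::FetchDone(_, payload) => vec![State::Logged(payload)],
        State::Logged(msg) => {
            log.push(msg);
            vec![] // terminal state: no successors
        }
    }
}

// The hand-written event loop: pop a state, dispatch, push successors.
fn run(initial: Vec<State>) -> Vec<String> {
    let mut queue: VecDeque<State> = initial.into();
    let mut log = Vec::new();
    while let Some(state) = queue.pop_front() {
        queue.extend(step(state, &mut log));
    }
    log
}

fn main() {
    // Two "processes" interleave on the queue; forking would just mean
    // a step returning more than one successor state.
    let log = run(vec![State::FetchRequested(1), State::FetchRequested(2)]);
    assert_eq!(log, vec!["payload-1", "payload-2"]);
}
```

Adding a new process means adding constructors to `State` and arms to `step`, and the exhaustiveness check in `match` is exactly the non-extensibility (or forcing function, depending on your view) described in the second drawback.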

                                                                    1. 3

                                                                      I agree that you need some way of handling async code, but I don’t think coroutines are it

                                                                      Possibly. I actually don’t know. I’d take whatever lets me write code that looks like I’m dispatching an unlimited number of threads, but dispatches the computation over a reasonable number of threads, possibly just one. Hell, my ideal world is green threads, actually. Perhaps I should have led with that…

                                                                      Then again, I don’t know the details of the tradeoffs involved. Whatever lets me solve the 1M-connections problem cleanly and efficiently works for me.

                                                            2. 5

                                                              I agree with @milesrout. I don’t think Rust is a good replacement for C. This article goes into some of the details of why - https://drewdevault.com/2019/03/25/Rust-is-not-a-good-C-replacement.html

                                                              1. 17

                                                                Drew has some very good points. It’s a shame he ruins them with all the other ones.

                                                                1. 25

                                                                  Drew has a rusty axe to grind: “Concurrency is generally a bad thing” (come on!), “Yes, Rust is more safe. I don’t really care.”

                                                                  Here’s a rebuttal of that awful article: https://telegra.ph/Replacing-of-C-with-Rust-has-been-a-great-success-03-27 (edit: it’s a tongue-in-cheek response. Please don’t take it too seriously: the original exaggerated negatives, so the response exaggerates positives).

                                                                  1. -3

                                                                    Drew is right and this article you link to is just blatant fanboyism. It’s the classic example of fanboyism because it tries to respond to every point, yet some of them are patently true. Like, really? You can’t argue that Rust is more portable than C on the basis that there’s a little bit of leaky abstraction over Windows-specific stuff in its standard library. C is just demonstrably more portable.

                                                                    It criticises C for not changing enough, but change is bad and C89 is all C ever needed in terms of standardisation for the most part. About the only useful thing added since then was stdint.h. -ftrapv exists and thus wanky nonsense about signed overflow being undefined is invalid.

                                                                    I love this bit in particular:

                                                                    In C I could use make, gnu make, cmake, gyp, autotools, bazel, ninja, meson, and lots more. The problem is, C programmers have conflicting opinions on which of these is the obvious right choice, and which tools are total garbage they’ll never touch.

                                                                    In Rust I can use Cargo. It’s always there, and I won’t get funny looks for using it.

                                                                    In C you can use whatever you like. In Rust, if you don’t like Cargo, you just don’t use Rust. That’s the position I’m in. This isn’t better.

                                                                    1. 11

                                                                      I didn’t read that post as blatant fanboyism, but if someone’s positive and successful experience with Rust is fanboyism, let’s agree to disagree for now.

                                                                      It criticises C for not changing enough, but change is bad and C89 is all C ever needed in terms of standardisation for the most part.

Change isn’t necessarily bad! With a few exceptions for libraries/applications opting into unstable features, you can still compile and use Rust code originally written in 2015. Meanwhile, many of the papercuts people hit since then were addressed in a backwards-compatible way.

                                                                      About the only useful thing added since then was stdint.h. -ftrapv exists and thus wanky nonsense about signed overflow being undefined is invalid.

                                                                      Defaults matter a great deal. People have spent a heroic amount of work removing causes of exploitable behavior in “in-tree” (as much as “in-tree” exists in C…) with LLVM/ASAN, and even more work out-of-tree with toolkits like CBMC, but C is still not a safe language. There’s a massive amount of upfront (and continuous!) effort needed to keep a C-based project safe, whereas Rust works for me out of the box.

                                                                      In C you can use whatever you like. In Rust, if you don’t like Cargo, you just don’t use Rust. That’s the position I’m in. This isn’t better.

My employer has a useful phrase that I’ll borrow: “undifferentiated heavy lifting”. I view deciding which build system I should use for a project as “undifferentiated heavy lifting”, as Cargo covers 90-95% of the use cases I need. The remainder is either patched over using ad-hoc scripts or there is an upcoming RFC addressing it. This allows me to focus on my project instead of spinning cycles wrangling build systems! That being said, I’ll be the first to admit that Cargo isn’t the perfect build system for every use case, but for my work (and increasingly, for several organizations at my employer), Cargo and Rust are an excellent replacement for C.

                                                                      1. 9

                                                                        let’s imagine I download some C from github. How do I build it?

                                                                        hopefully it’s ./configure && make && make install, but maybe not! Hopefully I have the dependencies, but maybe not! Hopefully if I don’t have the dependencies they are packaged for my distro, but maybe not!

                                                                        let’s imagine I download some rust from github. How do I build it?

                                                                        cargo build --release


                                                                        I know which one of those I prefer, personally

                                                                        1. -1

                                                                          You read the README. It says what you need to do.

cargo build --release

                                                                          This ease-of-use encourages stuff like needing to compile 200+ Rust dependencies just to install the spotifyd AUR package. It’s a good thing for there to be a bit of friction adding new dependencies, in my opinion.

                                                                          1. 13

                                                                            So the alternative that you propose is to:

1. Try to figure out which file(s) (if any) specify the dependencies to install
2. Figure out what those dependencies are called on your platform, or whether they even exist
3. Figure out what to do when they don’t exist: whether you can compile them from source, how, etc.
4. Figure out which versions you need, because the software may not work with the latest version available on your platform
5. Figure out how to install that older version without breaking whatever your system may have installed, making sure all your linker flags and what not are right, etc.
6. Figure out how to actually configure/install the darn thing, which at this point is something you have probably lost interest in.

Honestly, the argument that ease of use leads to 200+ dependencies is a weak one. Even if all projects suffered from this, from the user’s perspective it’s still easier to just run cargo build --release and be done with it. Even if it takes 10 minutes to build, that’s probably far less time than doing all the above steps manually.

                                                                            1. 7

Dude, everyone here has had to install C software in some environment at some point. Like, we all know it’s not “just read the docs”, and you know we know. What’s the point of pretending it’s not a nightmare as a rule?

                                                                          2. 7

                                                                            Sorry you got downvoted to oblivion. You make some good points, but you also tend to present trade-offs and opinions as black-and-white facts. You don’t like fanboyism, but you also speak uncritically about C89 and C build systems.

For example, -ftrapv exists and indeed catches overflows at run time, but it doesn’t override the C spec, which defines signed overflow as UB. Optimizers take advantage of that, and will remove naive checks such as if (a>0 && b>0 && a+b<0), because C allows treating the overflow as impossible. It’s not “wanky nonsense”. It’s a real C gotcha that has led to exploitable buffer overflows.

                                                                            1. 6

                                                                              -ftrapv exists and thus wanky nonsense about signed overflow being undefined is invalid.

                                                                              Nope, the existence of this opt in flag doesn’t make the complaints about signed overflow nonsensical. When I write a C library, I don’t control how it will be compiled and used, so if I want any decent amount of portability, I cannot assume -ftrapv will be used. For instance, someone else might be using -fwrapv instead, so they can check overflows more easily in their application code.

                                                                              In C you can use whatever you like.

So can I. So can they. Now good luck integrating 5 external libraries that use, say, CMake, autotools, and Ninja. When there’s one true way to do it, we can afford lots of simplifying assumptions that make even a non-ideal one true way much simpler than something like CMake.

                                                                              (By the way, it seems that in the C and C++ worlds, CMake is mostly winning, as did the autotools before, and people will look at you funny for choosing something else.)

                                                                              1. -2

                                                                                I think I’m done discussing anything remotely controversial on this website. I’m going to get banned or something because people keep flagging my comments as ‘incorrect’ when they’re literally objective fact just because they can’t handle that some people don’t like Rust. It’s just sad. I thought this site was meant to be one where people could maturely discuss technical issues without fanboyism but it seems like while that’s true of most topics, when it comes to Rust it doesn’t matter where you are on the internet: the RESF is out to get you.

                                                                                It’s not like I’m saying ‘RUST BAD C GOOD’ or some simplistic nonsense. I’ve said elsewhere in the thread I think it’s a great alternative to C++, but it’s just so fundamentally different from C in so many ways that it doesn’t make sense to think of it as a C replacement. I’d love to see a language that’s more like ‘C with lifetimes’ than Rust which is ‘C++ with lifetimes’. Something easier to implement, more portable, but with those memory safety guarantees.

                                                                                1. 12

                                                                                  I thought this site was meant to be one where people could maturely discuss technical issues

                                                                                  It is. Maturity implies civility, in which almost every comment I read of yours is lacking, regardless of topic. Like, here, there are plenty of less abrasive ways of wording what you tried to say (“wanky nonsense” indeed). Then you assume that you are being downvoted because you hurt feelings with “objective facts” and everyone who disagreed with you is a fanboy, without considering that you could simply be wrong.

                                                                                  Lobste.rs has plenty of mature technical discussion. This ain’t it.

                                                                                  1. 5

                                                                                    Drew is right and this article you link to is just blatant fanboyism.

is not at all objective. You are leaning far out of the window here, and people didn’t appreciate it.

                                                                                    It’s fine to be subjective, but if you move the discussion to that field, be prepared for the response to be subjective.

                                                                                    1. 3

                                                                                      I’d like you to stay.

                                                                                      Before clicking “Post” I usually click “Preview” and read what I wrote. If you think this is a good idea, feel free to copy it :)

                                                                                      1. 2

A lot of the design of Rust seems to be adding features to help with inherent ergonomics issues in the lifetime system; out of interest, what are some of the things Rust does (or doesn’t do) that you would change to make it more minimalistic?

I think it’s right not to view Rust as a C replacement in the general case. I kind of view it as an alternative to C++ for programmers who wanted something ‘more’ than C can provide but bounced off C++ for various reasons (complexity, pitfalls, etc).

                                                                                  2. 11

                                                                                    So many bad points from this post.

• We can safely ignore the “features per year” comparison, since the documentation it’s based on doesn’t follow the same conventions. I’ll also note that, while a Rust program written last year may look outdated (I personally don’t know Rust enough to make such an assessment), it will still work (I’ve been told breaking changes are extremely rare).

                                                                                    • C is not really the most portable language. Yes, C and C++ compilers, thanks to having decades of work behind them, target more devices than everything else put together. But no, those platforms do not share the same flavour of C and C++. There are simply too many implementation defined behaviours, starting with integer sizes. Did you know that some platforms had 32-bit chars? I worked with someone who worked on one.

I wrote a C crypto library, and went out of my way to ensure the code was very portable. And it is: embedded developers love it. There was no way, however, to ensure my code was fully portable. I right-shift negative integers (implementation-defined behaviour), and I use fixed-width integers like uint8_t (not supported on the DSP I mentioned above).

                                                                                    • C does have a spec, but it’s an incomplete one. In addition to implementation defined behaviour, C and C++ also have a staggering amount of undefined and unspecified behaviour. Rust has no spec, but it still tries to minimise undefined behaviour. I expect this point will go away when Rust stabilises and we get an actual spec. I’m sure formal verification folks will want to have a verified compiler for Rust, like we currently have for C.

• C has many implementations… and that’s actually a good point.

                                                                                    • C has a consistent & stable ABI… and so does Rust, somewhat? OK, it’s opt-in, and it’s contrived. My point is, Rust does have an FFI which allows it to talk to the outside world. It doesn’t have to be at the top level of a program. On the other hand, I’m not sure what would be the point of a stable ABI between Rust modules. C++ at least seems to be doing fine without that.

• Rust compiler flags aren’t stable… and that’s a good point. They should probably stabilise at some point. On the other hand, having one true way to manage builds and dependencies is a godsend. Whatever we’d use stable compile flags for, we probably don’t want to depart from that.

• Parallelism and concurrency are unavoidable. They’re not a bad thing; they’re the only thing that can help us cheat the speed of light, and with it single-threaded performance. The ideal modern computer more likely has a high number of in-order cores, each with a small amount of memory, and an explicit (exposed to the programmer) cache hierarchy, assuming performance and energy consumption trump compatibility with existing C (and C++) programs. Never forget that current computers are optimised to run C and C++ programs.

• Not caring about safety is stupid, or selfish. Security vulnerabilities are often mere externalities, which you can ignore as long as they don’t damage your reputation to the point of affecting your bottom line. Yay Capitalism. More seriously, safety is a subset of correctness, and correctness is the main point of Rust’s strong type system and borrow checker. C doesn’t just make it difficult to write safe programs, it makes it difficult to write correct programs. You wouldn’t believe how hard that is. My crypto library had to resort to Valgrind, sanitisers, and the freaking TIS interpreter to root out undefined behaviour. And I’m talking about “constant time” code, with fixed memory access patterns. It’s pathologically easy to test, yet writing tests took as long as writing the code, possibly longer. Part of the difficulty comes from C, not just the problem domain.

Also, Drew DeVault mentions Go as a possible replacement for C? For some domains, sure. But the thing has a garbage collector, making it instantly unsuitable for some constrained environments (either because the machine is small, or because you need crazy performance). Such constrained environments are basically the remaining niche for C (and C++). For the rest, the only thing that keeps people hooked on C (and C++) is existing code and existing skills.

                                                                                    1. 4

Rust compiler flags aren’t stable… and that’s a good point. They should probably stabilise at some point. On the other hand, having one true way to manage builds and dependencies is a godsend. Whatever we’d use stable compile flags for, we probably don’t want to depart from that.

This is wrong, though. rustc compiler flags are stable, except those behind -Z, which intentionally separates the stable interface from the unstable flags.

                                                                                      1. 2

                                                                                        Okay, I stand corrected, thanks.

                                                                                      2. 0

                                                                                        But the thing has a garbage collector, making it instantly unsuitable for some constrained environments (either because the machine is small, or because you need crazy performance).

                                                                                        The Go garbage collector can be turned off with debug.SetGCPercent(-1) and triggered manually with runtime.GC(). It is also possible to allocate memory at the start of the program and use that.

                                                                                        Go has several compilers available. gc is the official Go compiler, GCC has built-in support for Go and there is also TinyGo, which targets microcontrollers and WASM: https://tinygo.org/

                                                                                        1. 5

Can you realistically control allocations? If we have ways to make sure all allocations are either explicit or on the stack, that could work. I wonder how contrived that would be, though. The GC is on by default; that’s got to affect idiomatic code in a major way, to the point where disabling it probably means you don’t have the same language any more.

Personally, to replace C, I’d rather have a language that disables GC by default. If I am allowed to have a GC, I strongly suspect there are better alternatives than Go. (My biggest objection being “lol no generics”. If the designers made that error, it casts doubt on their ability to properly design the rest of the language, and I lose all interest instantly. Though if I were writing network code, I would also say “lol no coroutines” at anything designed after 2015 or so.)

                                                                                          1. 1

I feel like GC by default vs no GC is one of the biggest decision points when designing a language. It affects so much of how the rest of a language has to be designed. GC makes writing code soooo much easier, but you can’t easily put non-GC’d things into a GC’d language. Or maybe you can? Rust was originally going to have syntax for GC’d pointers. People are building GC’d pointers into Rust now, as libraries, where the GC manages a particular region of memory. People are designing the same stuff for C++. So maybe we will finally be able to mix them in a few years.

                                                                                            1. 1

                                                                                              Go is unrealistic not only because of GC, but also segmented stacks, thick runtime that wants to talk to the kernel directly, implicit allocations, and dynamism of interface{}. They’re all fine if you’re replacing Java, but not C.

                                                                                              D lang’s -betterC is much closer, but D’s experience shows that once you have a GC, it influences the standard library, programming patterns, 3rd party dependencies, and it’s really hard to avoid it later.

                                                                                              1. 1

                                                                                                Can you realistically control allocations? If we have ways to make sure all allocations are either explicit or on the stack, that could work.

                                                                                                IIRC you can programmatically identify all heap allocations in a given go compilation, so you can wrap the build in a shim that checks for them and fails.

                                                                                                The GC is on by default, that’s got to affect idiomatic code in a major way.

                                                                                                Somewhat, yes, but the stdlib is written by people who have always cared about wasted allocations and many of the idioms were copied from that, so not quite as much as you might imagine.

That said, if I needed to care about allocations that much, I don’t think it’d be the best choice. The language was designed and optimized to let large groups (including many clever-but-inexperienced programmers) write reliable network services.

                                                                                        2. 1

                                                                                          I don’t think replacing C is a good usecase for Rust though. C is relatively easy to learn, read, and write to the level where you can write something simple. In Rust this is decidedly not the case. Rust is much more like a safe C++ in this respect.

                                                                                          I’d really like to see a safe C some day.

                                                                                          1. 6

                                                                                            Have a look at Cyclone mentioned earlier. It is very much a “safe C”. It has ownership and regions which look very much like Rust’s lifetimes. It has fat pointers like Rust slices. It has generics, because you can’t realistically build safe collections without them. It looks like this complexity is inherent to the problem of memory safety without a GC.

                                                                                            As for learning C, it’s easy to get a compiler accept a program, but I don’t think it’s easier to learn to write good C programs. The language may seem small, but the actual language you need to master includes lots of practices for safe memory management and playing 3D chess with the optimizer exploiting undefined behavior.

                                                                                        1. 10

                                                                                          I love C++, the language. If programming in C++ was only about the language, I wouldn’t stop using it. However, the biggest pain point in C++, to me, is the hopelessly outdated ecosystem.

                                                                                          • First of all, the build system is a complete mess. Make, CMake and automake alleviate some of the problems, but the fact that every single library author invented their own hierarchy with their custom include paths guarantees a bumpy ride whenever you want to use a library, even if it’s a small, simple one. The C++ committee dropped the ball big time because it took them until now to even introduce modules, and in my eyes, it is too late now. I simply don’t care about build systems. Every second spent on build systems is wasted because it’s not spent on writing actual business logic.

• A related issue is cross-platform builds. What a mess! Every platform has its own requirements and quirks. git clone X && cd X && make is guaranteed to fail spectacularly, yet that is exactly how a build system is supposed to work. Transitive dependency resolution is really not hard at all; in terms of computer science, it has been a solved problem for at least half a century. The fact that building is such a big problem is insane. In addition to that, 80% of C++ developers think Linux is the only OS in existence, and broken builds on Windows and QNX are the norm, rather than the exception.

                                                                                          • Another issue is that almost all C++ projects for Windows need proprietary software (Visual Studio) to be built. Support for libre / free build systems on Windows (like mingw64) is limited and often not well maintained and supported.

• Eclipse CDT is so buggy, and many bugs simply aren’t fixed at all. Some bugs that I encounter daily have existed in Eclipse for a decade, which is remarkable. I expect nothing less of a Java-IDE-turned-C++-IDE.

• There was a time in C++ history when nobody was sure if standard library functions should be in the std:: namespace. Common convention was that if you write #include <stdio.h> then it’s not in the std:: namespace, but if you write #include <cstdio> then it must be in the std:: namespace. But every compiler / standard library does it differently, ensuring build breakage across the board. Some require std::, some make it optional, some require omitting std::, depending on how you include the header. The standard is now clearer, but the damage has been done.

                                                                                          • There are some pain points in the language. Locales in C and C++ are a complete mess (see this epic rant, what he says is entirely true and will bite you: https://github.com/mpv-player/mpv/commit/1e70e82baa9193f6f027338b0fab0f5078971fbe ). I have a strong dislike for Exceptions because they obscure the library APIs and control flow of the language and introduce needless ABI complexities. The fact that C++ still treats strings as “a bunch of bytes” is just completely unacceptable.

                                                                                          These days I use Rust which addresses all the issues I mentioned here (and many more), the only thing I haven’t fully figured out yet is good IDE support in Rust. I largely use simple text editors with highlighting.

                                                                                          1. 7

I believe the build-system and cross-platform issues are mainly a language problem.

If C (yes, C) had proper modules from the start, instead of this header thingy that’s basically about leveraging the preprocessor to avoid manually copy-pasting every interface, building a project and managing dependencies would have become much easier down the line.

The cross-compilation problem comes from C and C++ being woefully underspecified. Changing platforms often changes behaviour, and that’s a source of bugs. That, and the huge gaps in the standard library, which force you to call system-specific APIs every time you want to do any kind of I/O (drawing pixels and networking, for instance). Thank goodness we have middleware like Qt, SDL, or Libuv, but they don’t have the advantage of being a standard library that ships with the compiler.

                                                                                            1. 2

It seems you are not satisfied with existing editors or IDEs for Rust. What’s your dissatisfaction, and what are your expectations? No offense; I just wonder what ideas people have for a good IDE.

                                                                                              1. 2

                                                                                                Oh, I’m not dissatisfied at all. I just didn’t have the time to evaluate all the IDE options I have with Rust, so I just used simple text editors with highlighting. My expectation is basically that the IDE does not get in my way, behaves predictably, is stable, has decent performance, does code highlighting (obviously) and allows me to follow function calls, definitions and references.

                                                                                                1. 1

                                                                                                  It would be cool to have code completion/navigation that understands cfg_if! {} directives or lazy_static! {}

                                                                                                  (unless it understands them already; I tried the Rust plugin for IDEA, and that was several months ago)

                                                                                                2. 1

                                                                                                  The biggest language failure for me was slow compiles. Common Lisp, the most powerful PL, compiled really fast, with interactive and incremental compilation. Before anyone argues the differences: a guy who wrote C++ compilers made a successor, D, with comparable capabilities and super-fast compiles. Along Lisp’s lines, they even have two compilers: one with Go’s philosophy of super-fast iteration, and one targeting LLVM for the fastest runtime code. Eventually, I saw enough complex languages compile faster than C++ that it was clear bad language design was causing the problem.

                                                                                                  One language, Modula-3 in the SPIN OS, also had type-safe linking. My stint with C++ was too brief to know whether its ecosystem had linker problems. I did find a solution for C, though. That and SPIN made me think linker errors might also just be bad language design.

                                                                                                1. 10

                                                                                                  Is your Makefile a GNU makefile, or a BSD makefile?

                                                                                                  This question is why I recommend mk (the successor to Make).

                                                                                                  1. 8

                                                                                                    The best successor to Make I’ve seen is redo. State maintenance is much more explicit and the shell DSL is beautiful.

                                                                                                    1. 3

                                                                                                      djb writes embarrassingly good software.

                                                                                                      1. 3

                                                                                                        DJB writes embarrassingly good drafts. Then he leaves the rest of us to actually turn them into products. Three examples:

                                                                                                        • NaCl signature code is to this day marked as “experimental”
                                                                                                        • TweetNaCl has two uncorrected instances of undefined behaviour (left shifts of negative integers, lines 281 and 685).
                                                                                                        • the Redo link above is not from DJB; it’s from someone who re-implemented DJB’s idea.

                                                                                                        This is not a criticism. He’s very good at leading the way, and time taken to polish software is time not taken to lead the way.

                                                                                                        1. 2

                                                                                                          I’ve been thinking about this for a while. Why is so much of what he makes so embarrassingly good? I think it’s a combination of a few things:

                                                                                                          1. He is able to rid himself of all preconceived notions to reach a goal. Salsa20 and ChaCha20 are probably the most striking examples there: ARX ciphers that mortals have an actual chance of implementing correctly and securely without any notable bumps in the way, despite also being highly performant. Similarly, djbhash came from the other end: starting with the simplest possible construction and fiddling with the operations and constants until the result worked well.
                                                                                                          2. He has seen “both ends” extensively, the perspective of the implementor and the mathematical perspective of the algorithm and its properties. The result is that his designs end up being extremely pragmatic, especially as to how much complexity you can expect from people to be able to actually follow. A striking example is probably the Poly1305 paper, where it’s clear that he thinks both in “how would an implementation actually go about executing this” and in mathematical terms.
                                                                                                          3. He has a highly analytical mind. His papers (especially his design papers) tend to be very accessible, even for people not deep into the subject matter. This is only possible because he understands what he’s saying at a fundamental level: this allows him to decompose complex thoughts and constructions into simpler parts. As a side effect of this, he will often recognize redundancy or opportunities for simplification.

                                                                                                          This started getting increasingly clear to me as I read the Salsa20 and ChaCha20 papers. The Curve25519 documentation still seems a bit obtuse, which can just be blamed on elliptic curves and their moon math themselves; maybe someday, someone, somewhere will come up with a scheme that’s a bit easier to follow. Looking at post-quantum cryptography efforts, however, the trend seems to be going in the opposite direction.
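                                                                                                          The djbhash construction mentioned in point 1 can be sketched in a few lines of Python. This assumes the variant meant is the well-known djb2, with its classic 5381/33 constants:

```python
def djb2(data: bytes) -> int:
    # Classic djb2: start at 5381, then h = h * 33 + byte for each byte.
    # Masking to 32 bits mimics the unsigned overflow of the usual C version.
    h = 5381
    for b in data:
        h = (h * 33 + b) & 0xFFFFFFFF
    return h

# Determinism and a little spread, nothing more:
assert djb2(b"") == 5381
assert djb2(b"a") == 5381 * 33 + ord("a")
```

                                                                                                          The whole design is two constants and two operations, which is exactly the “simplest possible construction” point above.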

                                                                                                      1. 4

                                                                                                        Better than 14 mutually incompatible implementations of the same standard.

                                                                                                        1. 7

                                                                                                          Yeah, but you can solve that with a make file. ;)

                                                                                                        2. 5

                                                                                                          Sure, and we should be okay with that. Competition is healthy.

                                                                                                          1. 4

                                                                                                            In what way do build systems “compete”? The fragmented ecosystem of open source build systems appears nothing like a market to me, it’s really strange to ascribe the ideals of markets onto that ecosystem, especially when people just use/build the toolchain that makes them happy and nearly never worry about other toolchains. There’s no real social impetus between each system.

                                                                                                            1. 1

                                                                                                              They compete for your attention; that’s really the point of writing a “new, improved” build system. It doesn’t have anything to do with market economics: aside from the concept being relevant in both, competition is not something exclusive to markets.

                                                                                                              1. 2

                                                                                                                If nobody is paying attention, what attention are you competing for? What’s the point? How is that good?

                                                                                                                1. 1

                                                                                                                  So why is it healthy?

                                                                                                                  I want competition when the existing solutions are poor. When the existing solutions are good, or even fine, I would much prefer standardization.

                                                                                                                  1. 1

                                                                                                                    It’s healthy because it leads to people implementing solutions that are better than the preexisting solutions. If something becomes standardised across an industry, I think we call that winning…

                                                                                                                    1. 1

                                                                                                                      So if existing solutions are good, but 30 more specialized yet less effective solutions nonetheless pop up, do you consider that winning?

                                                                                                              2. 3

                                                                                                                I’m not saying that it’s bad, just that the way you phrased your comment sounded a lot like “If you have to choose between A and B, take C” for any value of A, B, C.

                                                                                                                But otherwise, I’ve never really recognized any major benefit that GNU Makefiles (since I use those the most) offer over Plan 9’s mk. A quick look at Hume’s paper on the topic didn’t really convince me that it’s so much more advanced, especially considering that GNU Make has extras like functions and Guile integration.

                                                                                                            2. 2

                                                                                                              Could you share a link to it? I tried searching, and all I get are Michael Kors, Mario Kart, and Macedonia-related articles, and some Android build system…

                                                                                                              1. 7

                                                                                                                mk is available in plan9port (my preferred version). There is also a standalone version written in Go, but it is marginally incompatible (it changes the regex syntax and allows extra whitespace). I don’t recommend it (Go regex sux), but I would be fine with it becoming the default.

                                                                                                                1. 2

                                                                                                                  Neat, thanks. Indeed it is way simpler and with clean semantics, which I appreciate a lot. Make has so many special cases that after a month of not using it, I have to reach for documentation to understand even simple things like assignment :/

                                                                                                                  I would note a nice parallel in implementing mk in Go: both assume multiple platforms and are small.

                                                                                                                  1. 2

                                                                                                                    Go regexps are guaranteed not to get stuck in an eternal loop (matching runs in time linear in the input size), which is nice.

                                                                                                                    1. 1

                                                                                                                      It’s a bit sad to reflect that mk already has two incompatible variants, despite being much newer and less adopted than Make.

                                                                                                                      (Not meaning to bash mk specifically here, this is not a make-specific problem as much as a universal problem.)

                                                                                                                      1. 3

                                                                                                                        mk appeared in Unix version 9, more than 30 years ago. Not that much newer :-)

                                                                                                                        1. 1

                                                                                                                          Ah, right. It’s doing OK, then :)

                                                                                                                        2. 2

                                                                                                                          Honestly, I think the developer of the Go version just didn’t want to implement a Plan 9 regex engine (probably the simplest regex syntax I’ve ever used).

                                                                                                                  1. 2

                                                                                                                    Can’t load page over HTTPS :-\

                                                                                                                    1. 1

                                                                                                                      Yeah, I still haven’t got around to setting it up, sorry.

                                                                                                                    1. 4

                                                                                                                      on the off chance that one day the stored passwords are stolen.

                                                                                                                      You mean “even taking into account the almost certainty that your password will already have been stolen a dozen times”.

                                                                                                                      Please keep making sure your stored passwords are hard to crack. Most users still use passwords that are too simple, and reuse the same password in multiple places. A few extra watts are worth a few fewer sad people.

                                                                                                                      1. 2

                                                                                                                        a few fewer sad people.

                                                                                                                        A few less sad people, I believe you meant.

                                                                                                                        1. 3

                                                                                                                          Would the people be less sad, or would there be fewer sad people?
                                                                                                                          Or would there be fewer less sad people? (more sad people or sadder people?) ;)

                                                                                                                          1. 1

                                                                                                                            Thanks, I actually meant “Having fewer sad people is worth a few extra watts”, but I completely botched that sentence.

                                                                                                                        1. 13

                                                                                                                          Okay, the real solution is protocols like SRP or the new OPAQUE draft. The even more real solution is something better than passwords. It’s a shame SQRL did not take off (I’m not aware of any public services using it exactly, but Yandex does support a very very similar but custom scheme). But the push for U2F is very good, push notification confirmations are also not bad…

                                                                                                                          But when you use classic password auth, just use scrypt/argon2; abandoning good password hashes over silly concerns about computation time is not a good idea.
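                                                                                                                          For illustration, a minimal sketch of that classic path using the scrypt that ships in Python’s standard library. The cost parameters here are illustrative, not a vetted recommendation; tune them to your hardware budget:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt=None):
    # A fresh random salt per password defeats precomputed tables.
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def check_password(password: str, salt: bytes, expected: bytes) -> bool:
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(digest, expected)
```

                                                                                                                          The few milliseconds (and watts) this costs per login are the whole defense against offline cracking of a stolen database.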

                                                                                                                          1. 6

                                                                                                                            Okay, the real solution is protocols like SRP or the new OPAQUE draft.

                                                                                                                            You may find this thread on /r/crypto interesting, in particular since some people seem to believe an adjusted B-SPEKE is a better PAKE than OPAQUE or SRP. CC @Loup-Vaillant since you asked about that originally and probably have a somewhat educated opinion by now.

                                                                                                                            If PAKE functions take off, I sincerely hope they won’t require JavaScript in browsers. The NoScript crowd is the one that cares most about security, and thus, ironically, also the one most likely to resist using a JavaScript-based method of authentication. This has already been raised as an issue in the WebAuthn spec, but not yet addressed there.

                                                                                                                            But the push for U2F is very good, push notification confirmations are also not bad…

                                                                                                                            $36 for two Yubico Security Keys (let’s be real, you need two of them, one to use, one in your bank safe in case the first one is lost or breaks) is a non-trivial investment for the masses. Though I suppose Windows Hello (and whatever browser vendors accept from Apple) will help out with adoption. The JavaScript requirement is still iffy.

                                                                                                                            1. 4

                                                                                                                              (Mentioning me didn’t trigger any notification like replies do…)

                                                                                                                              As far as I can tell, the only way to avoid having the server perform a slow hash is client-side computation. On the web, that means JavaScript, WebAssembly, or some standard added to HTML itself. No way around it. Personally, I think using JavaScript in this case would be justified. It sucks, but good PAKEs have advantages that benefit the user directly, such as not giving away their password to the server.

                                                                                                                              The (modified) B-SPEKE that was proposed on the thread I started on /r/crypto is excellent. I’m sold. The biggest advantage over OPAQUE is that it doesn’t require point addition. This means we can use Montgomery curves, which take less code to implement than Edwards curves, without killing efficiency. And I love small crypto libraries (sorry, couldn’t resist). Now it does require some non-trivial primitives:

                                                                                                                              • Scalar multiplication (which you need for key exchange anyway)
                                                                                                                              • Hash to point (which you have if you use Elligator2 to hide the fact that you’re transmitting a public key)
                                                                                                                              • Inversion (modulo the order of the curve), for blinding. Not needed elsewhere, but fairly straightforward.

                                                                                                                              I personally plan to add it to Monocypher.
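                                                                                                                              Of those, the inversion really is straightforward. A Python sketch, assuming the modulus meant is L, the prime order of Curve25519’s main subgroup:

```python
# L is the prime order of Curve25519's base-point subgroup (from RFC 7748).
L = 2**252 + 27742317777372353535851937790883648493

def invert(x: int) -> int:
    # Since L is prime, Fermat's little theorem gives x^(L-2) = x^-1 (mod L).
    # A real implementation would do this in constant time over fixed-width limbs.
    return pow(x, L - 2, L)

# Blinding round-trips: multiplying by the inverse undoes the blinding factor.
blinding = 123456789
assert (blinding * invert(blinding)) % L == 1
```

                                                                                                                              One fixed-window exponentiation, nothing a crypto library doesn’t already know how to do.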

                                                                                                                              1. 3

                                                                                                                                some standard added to HTML itself

                                                                                                                                Or to HTTP instead! It would be awesome if HTTP Authentication supported one of these modern PAKEs in addition to Basic and Digest.

                                                                                                                              2. 1

                                                                                                                                The NoScript crowd

                                                                                                                                Does it really exist anymore? Do people still try to disable all JS? (heck, back when the NoScript addon was a thing, you’d usually configure it to only block 3rd party scripts or only block everything on random blogs and stuff where you don’t ever log in)

                                                                                                                                There’s a simple solution for the hypothetical “you’re stuck on an island with w3m” situation:

                                                                                                                                  <b>WARNING WARNING WARNING you have JS disabled!
                                                                                                                                  this fallback form is reduced security
                                                                                                                                  only use if stuck on an island without a JS capable browser</b>
                                                                                                                                  <form action="/login-legacy-style-with-the-pake-client-on-the-server">…</form>
                                                                                                                                1. 2

                                                                                                                                  These people do still exist, but they’re very rare. More likely reasons for (transient) lack of JavaScript execution are enumerated in: https://kryogenix.org/code/browser/everyonehasjs.html


                                                                                                                                  That would run contrary to server relief (since bad actors could stress the server again through that fallback).

                                                                                                                                  1. 2

                                                                                                                                    server relief

                                                                                                                                    Just rate limit it. I honestly haven’t heard concerns about “server relief” from anyone who actually runs scrypt/etc :D

                                                                                                                                    Also isn’t PAKE client side lighter than scrypt/etc?
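                                                                                                                                    For what it’s worth, the rate limiting could be as simple as a per-client token bucket. A rough sketch; the class and the numbers are made up for illustration:

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity`, refilling `rate` tokens per second."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# e.g. at most 3 expensive login attempts, refilled at one every 10 seconds:
bucket = TokenBucket(rate=0.1, capacity=3)
```

                                                                                                                                    Keyed per account or per IP, this caps how often an attacker can make the server run the expensive hash.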

                                                                                                                              3. 3

                                                                                                                                The even more real solution is something better than passwords. It’s a shame SQRL did not take off (I’m not aware of any public services using it exactly, but Yandex does support a very very similar but custom scheme).

                                                                                                                                We find it amazing that only 20 years ago, there was very little encryption on most web sites. Most of the time, the only pages anyone bothered to encrypt in-transit were credit card forms and often not even then. What fools we were! I feel like 20 years from now, we will look back and shake our heads with a sensible chuckle and wonder how anyone was ever expected to remember one long high-entropy password, let alone dozens at a time.

                                                                                                                                FWIW, SQRL is not dead, it’s just now finally being considered “done” by its creator. The reference implementation and docs are done and Steve Gibson is traveling and doing talks about it. I believe his intent is to hand off maintenance and further development to the SQRL community so he can get back to working on things that make him money.

                                                                                                                                1. 1

                                                                                                                                  Another approach from the engineering side of things (rather than using more advanced crypto, a la OPAQUE) is to use something like Tidas which effectively just takes the iOS password manager out of the loop and lets you auth directly with public key authentication using touchID/faceID.

                                                                                                                                1. 2

                                                                                                                                  I get this is a contrived example, but it’s still a really bad example. The proper way to make this class testable is not to inject dependencies just so you can neuter the side effects. The proper way is to isolate the local computation.

                                                                                                                                  public static int orderCost(List<OrderItem> items) {
                                                                                                                                      var totalCost = 0;
                                                                                                                                      for (var item : items) {
                                                                                                                                          totalCost += item.cost * item.amount;
                                                                                                                                      }
                                                                                                                                      return totalCost;
                                                                                                                                  }

                                                                                                                                  public static void acceptOrder(Customer customer, List<OrderItem> items) {
                                                                                                                                      // Purely functional computation. Easily tested separately.
                                                                                                                                      var totalCost    = orderCost(items);
                                                                                                                                      var mailContents = formulateOrderAcceptedMail(customer, items);
                                                                                                                                      // Side effects. Simple enough to be tested in one go.
                                                                                                                                      for (var item : items) {
                                                                                                                                          Supply.reduce(item.id, item.amount);
                                                                                                                                      }
                                                                                                                                      Bank.chargeMoney(customer, totalCost);
                                                                                                                                      Mailer.sendMail(customer, mailContents);
                                                                                                                                  }

                                                                                                                                  Now the complicated stuff is isolated in a way that you can easily test, and once you’re confident it works, you just use it to perform the side effects you want (charge money, and send the mail). Why would you inject any dependency?

                                                                                                                                  Now you may still want to do a dry run. And for that, you will want to disable any actual charging of money or actual sending of mail. But then I think this class is the wrong place to inject dependencies. Have the outer system disable the required functionality (use a mock server), or just test with a fake user whose mailbox and bank account you control.

                                                                                                                                  The first step remains: separate pure computation from side effects. It’s more readable and easier to test that way. It also helps portability, for programs that need it: the separation gathers the effects in fewer places, which means gathering the platform-specific stuff in fewer places as well.

                                                                                                                                  1. 4

                                                                                                                                    So, many people are spending a huge effort to make C++ better. I’m not sure this sounds as good as it is meant to sound. Also, C++ evolves quickly, giving you new advanced features every few years, while keeping backwards compatibility. I’m really not sure this sounds good either.

                                                                                                                                    My takeaway is that C++ is huge, and all the effort going around it is a sign that it requires that kind of effort. The vibe I got from this post was that it feels good to be part of a big community doing big things. Sure thing. But one thing that definitely does not feel good is huge artefacts. The C++ specification is at least an order of magnitude bigger than it would need to be if the language was invented right now. That language sits on a huge pile of legacy code and languages, starting with K&R C. It inherited almost all of its mistakes, forcing the community to spend herculean effort to compensate for them. I commend the effort, but I do condemn the need.

                                                                                                                                    C++ is best at one thing: using existing C++ code. The other niches have better alternatives.

                                                                                                                                    I still use Qt to write GUI applications, mind you. Because Qt is a huge freaking useful pile of code, I’d be stupid to ignore it. At the same time, let’s not forget that’s how path dependence happens. If we were to rewrite Qt from scratch now (will never happen, I know), C++ would likely not be the best choice.

                                                                                                                                    1. 19

                                                                                                                                      I’m going to address the title directly:

                                                                                                                                      What can software authors do to help Linux distributions and BSDs package their software?

Step one is to actually build your software on more than Linux. Shake out the Linuxisms by building on the BSDs. This will uncover often-trivial but common issues like:

• Assuming make is GNU make
                                                                                                                                      • Using behaviours of GNU sed
• Assuming /bin/sh is bash
• Hardcoding paths to tools that live in different locations on BSD, for example /usr/bin vs. /usr/local/bin
• Relying on systemd being present. Consider OpenRC, runit, and the BSD init systems.
• Relying on Docker for development workflows. Instead, document the process for when Docker is not an option.
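To make the sed bullet concrete: GNU sed accepts `-i` with no argument, while BSD sed requires a (possibly empty) backup suffix, so in-place edits are a classic portability trap. One way out (file name illustrative) is to avoid the flag entirely:

```shell
# Non-portable: `sed -i 's/foo/bar/' demo.txt` works on GNU sed
# but BSD sed reads 's/foo/bar/' as the backup suffix.
# Portable: write to a temporary file and move it into place.
printf 'foo\n' > demo.txt
sed 's/foo/bar/' demo.txt > demo.txt.new && mv demo.txt.new demo.txt
cat demo.txt   # bar
```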
                                                                                                                                      1. 3

                                                                                                                                        Assuming make Is GNU make

I wrote a crypto library. Single file, zero dependencies. Not even libc. I do need to perform some tests, though. And even for such a simple thing, not assuming GNU make is just too hard. How come the following does not even work?

                                                                                                                                        test.out : lib/test.o  lib/monocypher.o lib/sha512.o
                                                                                                                                        speed.out: lib/speed.o lib/monocypher.o lib/sha512.o
                                                                                                                                        test.out speed.out:
                                                                                                                                        	$(CC) $(CFLAGS) -I src -I src/optional -o $@ $^

                                                                                                                                        If I recall correctly, $^ is not available on good old obsolete pmake. So what do I do, put every single dependency list in a variable?

                                                                                                                                        Another little hassle:

                                                                                                                                        ifeq ($(findstring -DED25519_SHA512, $(CFLAGS)),)

                                                                                                                                        Depending on that pre-processor flag, I need to include the optional code… or not. Is there any “portable” way to do so?

                                                                                                                                        Sure, assuming GNU everywhere is not a good idea. But some GNU extensions are just such a no brainer. Why don’t BSD tools adopt some of them? Why not even $^?
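For what it’s worth, the answer to the rhetorical question above seems to be yes: since POSIX make has no `$^`, the portable fallback is to name each prerequisite list in a variable, at the cost of some repetition. A sketch of the same rules (assuming the layout from the snippet above):

```make
TEST_OBJ  = lib/test.o  lib/monocypher.o lib/sha512.o
SPEED_OBJ = lib/speed.o lib/monocypher.o lib/sha512.o

# Each rule repeats its variable instead of using $^,
# which POSIX make does not define.
test.out: $(TEST_OBJ)
	$(CC) $(CFLAGS) -I src -I src/optional -o test.out $(TEST_OBJ)
speed.out: $(SPEED_OBJ)
	$(CC) $(CFLAGS) -I src -I src/optional -o speed.out $(SPEED_OBJ)
```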

                                                                                                                                        1. 6

Feel free to assume GNU make (IMO); just make sure your sub-invocations use $(MAKE) instead of “make”.

                                                                                                                                          1. 3

                                                                                                                                            That could work indeed. My current makefiles aren’t recursive, but I’ll keep that in mind, thanks.

                                                                                                                                            1. 2

                                                                                                                                              Same with bash instead of sh: it’s fine to use bash, just be conscious about it (“I want bash feature X”) and use #!/usr/bin/env bash. The problem is assuming /bin/sh is /bin/bash, which can lead to some rather confusing errors.

                                                                                                                                            2. 3

So my suggestion wasn’t to not use GNU make; it was to not assume make == GNU make. What this means is: if building requires GNU make, then call that out explicitly in the build instructions. This makes it easier for packagers to know they need to add a dependency on gmake.

If there are scripts or other automation that call make, then allow the name of the command to be easily overridden, or check for gmake and use that if found, rather than calling make directly and assuming it’s GNU make.
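That check can be a one-liner in the driving script. A sketch (the `MAKE` variable name and the `./build.sh` caller are illustrative):

```shell
# Prefer gmake when it exists, otherwise fall back to plain make.
# Packagers can still override the choice, e.g. `MAKE=bmake ./build.sh`.
MAKE="${MAKE:-$(command -v gmake || command -v make)}"
echo "using: $MAKE"
```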

                                                                                                                                            3. 1

Why would any generic userland application want to build beyond Linux environments?! E.g., are Slack or Skype actively doing this? If anything, I would assume ensuring my application builds against legacy Linux build tools (and yes, even assuming GNU make) is a good thing…

To ask the question another way: what segment of my user base is BSD based? I suppose you’re answering with respect to the BSD-adoption portion of the parent question. I guess my own comment is that unless one’s software application is massively popular, all the genericity considerations of the build tooling you’ve described sound like massive overkill.

                                                                                                                                              1. 4

                                                                                                                                                I think if you take this argument one step further, you end up building only for Windows. That’s not a fun world. Would we then just invest a bunch of effort into Wine? It’s what we used to do.

                                                                                                                                                Portability allows for using your favorite platform. It’s something we all have valued from time to time.

                                                                                                                                                If you make the right choices, you can develop very portable software these days in most languages. So, the way I read it, learning how to make those choices is what the OP is suggesting.

                                                                                                                                            1. 5

I’ve read the manual; this is fantastic. At last, a formalism I can use, provided the engine underneath doesn’t explode (combinatorially). Nothing like ProVerif, or even Tamarin. Keep up the good work, we need it.

I have a question regarding the Monokex analysis I’m about to try: Monokex doesn’t use AEAD. Instead, it mixes the transcript (and the DH shared secrets) into a hash, then uses an HKDF-like function to generate a tag (and a new hash, independent from the tag). Verification is done by checking that the received tag is the same as the generated tag.

                                                                                                                                              • Is there a way to check that what we receive in a message and what we compute locally are equal?
                                                                                                                                              • If not, could we cheat by using AEAD somehow?
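My reading of the scheme described above, as a rough sketch: BLAKE2b, the function names, and the 32/32 split are all stand-ins, not Monokex’s actual construction.

```python
import hashlib
import hmac

def absorb(state: bytes, data: bytes) -> bytes:
    """Mix a message or DH shared secret into the transcript hash."""
    return hashlib.blake2b(state + data, digest_size=64).digest()

def split(state: bytes):
    """HKDF-like step: derive a tag and a fresh, independent hash."""
    out = hashlib.blake2b(state, digest_size=64).digest()
    return out[:32], out[32:]

def verify(received_tag: bytes, state: bytes) -> bool:
    """Check the received tag against the locally computed one."""
    tag, _ = split(state)
    return hmac.compare_digest(received_tag, tag)
```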
                                                                                                                                              1. 4

                                                                                                                                                In the interest of fostering discussion in the appropriate venue, I’d rather you post what you just wrote on the Verifpal Mailing List. I’ll respond there.

                                                                                                                                                1. 2


                                                                                                                                              1. 5

Neat, yet another crypto verification tool I can try for Monokex. I hope this one will work out. The others I’ve tried were too unwieldy.

                                                                                                                                                1. 3

                                                                                                                                                  Thanks, Loup. Monokex is awesome work and I hope I’ll be able to assist with your usage of Verifpal.

                                                                                                                                                  1. 3

                                                                                                                                                    I’ll hold you to that, thanks. I’ll have a stab at it the next few days.

                                                                                                                                                    1. 2

                                                                                                                                                      Don’t forget to check out the User Manual, which contains helpful information on getting started.

                                                                                                                                                      1. 2

                                                                                                                                                        helpful information and a cool manga, starting on page 13 ;)

                                                                                                                                                1. 1

                                                                                                                                                  So, once left recursion is handled, can it handle both left and right recursion? For example, given a grammar

                                                                                                                                                  start := expr
                                                                                                                                                  expr := expr + expr
                                                                                                                                                           | 1

                                                                                                                                                  Would it be able to parse 1+1 ?

                                                                                                                                                  1. 2

                                                                                                                                                    You can sidestep the whole issue by using Earley parsing underneath. Then you implement a PEG syntax on top. Earley parsing supports prioritised choice, so you can keep that. It only works for context free grammars though. Real PEG is syntax sugar for recursive descent, and as such can easily handle some contextual grammars.
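Earley handles both left and right recursion out of the box. As a rough illustration (hypothetical code, no Leo optimisation, no nullable rules, tokens are single characters), a minimal recognizer for the grammar in the parent comment:

```python
# Grammar from the parent comment: start := expr ; expr := expr + expr | 1
GRAMMAR = {
    "start": [("expr",)],
    "expr": [("expr", "+", "expr"), ("1",)],
}

def earley_recognize(grammar, start, tokens):
    # Chart states are (head, body, dot, origin).
    chart = [set() for _ in range(len(tokens) + 1)]
    chart[0] = {(start, body, 0, 0) for body in grammar[start]}
    for i in range(len(tokens) + 1):
        todo = list(chart[i])
        while todo:
            head, body, dot, origin = todo.pop()
            if dot < len(body):
                sym = body[dot]
                if sym in grammar:                          # predict
                    for prod in grammar[sym]:
                        state = (sym, prod, 0, i)
                        if state not in chart[i]:
                            chart[i].add(state)
                            todo.append(state)
                elif i < len(tokens) and tokens[i] == sym:  # scan
                    chart[i + 1].add((head, body, dot + 1, origin))
            else:                                           # complete
                for h, b, d, o in list(chart[origin]):
                    if d < len(b) and b[d] == head:
                        state = (h, b, d + 1, o)
                        if state not in chart[i]:
                            chart[i].add(state)
                            todo.append(state)
    return any(h == start and d == len(b) and o == 0
               for h, b, d, o in chart[len(tokens)])
```

So `1+1` (and the ambiguous `1+1+1`) are recognized without rewriting the grammar, left or right recursion notwithstanding.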

                                                                                                                                                    1. 2

                                                                                                                                                      Thank you for the great tutorial. It helped us quite a bit when we were implementing our own tutorial for Earley parsing, with Leo’s fix for right recursion. (notebook here).

                                                                                                                                                      1. 2

                                                                                                                                                        You’ve got a tutorial on Leo’s fix? I’m going to check that out as soon as I resume my parsing work (the last couple years I’ve been side tracked by cryptography). Bookmarked, thanks a lot.

                                                                                                                                                  1. 4

                                                                                                                                                    as we move towards computers which can use whatever endianness is appropriate for the situation

                                                                                                                                                    What are the appropriate situations when you want to run your whole system in big endian? It might be my lack of imagination, but other than compatibility with buggy C programs that assume big endian, I can’t think of any. It would be nice to leave this part of computer history behind, like 36-bit words and ones’ complement arithmetic.

                                                                                                                                                    1. 12

                                                                                                                                                      I’ve been running big endian workstations for years. It’s slightly faster at network processing, and it’s a whole lot easier to read coredumps and work with low-level structures. Now that modern POWER workstations exist, I no longer even have an x86 on my desk.

                                                                                                                                                      Many formats are big-endian and that won’t change. TIFF, JPEG, ICC colour profiles, TCP, etc…

                                                                                                                                                      Ideally, higher level languages would make this irrelevant to most people, so we could just run everything in BE and nobody would notice except the people doing system-level work where it’s relevant. Unfortunately, we haven’t gotten there yet. So it’s best for user freedom to let the user decide what suits their workload.

                                                                                                                                                      1. 5

                                                                                                                                                        Modern x86 has a special instruction for byte-swapping moves, MOVBE: https://godbolt.org/z/juJ6VL

                                                                                                                                                        I disagree that low level languages are a problem when it comes to this. Even higher level languages need to deal with endianness when working with those formats you mentioned, so we’ll never be rid of it on that level. On the other hand, it’s possible to do it properly in low level languages as well; don’t read u16/u32/u64 data directly and avoid ntohl()/htonl(), etc. The C function in my link works on both big and little endian systems because it expresses the desired result without relying on the native endianness.

                                                                                                                                                        1. 3

I wish more people knew the proper ways to do that in C.

                                                                                                                                                          1. 5

Simple: “reading several bytes as if they were a 32-bit integer is implementation-defined (or even undefined if your read is not aligned). Now here’s the file format specification; go figure out a way to write a portable program that reads it. #ifdef is not allowed.”

                                                                                                                                                            From there, reading bytes one by one and shift/add them is pretty obvious.
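That byte-by-byte approach can be sketched as follows (a generic illustration, not the function from the linked Godbolt):

```c
#include <assert.h>
#include <stdint.h>

/* Read a 32-bit little-endian value byte by byte. The result is
   defined by arithmetic, not by how the host lays out integers in
   memory, so it works unchanged on big- and little-endian systems,
   with no #ifdef and no ntohl()-style conversions. */
uint32_t load32_le(const uint8_t *p)
{
    return (uint32_t)p[0]
         | ((uint32_t)p[1] << 8)
         | ((uint32_t)p[2] << 16)
         | ((uint32_t)p[3] << 24);
}
```

For a big-endian format you reverse the shifts; either way the compiler typically recognises the pattern and emits a single (possibly byte-swapping) load.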

                                                                                                                                                        2. 5

                                                                                                                                                          I’ve been running big endian workstations for years.

                                                                                                                                                          I’m curious about your setup? What machines are you running with MIPS? I guess I haven’t really looked into “alternative architectures” since the early 2000s, so I’m quite intrigued at what people are actually running these days.

                                                                                                                                                          1. 3

                                                                                                                                                            My internal and external routers are both MIPS BE.

                                                                                                                                                            My main workstation is a Raptor Talos II, POWER9 in BE mode. Bedroom PC is a G5.

                                                                                                                                                            My media computer is an old Mac mini G4. I haven’t felt the need to replace it.

                                                                                                                                                            1. 3

                                                                                                                                                              I suspected that you had a Talos machine. The routers make total sense, too. Thanks for taking the time to reply!

                                                                                                                                                              1. 1

                                                                                                                                                                My internal and external routers are both MIPS BE.

                                                                                                                                                                May I ask what the make and model codes are?

                                                                                                                                                                1. 3

                                                                                                                                                                  Netgear WNR3500L.

                                                                                                                                                                  1. 2

                                                                                                                                                                    Thank you @awilfox!

                                                                                                                                                            2. 3

                                                                                                                                                              Many formats are big-endian and that won’t change. TIFF, JPEG, ICC colour profiles, TCP, etc…

All those standards use big-endian for various reasons related to hardware, down to the chip level.

• IP, for example, is used for routing based on prefixes, where you only look at the first few bits to decide which port a packet of data should exit through. In more than 99.9% of cases, it simply does not make sense to look at the low end of the numbers.
• TIFF, JPEG, and ICC colour profiles all deal with pixels and some form of light sensor, which is connected to some form of analog-to-digital converter circuit. Such a circuit is essentially a string of resistors interlaced with digital comparators that output 1 if the input voltage is above a certain threshold. If the first half of all comparators returns 1, you switch on the MSB; if not, you switch it off. The MSB (which comes first in big-endian notation) denotes 50% of the input signal’s strength and is therefore more important to “get right” than the lower bits.

So why is little endian winning on modern CPUs? Well, that’s because we have different concerns when running computer programs, in which a pattern like this

                                                                                                                                                              for(int i=0; i<length; i++) {}

                                                                                                                                                              is common.

It would make no sense to start comparing numbers bitwise from the high end, because that almost never changes. The low end, however, changes all the time. This makes it easier to put the low-end bytes up front and only check the higher bytes when a low-end byte has overflowed.

                                                                                                                                                              So it’s a story about: Different concerns -> different hardware.

As for Debian: they must have looked through the results of their package popularity contest and judged that the amount of work required to maintain the MIPS architecture cannot be justified by the small number of users who use it.

                                                                                                                                                              This is also why I always opt for yes when I’m asked to vote in the popcon. Because they can’t see you if you don’t vote!

                                                                                                                                                              1. 2

                                                                                                                                                                Ideally, higher level languages would make this irrelevant to most people

                                                                                                                                                                See Erlang binary patterns. It provides what you want.

                                                                                                                                                                1. 1

                                                                                                                                                                  It’s slightly faster at network processing

                                                                                                                                                                  New protocols these days tend to have a little endian wire format. TCP/IP is still big endian, but whatever lies on top of it might not be. Maybe that explains the rise of dual endian machines: little endian has won, but some support for big endian still comes in handy.

                                                                                                                                                                  1. 1

Yes and no. The Zcash algorithm (used by Ethereum et al.) always serialises numbers to BE. But some LE protocols and formats exist. I think the real winner is not LE, nor BE, but systems that let you use both.

                                                                                                                                                                    1. 4

And everything designed by DJB is little endian: Salsa/ChaCha, Poly1305, Curve25519… And then there’s BLAKE/BLAKE2, Argon2, and more. I mean, the user hardly cares about the endianness of those primitives (it’s mostly about mangling bytes), but their underlying structure is clearly little endian. Older stuff like SHA-2 is still big endian, though.

                                                                                                                                                                      Now sure, we still see some big endian stuff. The so called “network byte order” is far from dead. Hence big endian support in otherwise little endian systems. But I think it is fair to say that big endian by default is mostly extinct by now. New processors are little endian first, they just have additional support for big endian formats.

And if you were to design a highly constrained microcontroller now (one that must not cost more than a few cents), and your instruction set is not big enough to support both endiannesses efficiently, which endianness would you choose? Personally, I would think very hard before settling on big endian.

                                                                                                                                                              1. 3

                                                                                                                                                                Agree with the analysis and as the other comment mentions, there are clear reasons for this.

                                                                                                                                                                 “Number of people killed by bad software” is an interesting one. There are certainly the classic stories of things gone wrong (Therac-25 comes to mind), but I imagine that a software failure often occurs invisibly to the people whose lives depend on it, hidden amongst a tumult of other failures.

                                                                                                                                                                 The Boeing MAX planes also come to mind. Certainly, compared to other things that can kill us, software is a negligible slice, but it’s not zero.

                                                                                                                                                                 As we continue to put more software into the world and depend on it for more and more, the number can only go up. I wonder, though, if it will rise disproportionately with the spread of new software systems: will we care less about safety as we grow?

                                                                                                                                                                1. 3

                                                                                                                                                                  The Boeing MAX planes also come to mind.

                                                                                                                                                                  I believe that wasn’t really a software failure. Oh, it was definitely an engineering clusterfuck because they wanted to save money on re-certification:

                                                                                                                                                                  • Aerodynamically unstable design (so they could make bigger, more fuel efficient reactors).
                                                                                                                                                                  • Botched redundancy (the left computer used the left sensor, the right computer used the right sensor, with no way to tell which computer is right when one sensor inevitably goes south).
                                                                                                                                                                  • Limited pilot training that swept the differences of the MAX under the carpet.
                                                                                                                                                                  • Difficult-to-override automatic controls (the pilots basically had to lift weights to be able to counter the nosedive).
                                                                                                                                                                  • […]

                                                                                                                                                                  Forgot where I saw it, but a pilot wrote a painstakingly detailed review of the debacle. If someone can find the link…

                                                                                                                                                                  That said, whether it was a software failure or something else doesn’t really matter. We make stuff, and bad things happen when it breaks. Software shouldn’t be treated any differently. (And in the case of the MAX, they certainly expected software to compensate for the physical shortcomings of the plane. Too bad it didn’t, I guess…)

                                                                                                                                                                  1. 3

                                                                                                                                                                    so they could make bigger, more fuel efficient reactors

                                                                                                                                                                    I think you just meant to write “engines” here.

                                                                                                                                                                    1. 2

                                                                                                                                                                      Crap, I did.

                                                                                                                                                                      1. 1

                                                                                                                                                                        I figured you did :D An older name for a jet plane in Swedish is “reaplan” (“plan” is plane, and “rea” is from “reaktionsmotor”) and it has the same root.

                                                                                                                                                                    2. 1

                                                                                                                                                                      Fair. I was considering the software overcompensation for a physical failure as a software failure, but as mentioned, “tumult of other failures” might be over-blaming the software.

                                                                                                                                                                      1. 2

                                                                                                                                                                        Note: I believe the software people ought to have noticed this: each computer relied on one sensor, and you have to resolve the conflict whenever the two disagree, with only two systems, not three as is common in vote-based redundancy systems. Actually, I’m pretty sure a number of engineers, software or otherwise, did notice something fishy was going on. They probably told their management too. Yet someone somewhere still decided to go through with it.

                                                                                                                                                                  1. 7

                                                                                                                                                                    I find RISC misguided. The RISC design was created because C compilers were stupid and couldn’t take advantage of complex instructions, and so a stupid machine was created. The canonical RISC, MIPS, is ugly and wasteful, with its branch-delay slots and large instructions that do little.

                                                                                                                                                                    RISC, no different than UNIX, claims simplicity and small size, but accomplishes neither and is worse than some ostensibly more complex designs.

                                                                                                                                                                    This isn’t to write all RISC designs are bad; SuperH is nice from what I’ve seen, having many neat addressing modes; the NEC V850 is also interesting with its wider variety of instruction types and bitstring instructions; RISC-V is certainly a better RISC than many, but that still doesn’t save it from the rather fundamental failings of RISC, such as its arithmetic instructions designed for C.

                                                                                                                                                                      I think, rather than make machines designed for executing C, the “future of computing” is going to be in specialized machines that have more complex instructions, so CISC. I’ve read of IBM mainframes that have components dedicated to accepting and parsing XML to pass the machine representation to a more general component; if you had garbage collection, bounds checking, and type checking in hardware, you’d have fewer and smaller instructions that achieved just as much.

                                                                                                                                                                      The Mill architecture is neat, from what I’ve seen, but I’ve not seen much and don’t want to watch a YouTube video just to find out. So, I’m rather staunchly committed to CISC, and extremely focused CISC, as the future of things, but we’ll see. Maybe the future really is making faster PDP-11s forever, as awful as that seems.

                                                                                                                                                                    1. 6

                                                                                                                                                                      “The RISC design was created because C compilers were stupid and couldn’t take advantage of complex instructions”

                                                                                                                                                                        No. Try Hennessy/Patterson, Computer Architecture, for a detailed explanation of the design approach. I