1. 53
  1. 6

    Really good article, although there was a lengthy discussion in #lobsters about what constitutes portability :P

    Short version: is it now more portable because it’s easier on the (fewer) archs Rust supports, or is it less portable because C as a target would support more platforms, with more work?

    1. 2

      If you include a language with a single implementation within your definition of portability, then this definitely is more portable than C.

      Still, it goes without saying that we should simplify build systems in general. A simple Makefile and a config.mk file for optional tweaks that don’t need to be done on 99% of systems suffice in most cases.
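
      As a rough sketch, that layout might look like this (file names, sources, and flags are all illustrative, not taken from any real project; recipe lines start with a tab):

      ```make
      # Makefile: defaults live here; config.mk overrides them on the
      # rare system that actually needs a tweak.
      include config.mk

      SRC = main.c util.c
      OBJ = $(SRC:.c=.o)

      prog: $(OBJ)
      	$(CC) $(LDFLAGS) -o $@ $(OBJ)

      .c.o:
      	$(CC) $(CFLAGS) -c -o $@ $<

      clean:
      	rm -f prog $(OBJ)

      # config.mk might contain nothing more than:
      #   CC      = cc
      #   CFLAGS  = -O2 -Wall
      #   LDFLAGS =
      ```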

      1. 25

        Have you noticed that there is no portable way to print “Hello World” in C?

        There is a solid standard for source code that passively promises it could do it when built, but you can’t run this source code in a standard, portable way. I see people treat gcc on POSIX as this sort of standard, but it’s not. There are other compilers, other platforms, and even within gcc-flag-copying compilers on close-enough-to-POSIX systems it’s endless papercuts.
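
        One concrete example of such a papercut (illustrative, not from this thread): strdup() is POSIX, not ISO C, so a strict gcc -std=c11 build on glibc warns about an implicit declaration unless you opt in with a feature-test macro:

        ```c
        /* Hypothetical example: strdup() comes from POSIX.1-2008, not ISO C,
         * so glibc hides its declaration under a strict -std=c11 build unless
         * a feature-test macro asks for it. */
        #define _POSIX_C_SOURCE 200809L   /* remove this and gcc -std=c11 warns */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(void) {
            char *copy = strdup("hello"); /* declared only thanks to the macro */
            if (copy == NULL)
                return 1;
            printf("%s\n", copy);
            free(copy);
            return 0;
        }
        ```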

        I had a simple Makefile with config.mk, and ended up in a situation where I couldn’t build my own project on my own machine. After days of debugging I think the simplest solution would be to recompile GCC from scratch with -mno-outline-atomics… but I just don’t want to deal with these things. In C everything is theoretically possible, but nothing is easy, nothing is reliable.

        I’m completely unmoved by the theoretical possibility of porting my project to DEC Alpha or a DSP with 12-bit bytes, when I can’t even run it on macOS and Windows.

        1. 1

          I know about C’s inconsistencies and quirks, but it’s reasonable to just target a POSIX-compliant system: POSIX is a standard and clearly defines the interfaces you can use to print “Hello World”.

          I do not depend on the mercy of the Rust developers to support a certain platform, and when I check the officially supported Rust platforms I don’t see that many. Even Microsoft itself acknowledges that by offering WSL.

          1. 15

            Your position is odd. On one hand you seem to value support for all platforms, even ones too obscure to be in LLVM. But at the same time you say it’s reasonable to just drop the biggest desktop platform.

            Porting Rust needs an LLVM back-end and a libc, which is pretty much the same as porting Clang. You don’t need permission from an ISO committee to do this. LLVM and Rust are open. In practice the list includes every modern platform people care to support. There’s also a GCC back-end in the works, so a few dead platforms will get support too.

            1. 2

              I think FRIGN is arguing that targeting POSIX does not mean dropping Windows, because WSL exists. I don’t agree, but it is an arguable position.

              1. 7

                WSL is Linux. I would not call that targeting the Windows platform.

                1. 1

                  Why not? WSL is part of Windows. What do you gain by targeting non-WSL Windows? (I think there is a gain, I am just curious what you think it is.) Is it that WSL is an additional install? (JVM is an additional install too, and frankly WSL is easier to install than JVM.) Is it support for versions older than Windows 10? Is it overhead, which I think is tiny?

                  Would you stop targeting Windows if all of the following happens: 1) WSL becomes part of default install 2) all versions of Windows not supporting WSL become irrelevant 3) performance overhead improves enough to be immaterial? What else would you want?

                  1. 13

                    Windows can also run Web apps and Android applications, or even emulate an old Mac, but in a discussion of portability I think it’s important to make a distinction between getting software to run somehow vs being able to directly use the native vendor-recommended APIs of the platform.

                    Is Visual Basic 6 portable and can target Linux? There is WINE after all, and it’s probably as easy to install as WSL.

                    The WSL tangent feels like I’m saying “I can’t eat soup with a fork”, and you reply “You can! If you freeze it solid or if you spin the fork really, really fast!”

                    1. 2

                      I also think depending on Wine instead of porting to Linux is a defensible position.

                    2. 4

                      Why not? WSL is part of Windows. What do you gain by targeting non-WSL Windows?

                      I worked at a place that wouldn’t allow WSL because the security team couldn’t install the network monitoring tool they used. Ultimately, it is an optional component and can be disabled.

                      1. 3

                        Why not? WSL is part of Windows. What do you gain by targeting non-WSL Windows?

                        WSL is a brand name covering two things:

                        • WSL1 uses picoprocesses in the NT kernel with a Linux system call compatibility layer (similar in concept to the *BSD Linux compat layers)
                        • WSL2 just runs a Linux VM with Hyper-V.

                        There is some integration with the host system: a WSL application in either version can access the host filesystem (WSL1 uses filter drivers over NTFS that present POSIX semantics; WSL2 uses 9p over VMBus; both are slow), and they can create pipes with Windows processes. But they can’t use any of the Win32 API surface. This means no GUI interaction and no linking with any Windows DLLs.

                        For something like an image library (as in the article), providing a Linux .so is of no help to someone writing a Windows application. The most they could do with it is write a Linux command-line tool that decodes/encodes an image over a pipe, and then run that in WSL, on the subset of Windows machines that have WSL enabled (unless it’s changed recently, neither WSL1 nor WSL2 is enabled by default).

                2. 3

                  I think it does boil down to the question whether targeting POSIX is reasonable or not. Many people, including myself and the author of this article, find it unreasonable. But I admit it is a defensible position.

              2. 7

                That works unless it involves one of threading, Windows, or cross-compilation. All three work out of the box with Rust. C is more capable, but Rust is more convenient.

                1. 1

                  That’s a fair point, but this only works because there is one single Rust implementation that is ported, and that’s what you depend on.

                  I’m not arguing about the convenience.

                2. 4

                  A simple Makefile and a config.mk file for optional tweaks that don’t need to be done on 99% of systems suffice in most cases.

                  I’d argue the opposite, that cargo new and cargo build suffice in most cases, and you don’t need the capabilities of C most of the time unless you’re doing something weird, or something with hardware.

                  1. 1

                    But think about the massive complexity behind cargo. Why does anyone think it’s reasonable to pull in dozens of online (!) dependencies for trivial code (node.js all over again)? And with Rust you can’t get around this bloat.

                    And I’m not even saying that this was about C vs. Rust. I am aware of the advantages Rust offers.

                    But the bloat is unnecessary. Consider Ada for example, which is way more elegant and provides even more security than Rust.

                    1. 14

                      Cargo is simple technically. Probably simpler than cmake and dpkg.

                      Rust uses many dependencies, but that isn’t necessarily bloat. They’re small focused libraries. Slicing a pizza into 100 slices instead of 6 doesn’t make it any larger. I have 45 transitive dependencies in libimagequant, and together they are smaller than 1 OpenMP dependency I’ve had before.

                      Cargo uses many small deps because it’s so easy and reliable that even trivial deps are worth using. I don’t think pain should be the driving factor in technical decisions: deps in C are a pain whether they’re used justifiably or not. Even despite the dependency friction C has, applications still use many deps. Here’s a recent amusing example of dependency bloat in vim.

                      I’ve considered Ada, but it offers no safety in the presence of dynamic memory allocation without a GC. The usual recommendation is “then just don’t allocate memory dynamically”. That’s what I’m aiming for in C and Rust too, but obviously sometimes I do need to allocate, and then Rust offers safety where Ada doesn’t.

                      1. 6

                        The usual recommendation is “then just don’t allocate memory dynamically”

                        Ada lets you declare pretty much anything in any declaration block, including run-time-sized arrays, functions, variables, tasks, etc., so a lot of the time you’re logically “allocating” for work, but it’s usually stack-allocated, though the compiler can implicitly allocate/free from the heap (IIRC). Being able to return VLAs on the secondary stack also simplifies things a bit, like returning strings. The standard library is pretty extensive too, so you usually just use stuff from there, which will be RAII-controlled.

                        Between RAII via “Controlled types” and limits on where access types (pointers) can be used, I don’t think I’ve ever actually seen raw allocations being passed around. Implicit pass-by-reference when needed, plus the ability to declare different pointer types to the same thing in different places, seems to really cut down on pointer abuse.

                        One of my Ada projects is ~7000 LoC (actual, not comments or blanks) and I have one explicit allocation where I needed polymorphism for something, wrapped in a RAII smart pointer which also allocates a control block.

                        Looking at Alire, ~28600 LoC (actual, not comments or blanks), shows only 1 allocation that doesn’t live for the whole program lifetime, and it is inside an RAII type. If you include all 1554 files of it and its dependencies, Unchecked_Deallocation appears in only 60 of them.

                        I understand the disbelief. I get that it’s weird.

                        (shrug), I dunno, it’s sort of hard to explain, but the language and standard library means you just often don’t need to allocate explicitly.

                      2. 9

                        This is a strawman argument, because Cargo’s package management features aren’t what we were talking about. But you can use Rust without needing to depend on any other packages, in fact you can use it without Cargo at all if you want, just create a Makefile that calls rustc. Saying that you “can’t get around” this bloat is not really factual.
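
                        For example, a minimal Makefile driving rustc directly might be no more than this (file and binary names are made up; the recipe line starts with a tab):

                        ```make
                        # Hypothetical: build one Rust binary with plain make, no Cargo involved.
                        hello: main.rs
                        	rustc --edition 2021 -O -o hello main.rs

                        clean:
                        	rm -f hello
                        ```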

                        1. 1

                          How far is this going to get you when the standard library doesn’t even include the most basic of things?

                          Packagers (rightfully) won’t touch rust-packaging as the majority uses cargo and everything is so scattered. When you are basically forced to pull in external dependencies, it’s reckless, on the other hand, not to rely on cargo.

                          Ada does it much better and actually reflects these things, and C benefits from a very good integration into package management and systems overall.

                          Rust could’ve had the chance to become easily packageable, but it isn’t. So while the “move fast” philosophy definitely helped shape the language, it will lead to long-term instability.

                          1. 12

                            As someone who occasionally packages rust things (for Guix) I don’t really know why everyone thinks it’s so hard. Cargo metadata for every project means you can often automate much of the packaging process that in C I do by hand.

                            1. 2

                              Given you seem to have experience with this, how simple is it to create a self-contained tarball of a program source with all dependencies (as source or binary) that does not require an internet connection to install?

                              1. 14
                                1. 2

                                  Nice, I didn’t know about that one. Thanks!

                                2. 6

                                  Note that this isn’t a requirement for all packaging systems. The FreeBSD ports tree, for example, requires only that you be able to download the sources as a separate step (so that the package builders don’t need network access). There’s infrastructure in the ports tree for getting the source tarballs for cargo packages from crates.io. These are all backed up on the distfiles mirror and the package build machines have access to that cache. All of the FreeBSD packages for Rust things use this infrastructure and are built on machines without an Internet connection.

                                  1. 1

                                    Very interesting, thanks for pointing that out!

                              2. 5

                                Just because you like C and Ada doesn’t mean every other language is terrible.

                                1. 1

                                  I totally agree. For instance I love Julia for numerics.

                      3. 2

                        However, when I changed them to &mut [] slices, it got a speed boost! Could it be thanks to the mythical no-alias guarantee of slices?

                        I’m pretty sure the no-alias optimization is still turned off pending fixing the associated bugs in LLVM codegen.

                        1. 5

                          It’s merged

                          1. 3

                            Oh, nice! For some reason I thought that one had been reverted too, but I guess it’s still going strong. 🎉

                        2. 2

                          There’s no standard for making a “useless” memory read happen without causing a compiler warning.

                          Not in front of a compiler right now, but is this non-standard?

                          // global
                          volatile int foo;
                          foo = *bar;


                          1. 3

                            That also issues a useless write to memory, which is less than ideal. (I suspect the overhead in this case is minimal, but it’s still unnecessary work, which is annoying if nothing else.)

                            1. 1

                              The following seemed to work for me:

                              volatile int foo;
                              1. 1

                                The right way to do it is to cast the pointer to be read to volatile, e.g., int * p = ...; *(volatile int *)p;.
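
                                As a self-contained sketch of that idiom (the function name is mine):

                                ```c
                                #include <stdio.h>

                                /* Perform a load from *p that the compiler must keep: accessing the
                                 * object through a volatile-qualified pointer makes the read itself
                                 * observable behavior, and the (void) cast avoids an unused-value
                                 * warning. */
                                static void force_read(const int *p) {
                                    (void)*(volatile const int *)p;
                                }

                                int main(void) {
                                    int x = 42;
                                    force_read(&x); /* a pure read: no companion write is emitted */
                                    printf("done\n");
                                    return 0;
                                }
                                ```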

                              2. 1

                                It’s a value, not a pointer. I’ve never seen a volatile value.

                                I suppose it could work. My problem was that I did volatile reads, not writes, so the compiler could always complain about the read value being unused (there are ways to pretend you use it, but there’s no guarantee that a smarter compiler won’t see through it). But use of a global volatile could be opaque enough. There’s also a problem that theoretically it’s UB, because it’s an unsynchronized global write.

                                1. 3

                                  Volatile values are quite useful in microprocessors where you need to stay consistent with what the interrupts see.

                              3. 1

                                I feel CMake (and abandoning “simple” makefiles) would have gotten you most of the way there for ease of cross-compiling (which, IMHO, is a false economy, but I digress) and building, while keeping the C side of the portability equation. Of course, it wouldn’t be as much fun, nor give you some of the safety properties.

                                1. 1

                                  I have CMake in the MozJPEG project I maintain, and I’m struggling to make it find and link zlib and libpng properly for macOS (which is supposed to use the dynamic zlib regardless of where it gets libpng from). I get absurd errors like “zlib not found (found version 1.2)”.

                                  Apart from that, I agree – it is probably the most sensible solution for C currently.

                                  1. 1

                                    I agree. Debugging is by far the worst part of CMake, even worse than autotools. config.log is mostly easy to reason with, unless you hit an M4 landmine.