1. 7

    An extreme example of unportable C is the book Mastering C Pointers: Tools for Programming Power, which was castigated recently. To be fair, that book has other flaws beyond merely being in a different camp, and I think those fuel some of the intensity of the passion against it.

    This… rather grossly undersells how much is wrong with that book. The author didn’t understand scope, for crying out loud, and never had a grasp of how C organized memory, even in the high-level handwavy “C Abstract Machine” sense the standard is written to.

    There are better examples of unportable C, such as pretty much any non-trivial C program written for MS-DOS, especially the ones which did things like manually writing to video memory to get the best graphics performance. Of course, pretty much all embedded C would fit here as well, but you’ll actually be able to get and read the source of some of those MS-DOS programs.

    In so doing, the committee had to converge on a computational model that would somehow encompass all targets. This turned out to be quite difficult, because there were a lot of targets out there that would be considered strange and exotic: arithmetic is not even guaranteed to be two's complement (the alternative is one's complement), word sizes need not be a power of two, and so on.

    Another example would be saturation semantics for overflow, as opposed to wraparound. DSPs use saturation semantics, so a signal that goes off the top end of the scale plateaus instead of producing a weird jagged waveform.

    As for the rest, it’s a hard problem. Selectively turning off optimization for specific functions would be useful for some codebases, but aggressive optimization isn’t the only problem here: Optimization doesn’t cause your long type to suddenly be the wrong size to hold a pointer on some machines but not others. Annotating the code with machine-checked assumptions about type size, overflow behavior, and maybe other things would allow intelligent warnings about stupid code, but… well… try to get anyone to do it.

    1. 6

      Re “Mastering C Pointers,” that’s fair. I included it because it’s one of the things that got me thinking about the unportable camp, but I can see how its (agreed, very serious) flaws might detract from the overall argument I’m making and that there might be a better example.

      Re saturating arithmetic, well, Rust has it :)

      1. 2

        My interpretation is that the point of C is that simple C code should lead to simple assembly code. Needing to write SaturatedArithmetic::addWithSaturation(a, b) instead of just a + b in all arithmetic DSP code would be quite annoying, and would simply lead to people using another language.

        You could say ‘oh they should add operator overloading’, but then that contravenes the first point, that simple C code (like a + b) should not hide complex behaviour. The only construct in C that can hide complexity is the function call, which everyone recognises. But if you see some arithmetic, you know it’s just arithmetic.

        1. 1

          You could say ‘oh they should add operator overloading’, but then that contravenes the first point, that simple C code (like a + b) should not hide complex behavior. The only construct in C that can hide complexity is the function call, which everyone recognizes. But if you see some arithmetic, you know it’s just arithmetic.

          Not to mention that not everything can be overloaded, which causes inconsistencies, and some operations in mathematics have operators other than just “+-/*”: the vector dot product “·”, for example. Even if C++ (or any other language) were extended to support more operators, those operators couldn’t be typed without key composition (“shortcuts”), making them almost unwanted. vec_dot() might require more typing, but it’s reachable to everyone, and operators don’t need to have hidden meanings.

          1. 2

            Eh, perl6 seems to do just fine with 60 bajillion operators.

            1. 2

              Perl does have more operators than C, but all of them are operators that can be typed using simple key composition, such as [SHIFT+something]: string concatenation, for example.

              My point, together with what @milesrout said, is that some operators (math operators) aren’t easy to type with just [SHIFT+something]. As a result, operator overloading in languages that offer it will always stay in an unfinished state, because it will only cover those operators that are easily composed.

      2. 1

        Mastering C Pointers: Tools for Programming Power has several four-star reviews on Amazon UK.

        Herbert Schildt’s C: The Complete Reference is often touted as the worst C book ever.

        Perhaps Mastering C Pointers is the worst in its niche (i.e., pointers) and Schildt’s is a more general worst?

        1. 2

          Mastering C Pointers: Tools for Programming Power has several four-star reviews on Amazon uk.

          So? One of the dangers of picking the wrong textbook is thinking it’s great, and using it to evaluate subsequent works in the field, without knowing it’s shit. Per hypothesis, if it’s your first book, you don’t know enough to question it, and if you think it’s teaching you things, those are the things you’ll go on to know, even if they’re the wrong things. It’s a very pernicious bootstrap problem.

          In this case, the book is objectively terrible. Other books being bad doesn’t make it better.

          I do agree that Schildt’s book is also terrible.

      1. 1

        hackernews and reddit, mostly.

        1. 28

          Possibly unpopular opinions (and a large block of text) incoming:

          C++, Go, Swift, D and Rust all fail to adequately replace C for me. When given the choice, I would likely choose to just stick with C (for the moment; I’ll talk about what I’m considering at the end).

          C++ has so much historical baggage and issues, it’s already an immediate turn-off. More than that, it’s explicitly understood that you should carve out a subset of C++ to use as your language because trying to use the whole thing is a mess of sadness (given that it’s massive at this point). I appreciate the goal of zero-cost abstraction and having pleasant defaults in the standard library, but there are just too many problems for me to take it as a serious choice. Plus, I still have to deal with much of the unfortunate UB from C (not all of it, and honestly, I don’t mind UB in some cases; but a lot of the cases of UB in C that just make no reasonable sense come across to C++). It should be noted that I do still consider C++ occasionally in a C-with-templates style, but it’s still definitely not my preference.

          Despite how often people place it here, I do not believe Go belongs in this group of candidates. The garbage collection alone makes it unfit for systems programming. I see Go as a very reasonable choice to replace Java (but I don’t use Java whenever I have the choice, so I might not be the best person to ask). There are many other parts of this language that rub me the wrong way, but mostly, I just think it’s not a good systems language (but is instead a great intro language for higher-level stuff).

          Swift is really easy to rule out: It’s not cross-platform. Even if it were, it has all sorts of terrible issues (have they fixed the massive compile times yet?) that make it a no-go.

          D, as far as I can tell, manages to be C++ without the warts in a really lovely way. Having said that, it seems like we’re talking about good replacements for C, not C++, and D just doesn’t cut it for me. GC by-default (being able to turn that off is good, but I’ll still have to do it every time), keeping name mangling by-default, etc. -betterC helps with some of this, but at that point, there’s just not enough reason for me to switch (especially with all the weirdness of there being two de facto standard libraries from different organizations, one of which I think is still closed-source? sounds like I might need to take another look at D; though, again, its emulation of C++ still suggests to me that it won’t quite cut it).

          Rust is the only language in this list that I think is actually a reasonable contender. Sadly, it still suffers from a lot of these issues: names are still mangled by default, the generated binaries are huge (I’m still a little bugged that a C hello-world dynamically linked against glibc comes in around 6KB), and so on.

          But more than all of these things I’ve listed, the problem I have with these languages is that they all have a pretty big agenda (to borrow a term from Jon Blow). They all (aside from C++, which has wandered in the desert for many years) have pretty specific goals in how they are designed, trying to carve out their ideal of what programming should be instead of providing tools that allow people to build what they need.

          So, as for languages that I think might (someday, not soon really) actually replace C (for me):

          Zig strikes a balance between C (plus compile-time execution to replace the preprocessor) and LLVM IR internals which allow for incredibly fine-grained control (Hello arbitrarily-sized, fixed-width integers, how are you doing?). It also manages to have the best C interop story I have ever seen in a language so far (you can import from C header files no problem, and zig libraries can have their code called from C also no problem; astounding).

          Myr is still really new (so is Zig really), and has a lot left to figure out (e.g., its C interop story is not quite so nice yet). However, it manages to be incredibly simple and terse for a braced language. My guess is that, in the long run, myr will actually replace shell languages for me, but not C.

          Jai looks incredibly cool and embraces a lot of what I’ve mentioned above in that it is not a big agenda language, but provides a lot of really useful tools so that you can use what you make of it. However, it’s been in development for four years and there is no publicly available implementation (and I am worried that it may end up being closed-source when it is released, if ever). I’m hoping for the best here, but am expecting dark storms ahead.

          Okay, sorry for the massive post; let me just wrap up a few things. I do not mean to imply with this post that any of the languages above are inherently bad or wrong, only that they do not meet my expectations and needs in replacing C. For a brief sampling of languages that I love which suffer from all the problems I mentioned above and more, see here:

          • Haskell and Idris, but really Agda
          • Ada
          • Lua
          • APL (specifically, the unicode variants)
          • Pony

          They are all great and have brilliant ideas; they’re just not good for replacing C. :)

          Now then, I’ll leave you all to it. :)

          1. 15

            (especially with all the weirdness of there being two de facto standard libraries from different organizations, one of which I think is still closed-source?)

            That was resolved a few years ago. D just has one stdlib, it’s fairly comprehensive, and keeps getting better with each release.

            1. 12

              a few years ago

              a bit of an understatement! The competing library was dropped in 2007.

            2. 10

              Despite how often people place it here, I do not believe Go belongs in this group of candidates. The garbage collection alone makes it unfit for systems programming. I see Go as a very reasonable choice to replace Java (but I don’t use Java whenever I have the choice, so I might not be the best person to ask). There are many other parts of this language that rub me the wrong way, but mostly, I just think it’s not a good systems language (but is instead a great intro language for higher-level stuff).

              Agreed with this. Go is my go-to when I would otherwise need to introduce dependencies to a Python script (and thus fuck with pip --user or virtualenv or blah blah blah), for high-level glue code between various systems (e.g. AWS APIs, etc.)

              I think there’s a reason Go is dominating the operations/devops tooling world - benefits of static compilation, high level, easy to write.

              Look at the amount of hacks Docker needs to do things like reexec etc. to work properly in Go, that would be trivial to do in C.

              1. 3

                Note that Zig is x86-only at the moment. Check “Support Table” on Zig’s README.

                For that matter, Rust is x86-only too, if you want Tier-1 support.

                1. 3

                  I’m a big d fan, but I agree that it’s the wrong thing to replace c. Betterc doesn’t really help in this respect, because it doesn’t address the root reason why d is the wrong thing to replace c (which is that the language itself is big, not that the runtime or standard library are). Personally, I think zig is the future, but rust has a better shot at ‘making it’, and the most likely outcome is that c stays exactly where it currently is (which I’m okay with). I haven’t looked at myr (yet), and afaik isn’t jai targeted at game development? It might be used for systems programming, but I think it might not necessarily do well there.

                  It also manages to have the best C interop story I have ever seen in a language so far (you can import from C header files no problem, and zig libraries can have their code called from C also no problem; astounding)

                  I think nim does this, and d for sure does, with d++ (I think this may also help with the c++ emulation? Also, I’m not sure why you’re knocking it for its lack of quality c++ emulation when it’s afaik the only language that does even a mediocre job at c++).

                  1. 1

                    I agree!

                    As for knocking D for emulating C++, I did not mean to suggest that doing so is a count against D as a language, but rather a count against it as a replacement for C. Since I already ruled out C++, if a given language is pretty close to C++, it’s probably also going to be ruled out.

                    It’s been a long time since I looked at nim, but generating C code leaves a really poor taste in my mouth, especially because of some decisions made in the language design early on (again, I haven’t looked in a while, perhaps some of those are better now).

                    As for Jai, yes it’s definitely targeted at game development; however, the more I look at it, the more it looks like it might be reasonable for a lot of other programming tasks. Again though, at the moment, it’s just vaporware, so I offer no guarantee on that point. :)

                  2. 2

                    However, it’s been in development for four years and there is no publicly available implementation

                    I’m amazed that Jonathan Blow is behind the project; he’s a legend for programming original video games.

                  1. 11

                    Nice. If you distribute pre-compiled binaries, please gpg-sign them and perhaps provide sha512 checksums of them as well.

                    1. 5

                      Thank you. I was planning on GPG signing and using SHA256. Is that OK?

                      I also hope to make the build reproducible on linux, using debian’s reproducible build tools.

                      1. 3

                        Reproducible builds would be awesome.

                        As for SHA256 vs. SHA512, from a performance point of view, SHA512 seems to perform ~1.5x faster than SHA256 on 64-bit platforms. Not that that matters much in a case like this, where we’re calculating it for a very small file, and very infrequently. Just thought I’d put it out there. So, yeah, SHA256 works too if you want to go with that :)

                        1. 2

                            Also remember that defaulting to SHA-1 or SHA-256 means hardware acceleration might be possible for some users.

                          1. 2

                            SHA-1 has been on the way out for a while, and browsers refuse SHA-1 certificates these days. It might be a good idea to just skip SHA-1 entirely and rely on the SHA-2 family.

                            1. 1

                              True. I was just noting there’s accelerators for it in many chips.

                            2. 2

                              Isn’t SHA-512 faster on most modern hardware? ZFS uses SHA-512 cut down to SHA-256 for this reason, AFAIK.

                              A benchmark: https://crypto.stackexchange.com/questions/26336/sha512-faster-than-sha256

                              1. 1

                                  Oh, I don’t know. I haven’t looked at the numbers in a while. I recall some systems, especially cost- or performance-sensitive ones, stuck with SHA-1 over SHA-256 years ago when I was doing comparisons. It was fine if basic collisions weren’t an issue in the use case.

                                1. 4

                                  Anecdotal, but I just timed running sha 512 and 256 10 times each, on a largeish (512MB) file. Made sure to run them a couple of times before starting the timer to make sure it was in cache. Results for sha-512 were:

                                  27.66s user 2.86s system 99% cpu 30.562 total
                                  

                                  And 256:

                                  42.18s user 2.72s system 99% cpu 44.943 total
                                  

                                  So it looks like sha-512 pretty clearly wins. (CPU is an i3-5005u).

                                  1. 2

                                    Cool stuff. Modern chips handle good algorithms pretty well. What I might look up later is where the dirt-cheap chips are on offload performance and if they’ve upgraded algorithms yet. That will be important for IoT applications as hackers focus on them more.

                                  2. 0

                                    You should probably be sure to have your facts straight before giving security advice.

                                    1. 1

                                      I said there are hardware accelerators for SHA-1 and SHA-2. Both are in use in new deployments, with one sometimes used for weak-CPU devices or legacy support. Others added more points to the discussion with current performance stats, something I couldn’t comment on.

                                      Now, which of my two claims do you think is wrong?

                                      1. 3
                                          1. As noted, SHA-1 has been on its way out for a while and shouldn’t be suggested.
                                          2. I don’t know if your claim about weak-CPU devices or legacy support is true; plus, although you mentioned IoT in a response elsewhere, that clearly doesn’t apply in the context of FileZilla, an FTP app people will be running on desktops/laptops. Even if one is using a new ARM laptop that is somewhat underpowered…
                                          3. As the comment you responded to points out, one installs new software quite infrequently, so the suggestion based on performance seems odd, especially since that comment already points out that SHA-512 is generally faster to compute than SHA-256. In any case, suggesting SHA-1 for performance reasons seems insecure.
                          2. 2

                            Ideally, OP would also get a code-signing certificate from Microsoft to decrease the number of warnings Windows spouts about the executable.

                          1. 10

                              Something interesting I learned while working on a shell: GNU readline was extracted from bash, while curses was at least partly extracted from vi!

                            In other words, the libraries were extracted from applications, rather than written from the ground-up as libraries. That seems to be a more fool-proof way to design a complete and usable (if not optimal) interface.

                            https://en.wikipedia.org/wiki/Curses_(programming_library)

                            1. 1

                              Huh, I always thought curses was written from whole cloth for rogue?

                              1. 1

                                  Hm, yeah, it’s possible I misread / overstated. The Wikipedia article says that vi predates curses, and some code in curses was borrowed from vi. But I guess it’s also true that rogue was the first application that used curses?

                                Sometimes it is incorrectly stated that curses was used by the vi editor. In fact the code in curses that optimizes moving the cursor from one place on the screen to another was borrowed from vi, which predated curses

                                The first curses library was written by Ken Arnold and originally released with BSD UNIX, where it was used for several games, most notably Rogue

                            1. 6

                              Something I hope to be covered is text reflowing, where you can resize your terminal and have the text flow to the new size. I’ve found it difficult to find a minimal terminal like Terminator which also supports this feature.

                                Something I can’t ever shake is the feeling that iTerm2 for macOS is the best terminal emulator; it consistently innovates and pushes the boundaries of what a terminal emulator can do … but without feeling bloated.

                              1. 4

                                FYI, terminator isn’t really “minimal.” In interface, sure, but it uses the heavy/featureful vte, notably used in gnome-terminal.

                              1. 15

                                Given how much confusion is created by systems which do allow “foo.bar” and “foobar” to be different email addresses in the same domain, for different users, Gmail saying “we won’t allow that” is wonderful. Given how often people don’t correctly write down dots or whatever when copying email addresses, Gmail’s behavior is also good for getting the mail to just flow.

                                Saying Netflix shouldn’t have to have insider knowledge misses that (1) they made assumptions which required that insider knowledge, and (2) most sites make insider assumptions. Continuing with 2 for now: every site is allowed to have whatever rules they want for the left-hand-side (LHS), and per the standards the left-hand-side is case-sensitive. If I want “bar@” and “bAr@” to be different email addresses, that’s my business. Any email handling system which generally loses case of the LHS is, technically, broken. The federation used by email allows whatever systems are responsible for a given domain to have complete control over the semantics of the LHS.

                                In practice, the most widely deployed LHS canonicalization is almost certainly “be case-insensitive”, followed by “have sub-addresses with + or perhaps -”. IMO, the Gmail dot handling is incredibly sane and everyone running mail-systems should seriously consider it.

                                If I went out filing bugs against systems which made the case-insensitive assumption, then I’d be dismissed as a crazy person. In practice we (almost) all accept that some assumptions will be made. If you want to be safe, or not have to make assumptions, then validate the email addresses used at signup.

                                A friend had some issues with his wife because four different people had signed up for Ashley Madison using his email address (first-name @ gmail.com) and A-M never validated. Perhaps the potential consequences here highlight why not validating email addresses at sign-up or email address change should be interpreted (legally) as reckless negligence. If you’re going to decide that you don’t need to validate, then you assume responsibility for knowing about the canonicalization performed by every recipient domain. So the author of this piece is flat wrong: the moment Netflix decided to not bother validating email addresses, while also using email addresses as authentication identifiers, they assumed complete responsibility for the security consequences of having correct information about canonicalization used in every domain, to keep their authentication identifiers distinct.

                                (disclosure: as well as the hat, I’m also a former Gmail SRE, but had nothing to do with this feature)

                                1. 1

                                  Why not just disallow . in email addresses?

                                  1. 1

                                    About 40 years too late to decide to start restricting what can be on the LHS. That’s entirely up to the domain. You can have empty strings, SQL injection attacks, path attacks and more, because you can have fairly arbitrary (length-restricted) strings, if you use double-quotes. The LHS without quotes is an optimization for simple cases.

                                    Given that there exist today domains where the dot matters, and fred.bloggs != fredbloggs, the two instead belonging to different people, any site which disallows dots at sign-up will cut off legitimate users.

                                    Just validate.

                                1. 1

                                  would love to know what extensions he’s using in sublime for compile checking

                                  1. 3

                                      I have no idea if they’ve been using it, but a possibility could be Facebook’s fbinfer or Findbugs. I have used neither, so I can’t vouch for them, but from what I’ve seen, they seem to do their job quite well.

                                    1. 1

                                      nice, ya i’ll give them a go

                                    2. 2

                                      Open another terminal window. Run make. For each error that pops out, fix it with the editor. Rerun make as required.

                                      It’s not unusual for me to have three terminals open each with my preferred text editor, another term open for making the project, and another term open for running and debugging. I’ve yet to find an IDE that won’t crash on me (for over 25 years I’ve yet to find an IDE that lasts longer than 5 minutes without crashing).

                                      1. 1

                                        And with the Acme editor this process is even simpler, for every error that pops up in make (running in an acme window), you just right click on the file name and it will open or switch to the file with the cursor on the right line/column.

                                        1. 4

                                          Actually I found recently that Vim can do this, if you do:

                                          make > errors.log
                                          lint > errors2.log  # another good example
                                          vim errors.log
                                          

                                          Then just do “gf” over the error messages, and it parses “foo.py:32” in the correct way and jumps there! Works surprisingly well.

                                          Unix!

                                          1. 2

                                            That’s really cool, though the part I like about acme is that it does all this composition of programs in a way that allows even a computer illiterate to use it. I mean anybody can right click on some text.

                                          2. 1

                                            Yes, my text editor can do that. I even played around with that feature for a bit. I still find it faster using separate terminal windows.

                                            1. 1

                                              I tend to use acme windows as my terminal, so opening another acme window is sort of like running a separate terminal.

                                              Though I do want to explore the idea of the “universal user interface”. acme is really nice, but it’s a shame that it manages its own windows; it would be interesting to have a version of acme that worked a bit more like Kakoune. The text editor could simply be a graphical shell that talks to a text-editing server and to other programs and doesn’t actually do anything itself; that way it could probably be lightning fast, super flexible, and easier to prove correct.

                                              1. 2

                                                it’s a shame that it manages its own windows, it would be interesting to have a version of acme

                                                I’m fantasizing about this from the other perspective: it would be nice to have a rio evolution that allows you to edit, click, and run anything on your screen.

                                                I imagine an editable text toolbar on top of my screen containing things like

                                                F1|Mail F2<>Edit Ctrl+F3^Open …

                                                Where F1 pipes selected text to a mail program, F2 opens selected text in a super simple editor (simpler than acme and sam) that, once saved, replaces the selection, and Ctrl+F3 sends the selected text to the plumber with the Open command…

                                                I guess you get the point…

                                                1. 1

                                                  it would be nice to have a rio evolution that allow you to edit, click and run anything on your screen.

                                                  I feel like rio could be a program that only manages windows and offers a tag window for every window (or only one on top). The tag window itself could be running a version of acme that doesn’t do window management.

                                                  1. 1

                                                    Frankly, this is something I have just fantasized about for a while.

                                                    The idea is that you should have the ultimate IDE available from boot.

                                                    Give Oberon-07 a try if you want an idea of what I mean: Oberon was actually an inspiration for Acme’s UI design.

                                                    But I’m still working at protocol/kernel level issues in my os, so this is not something I investigated seriously.

                                                    1. 2

                                                      Yeah I’m familiar with Oberon, though I’ve never actually experimented with it. I’m thinking that a modern acme could just be a graphical window with mouse chording, the text editing could be handled by another program, like the way xi-editor frontends work.

                                                  2. 1

                                                    Yes, my text editor can do that as well. I can highlight some text, hit the right key (it’s not F2 but some other key my fingers remember) to feed the selection through an arbitrary command and then …

                                                    And it’s not even a new text editor, but one I’ve been using for over twenty years.

                                            2. 1

                                              This is my workflow as well, although I sometimes have the terminal running inside a text editor for ease of navigation.

                                              entr is a godsend in this regard, as it offers a language and tooling agnostic way to re-run tests/builds when files change.
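
                                              As a sketch of what that looks like in practice (the `make test` target here is an assumed example, not something from the thread), you pipe a file list into entr and it re-runs the command on every change:

                                              ```shell
                                              # Watch all C sources and headers; on any change, clear the
                                              # screen (-c) and re-run the test target.
                                              ls *.c *.h | entr -c make test
                                              ```

                                              Because entr just reads filenames on stdin and runs an arbitrary command, it works the same way for any language or build system.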

                                              1. 1

                                                I do the same. Do consider using something like screen or tmux, however, for a substantially nicer workflow.

                                                1. 1

                                                  The last time I used screen in my workflow was when I had to dial up via modem into the school network (through a terminal server). Once ISPs and being part of the Internet became a thing, my use of screen has dropped. Yes, I still create new terminal windows when logging into a remote server.

                                                  The only time I use screen these days is if I have to remote into a server and keep a program running while not logged in. It’s useful, but not a common use case.

                                              1. 1

                                                I am planning to run FreeBSD on my laptop, but can’t yet because it doesn’t support my integrated graphics card.

                                                1. 1

                                                  While it’s not generally recommended, depending on how adventurous you feel you might try your luck with FreeBSD 12, i.e. the CURRENT/HEAD branch. It’s what TrueOS uses, and while I am not a fan of it, looking at which revision they currently use might give you a reasonably stable system to play around with. The latest revision of FreeBSD is usually pretty usable and people do run it, but don’t use it for anything critical: you don’t want to find out that your RNG wasn’t actually secure, or something.

                                                  For playing around, and maybe seeing whether you’d actually enjoy running it in the future, it might be enough, though.

                                                  1. 1

                                                    Oh, I’ve used FreeBSD in the past, I know I want to run it. When I tried TrueOS, the installer didn’t install a bootloader (??) and in any case, it seems a bit bloated. Just waiting for 11.2 and drm-next-kmod.

                                                1. 12

                                                  It’s an intriguing proposal, and I think we will see some experiments in this direction in the near future.

                                                  To be skeptical on one point, though: I’m not ready to say this particular claim is wrong, but it does make me raise an eyebrow:

                                                  Rust only imposes one opinion on your program: that it should be safe. I would argue that the only real difference at the runtime level between programs you can write in LLVM and programs you can write in Rust is that Rust requires all of your programs to be safe, i.e. not have memory errors/data races.

                                                  Is that really the only opinion it imposes on your program? I mean if it were, that would be no real restriction, because you can always drop into unsafe. But consider C as a target language. It doesn’t even attempt to require your program to be memory-safe, and yet a number of compiler implementers have found C as a target language limiting. Is Rust less limiting than C as a target language? It might be, but this post doesn’t lay out an argument in that direction.

                                                  For example, the Haskell compiler GHC deprecated its C backend and now targets LLVM primarily. As I understand it, one of the primary issues was garbage collection. The post does mention that Rust “needs a battle-hardened GC”, but it’s not necessarily the case that the target language needs one GC. That would be useful as a way of getting going and might suffice in many cases, but to avoid limiting the kinds of languages and performance profiles you can support, a compiler IR needs to provide low-level enough access for the source language to plug in its own GC strategies, and ideally its own memory representations too. It’s possible the Rust internals alluded to provide this better than C does; admittedly I know nothing about what kinds of useful-for-compilers internals the Rust toolchain exposes.

                                                  1. 4

                                                    Well, to be clear, the opinion that Rust imposes is that whatever you generate must be a valid Rust program. It uses Rust’s syntax, static semantics, and dynamic semantics. However, by contrast with C, Rust’s type system is expressive enough to permit a wide range of translations. No respectable functional language would use C’s type system as its own, but potentially Rust’s would suffice. In contrast to LLVM, I believe that Rust provides a number of benefits (package manager, type system, for loops, etc.), but that unlike compiling to a higher-level target (e.g. Java, like Scala/Clojure, or JavaScript, like all web languages), compiling to Rust doesn’t preclude most low-level optimizations.

                                                    It’s a fair criticism that Rust might not be able to support a garbage collector efficient enough for practical use, since that hypothesis has yet to be empirically validated. I’m optimistic, although the experience of the GHC devs does point the other direction.

                                                    1. 5

                                                      expressive enough to permit a wide range of translations

                                                      That’s a bit less exciting than “Rust imposes no limitations other than safety” (which isn’t even true – a transpiler that did its own safety checks might have an easier time generating Rust that rustc couldn’t verify and then wrapping it with unsafe{}).

                                                    2. 2

                                                      GC is one concern; I don’t want to use libgc everywhere and call it good enough, for example. Good enough to start, sure. Another example is exception implementation. If C is the target, then you’ll implement exceptions with setjmp/longjmp, with runtime record-keeping overhead. If LLVM is the target, then presumably you can plug into the same zero-cost exceptions that are used in C++. I don’t know what the exception-handling story is with Rust; I haven’t looked, to be honest.
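
                                                      As a minimal sketch of the setjmp/longjmp approach (the names `might_fail` and `safe_call` are illustrative, not from any real runtime; a real implementation would keep a stack of handlers rather than a single buffer):

                                                      ```c
                                                      #include <setjmp.h>
                                                      #include <assert.h>

                                                      /* One jump buffer stands in for the runtime's handler stack. */
                                                      static jmp_buf handler;

                                                      static void might_fail(int fail) {
                                                          if (fail)
                                                              longjmp(handler, 1); /* "throw": unwind to the matching setjmp */
                                                      }

                                                      /* Returns 1 if an "exception" was caught, 0 if the call completed. */
                                                      static int safe_call(int fail) {
                                                          if (setjmp(handler) == 0) { /* "try": returns 0 on the first pass */
                                                              might_fail(fail);
                                                              return 0;
                                                          }
                                                          return 1; /* longjmp lands here, re-entering setjmp with value 1 */
                                                      }

                                                      int main(void) {
                                                          assert(safe_call(1) == 1);
                                                          assert(safe_call(0) == 0);
                                                          return 0;
                                                      }
                                                      ```

                                                      The record-keeping overhead mentioned above comes from having to save the register state at every try-equivalent, even when no exception is ever thrown – exactly what the zero-cost table-based scheme avoids.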