1. 35

  2. 14

    > The biggest issue I have with the defaults, and the borrow checker is that places in FP where you would normally pass by copy — pass by value, in Rust instead it assumes you want to pass by reference. Therefore, you need to clone things by hand and pass the cloned versions instead. Although it has a mechanism to do this automatically, it’s far from ergonomic.

    > The argument of pass by reference, or borrowing is that it’s more performant than cloning by default. In general, computers are getting faster, but systems are getting more complex.

    It’s actually not the case that computers are getting faster in general anymore: Moore’s law has been slowing as we get closer to fundamental physical limits on how small we can build transistors, and effective clock speeds haven’t increased significantly for about a decade now.

    Consequently, programmers should be warier than they are in practice about using non-performant but easy-to-write languages and constructs. Even setting aside the fact that Moore’s-law gains can no longer be counted on, it’s easy for people writing in the middle or towards the top of a large software stack to write non-performant code that stacks on top of other people’s non-performant code, leading to user-visible slowdown and latency even on modern, fast hardware. This is one of the huge issues with applications built on the modern web (in fact, my browser is chugging a little as I write this in the text box, which really shouldn’t be happening on a 2018 computer, and I think it’s the result of a shitty webapp in another tab).

    In any case, one of Rust’s explicit design goals is to be a useful modern language in contexts where minimal use of computing resources like CPU time and memory is important, which is exactly why Rust generally avoids copies unless you explicitly tell it to with .clone() or something similar. Personally, I’ve written a fair amount of Rust code where I do make inefficient copies to avoid complexity (especially while developing an algorithm that I plan to make more efficient later), and I don’t find it particularly onerous to stick a few .clone()s here and there to make the compiler happy.
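    As a small illustration of the trade-off being discussed (the function and variable names here are just for the example): borrowing lets the caller keep using a value, while passing by value moves it, so an explicit .clone() is how you keep the original around.

```rust
// Summing by borrowing: the caller keeps ownership of the vector.
fn sum(v: &[i64]) -> i64 {
    v.iter().sum()
}

// Summing by value: this takes ownership and consumes the vector.
fn consume(v: Vec<i64>) -> i64 {
    v.into_iter().sum()
}

fn main() {
    let data = vec![1, 2, 3];

    // Borrowing: `data` is still usable afterwards, and nothing is copied.
    let a = sum(&data);

    // Passing by value: without the explicit .clone(), `data` would be
    // moved into `consume` and unusable below.
    let b = consume(data.clone());

    assert_eq!(a, 6);
    assert_eq!(b, 6);
    assert_eq!(data.len(), 3); // still alive, thanks to the clone
}
```

    Removing the .clone() turns the last assertion into a compile-time “use of moved value” error, which is exactly the kind of nudge the compiler gives.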

    1. 6

      I agree with you, and would go further and say that resource usage always matters. In my opinion, performance is an accessibility issue; programs that care about performance can be used on cheaper/older hardware. Not everyone can afford the latest, greatest hardware.

    2. 6

      A minor nit: thread-local storage is part of real C since C11, and is no longer a GNU extension.

      1. 4

        The argument linked to on the LKML for not using C++ is from 2004. I’m not so sure that it’s accurate anymore. So calling it “good reasons” for not using C++ may be going a bit far.

        1. 6

          I don’t understand why OP didn’t actually progress from the “evaluating Rust based on online documentation” stage to the “trying to implement a solution in Rust” stage. The exact issue being looked at, responding to certain OS signals, is entirely trivial due to this crate, which came up with a simple DDG search.

          > If the language has the problem that people are fighting with the language in order to become productive with it, perhaps something is wrong with the language, and not the programmers?

          This misses the point. Rust is opinionated about how code should handle sharing access to data, and about when data should be shared rather than copied. It has been observed that people often take time to adjust to this, which is noted in the docs. I don’t see why that’s an issue.

          1. 9

            > I don’t understand why OP didn’t actually progress from the “evaluating Rust based on online documentation” stage to the “trying to implement a solution in Rust” stage.

            Author provided a solution in Rust…did you not read the whole article?

            1. 3

              He did somewhat superficial research on which libraries or crates could be used, though.

              It is the same with OCaml/Reason, actually: he complains about the lack of a build system, but the vast majority of packages now use dune (formerly jbuilder), which is trivial to learn and use, well documented, and extremely powerful, and he does not even mention it. He picks Containers as a standard-library replacement, but it is unclear to me whether he even evaluated base or core, for example (even though I like Containers myself). He also mentions the introduction of multicore as something already present, even though it is not yet part of the runtime.
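              For anyone who hasn’t seen dune: a build for a small project is typically just two short files (the paths and names below are illustrative, not taken from the article):

```
; dune-project, at the repository root
(lang dune 2.0)

; bin/dune: describes one executable built from bin/main.ml
(executable
 (name main))
```

              With those in place, dune build compiles everything and dune exec ./bin/main.exe runs it; there is no separate configure step.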

              1. 7

                > somewhat superficial research

                I think that’s pretty realistic for almost anyone in the same situation though. If I’m trying to decide between 3 or 4 different languages for a task that needed to be done yesterday, how much time do I really have to learn all the nuances of the ecosystem?

                So, yeah, complain that he didn’t know about X, or didn’t know about Y… But really, if you are a member of the Z community for which X and Y are relevant, instead of complaining that people missed these well-known tools, help make sure that a newcomer’s first introduction to Z informs them of X and Y. That’d be super helpful.

                1. 1

                  I agree with you, but I think the three communities mentioned above are doing their best to make sure that this is the case.

                  And about the Reason/OCaml stuff: what I mention appears in most Reason tutorials, almost every package, and any recent blog or forum thread, and there are dozens of discussions on the current lack of multicore, so I still believe the research was superficial.

                  That said, I wasn’t claiming that the post is bad per se; I found it quite interesting, and I agree that the current situation is not bad, just suboptimal. I was actually pleasantly surprised to see that he did implement some quite non-trivial example code in all those languages, and even contributed to Pony!

                  EDIT: updated twice to try and clarify my complaints

              2. 2

                Your question confused me for quite a while, as I was unable to find this Rust solution in the article, despite multiple reloads. The page looks like this to me: https://leotindall.com/static/noRustVersion.png

                Eventually, I looked at it on my mobile phone where, mysteriously, there is a code snippet. I was baffled until I realized that, indeed, the only difference between my mobile device and my laptop is… I have JavaScript disabled in my browser on my laptop.

                So, my apologies to OP on this one. I was unfair. On the other hand, this serves as a great cautionary tale: make sure your site works without JavaScript.

                1. 2

                  > make sure your site works without JavaScript.

                  Why draw the line at JavaScript?

                  1. 1

                    Not him, so I can only answer for myself.

                    JavaScript is Turing complete, so it can do almost anything, and it can communicate results beyond my control, so I want the ability to stop that.

                    CSS is also Turing complete, but AFAIK it can’t communicate outside without JavaScript, and unlike JavaScript it’s really hard to make it do something crazy like mine Bitcoin (which is very easy to do with JavaScript).

                    1. 6

                      Now I want to write a Bitcoin miner in pure-CSS.

                      > CSS is also turing complete, but AFAIK it can’t communicate outside without JavaScript

                      Enter: CSS Exfil and friends.

                      1. 0

                        Good lord.

                    2. 1

                      You’re right, I should revise that statement: make sure your site works without requiring me to allow you to execute arbitrary code just to read an article you wrote.

                      To be clear, there is an obvious tradeoff here: syntax highlighting on the server side is annoying. My blog solves this by having an acceptable no-JS fallback that is less pretty but is not missing content.

                    3. 1

                      That sure would be confusing! It looks like the author embedded a gist rather than using Medium’s code tool. I assume you can embed code in Medium without JavaScript…

                2. 4

                  “At the end of the day, nearly all of us run software on the Linux Kernel. The Linux Kernel is written in C, and has no intention of accepting C++ for good reasons. There are alternative kernels in development that use languages other than C such as Fuschia and MirageOS, but none have yet reached the maturity to run production workloads.”

                  First counter: all the software not running on the Linux kernel. Microsoft, Apple, IBM, Unisys, etc. still exist with their ecosystems. Second, the author is conflating two things here with the C justifications, which is too common: the effect of legacy development, and what happens at a later point. For legacy, the software gets written in one language a long time ago, becomes huge over time, and isn’t rewritten for cost/time/breakage reasons. That’s certainly the case for the OS’s, given their size and heritage. Although the legacy kernels are in C, much of Windows is in C++ and much of Mac OS X is in Objective-C. They intentionally avoided C for those new things.

                  Outside those, Burroughs MCP is done in an ALGOL dialect, IBM uses macro-ASMs plus PL/S for some of theirs, and OpenVMS has a macro-ASM plus BLISS in theirs. Those are still sold commercially and run in production. For others in the past, there were LISP machines in LISP variants, Modula-2/Oberon systems (with Astrobe still selling Oberon for microcontrollers), Pascal, FreeBASIC, Prime had a Fortran OS, Ada was used from embedded up to mainframe OS’s, and there are several in Rust now. There’s also the model of writing systems code in a high-level language that depends on a tiny amount of low-level code in assembly or a low-level language. OS projects were done that way in Haskell and Smalltalk.

                  Lots of options. You don’t need C for OS’s. The sheer amount of OS work, libraries, compiler optimizations, and documentation makes it a good choice, but it’s not the only one in widespread use, nor strictly necessary. It’s possible other things will work better for OS developers/users depending on their needs or background.

                  1. 2

                    I don’t believe the Mac OS X kernel uses Obj-C. It’s mostly C, with the drivers in C++.

                    1. 4

                      True (source: I used to write kernel extensions for macOS to do filesystem things). NeXTStep/OpenStep had Driver Kit, an ObjC framework for writing device drivers in-kernel, which appeared in early Mac OS X developer previews but not in the released system.

                      Specifically IOKit (the current macOS driver framework) is Embedded C++, which is C++ minus some bits like exceptions.

                      BTW Cocoa is also a “legacy part”, but it comes from NeXT’s legacy, not Apple’s. IOKit (the C++ driver framework) was new to Mac OS X (it didn’t come from NeXT or Mac OS 8, though clearly it was a thin port of DriverKit) and the xnu kernel was an update of NeXT’s version of CMU Mach 2.5.

                      1. 3

                        Yeah, that’s the legacy part. They just keep using what’s already in there. It was the newer stuff like Cocoa that used Objective-C, IIRC. I should’ve been clearer.

                        1. 1

                          That has nothing to do with the kernel, which negates the point. If anything, it amplifies the C point, because Darwin is written in C.

                          1. 1

                            It doesn’t at all. I already addressed why the kernel stays in C: someone long ago chose C over the other systems languages, and there is now so much code that they can’t justify rewriting it. It’s an economic effect rather than a language effect. Thinking it’s strictly the properties of C is like saying Sun Microsystems wasn’t the cause of the rise of Java, or that Microsoft wasn’t behind the rise of .NET, and so on. People are still being trained in those languages to add to massive codebases that are too big to rewrite. Same thing that always happens.

                            1. 2

                              Big or not (and xnu is biggish), risk is more of a concern than cost. The kernel needs to stay up for a few years between panics, and they’ve already got it there with the C code.

                              1. 1


                        2. 2

                          In addition to all that, I think the systems level domain also has a new growth sector by replacing the kernel with a hypervisor+library combination. Aka, the topic from this talk[1].

                          [1] https://www.youtube.com/watch?v=L-rX1_PRdco

                          1. 2

                            Although I’m still listening to the talk, their robur collective has a great landing page. Lots of good stuff on it. I also like that her team is working on a home router. That’s what I told separation-kernel vendors to build, since it’s (a) hugely important, (b) a huge market, and (c) something that can be done incrementally, starting with virtualized Linux/BSD.

                        3. 2

                          I would add Zig to the list of promising systems languages, and probably also Jonathan Blow’s Jai language.

                          1. 1

                            > If the language has the problem that people are fighting with the language in order to become productive with it, perhaps something is wrong with the language, and not the programmers?

                            Perhaps - but, given how often those “productive” programmers produce software with major flaws, perhaps not.