1. 12

  2. 16

    Having spent the last 6 months working in Rust, I’d disagree with his conclusion that Rust is prideful about pushing the borrow checker.

    The language is explicit about how you should use it - the rules are different from what someone coming from C++ or C# might expect. It is to be expected that you fight the compiler - it’s a different way of thinking. That doesn’t mean Rust is right, but it also doesn’t mean it’s wrong. It’s an opinion.

    This is similar to new users of Haskell fighting purity. Just because Haskell is pure doesn’t mean it’s right or wrong - it’s a design choice that you have to learn to work with.
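
    A minimal sketch (not from the article, just for illustration) of the kind of code the borrow checker rejects but that someone coming from C++ or C# might expect to compile:

    ```rust
    fn main() {
        let mut names = vec![String::from("a"), String::from("b")];

        // Take an immutable borrow of the first element...
        let first = &names[0];

        // ...then mutate the vector while that borrow is still live.
        // The C++/C# equivalent compiles (and may invalidate `first` at
        // runtime); rustc rejects this: cannot borrow `names` as mutable
        // because it is also borrowed as immutable.
        names.push(String::from("c"));

        println!("{}", first);
    }
    ```

    Once you accept the rule - no aliasing while mutating - the fix is usually mechanical (do the push first, or clone the element), but it really is a different way of thinking.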

    1. 1

      I think we have to admit that, for most people, the “just get it to compile” -> runtime error -> printf debugging loop is preferable, even if only because it feels more productive.

      1. 8

        There is quite a large class of bugs that Rust purports to fix in which you only get a runtime error if you’re lucky. At least, when compared to C or C++.
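
        A hypothetical example of that class: returning a reference to a local. The C version (returning the address of a stack variable) compiles and fails, if at all, at some later point at runtime; rustc refuses to compile the equivalent:

        ```rust
        // Rejected at compile time: `s` is dropped when the function
        // returns, so rustc reports "cannot return reference to local
        // variable". The C analogue is undefined behavior that often
        // appears to work until it doesn't.
        fn dangling() -> &'static String {
            let s = String::from("temporary");
            &s
        }

        fn main() {
            println!("{}", dangling());
        }
        ```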

        1. 0

          You’re right. It’s closer to a (wait for the world to explode->printf debugging->wait again) loop. This goes along with the metaphor of building software as construction. There’s a whole group of people who just want to duct tape that leak under the sink. They aren’t building a house or even a shed, as much as they are temporary tenants. You know you won’t be living in the house forever, so it’s not worth it to actually fix the problem.

          1. 2

            I do think most source code doesn’t live long enough for anyone to care, but shouldn’t systems be built to last? I think Rust is a better systems programming language than C or C++ in that sense.

            1. 3

              I do think most source code doesn’t live long enough for anyone to care

              Even then, Wirth’s languages showed you could get fast compiles, be safe by default, have a clean module system, and have concurrency built in. C the language is still worse if the aim is to quickly throw together code that doesn’t crash or get hacked as often.

            2. 1

              I’d contend that maybe those folks shouldn’t be building systems :). Until you have to deal with servicing a huge number of client machines, it doesn’t really sink in how much those guarantees help.

          2. 2

            I’d disagree. That feels considerably less productive for systems programming. In fact, it’s infuriating. I mostly work in client-side software developed on a large-to-huge scale. Runtime failures are the last thing I want to deal with - it means I have to update upwards of 500k clients.

            While it might be acceptable to deal with compile->runtime error->printf debug on the server side, it’s hardly a good solution on the client side, even if that was how we dealt with it for many years.

            1. 1

              Yes, I agree that certain tasks require different tools. I was specifically trying to point out that the generalization applies to most people - e.g. quick data analysis jobs, internal web UIs, etc. Obviously dynamic or interpreted languages are better for such tasks than something like C. Personally, I see the future of C as being microcontroller projects or toy ISAs, where you care about ease of implementation, while better-defined languages take over primary systems. That may take another half-century at this rate, though.

            2. 1

              Well, there are those who feel that compile/type errors hold back their unbounded creativity, but that doesn’t mean those analyses are bad.

            1. 2

              Not sure why this was posted again so soon…

            2. 5

              C matched the programming metaphors in UNIX because the two were developed contemporaneously. That is why it was such a useful systems programming language, just as PL/1 was for MULTICS.

              If we evaluate systems programming languages against current needs inside the kernel/OS/utilities/libraries, won’t we come up short because the metaphors don’t match well anymore? Sure, we’ve stretched things with extensions to C, but that just makes C suffice for the need, and no one will attempt to match metaphors to any new language because that means shooting at TWO moving targets (the language and what it’s used for).

              Perhaps the success of C/UNIX derivatives forestalls their necessary concurrent replacement?

              1. 5

                After watching all the language wars play out, I started pushing the idea that a C replacement should keep C’s style and compatibility as much as possible: the same data types and calling conventions, automatic C FFI, extraction to C so its compilers can be reused, whatever. Maybe stay close on the syntax, like some competitors did. Then, on top of that, build something better without C’s shortcomings. That might be a better module system, concurrency, macros, live coding a la Lisp/Smalltalk, and so on. It should leverage the experience of those converting, plug right into the existing ecosystem to get all its benefits, and allow incremental rewrites of legacy codebases.

                Clay and Cyclone at least attempted something like this on the safety side. I found another one that did so on the productivity side, which I’m submitting Sunday. I don’t see many attempts, though. Most take people far away from the C model and then try to do something in a C-model system.
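
                Rust isn’t that replacement as described, but as a rough sketch of what the “same data types, calling conventions, automatic C FFI” piece looks like in practice today (the type and function names here are made up for illustration):

                ```rust
                /// C-compatible layout, so existing C code can share this
                /// struct directly, with no conversion layer.
                #[repr(C)]
                pub struct Point {
                    pub x: f64,
                    pub y: f64,
                }

                /// Exported under the C calling convention with an unmangled
                /// name, so legacy C callers can keep linking against it while
                /// the rest of the codebase is rewritten incrementally.
                #[no_mangle]
                pub extern "C" fn point_norm(p: Point) -> f64 {
                    (p.x * p.x + p.y * p.y).sqrt()
                }
                ```

                Built as a staticlib or cdylib crate, that drops into an existing C build; the harder part is the rest of the wishlist (modules, macros, live coding, and so on).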

                1. 4

                  I’m keeping an eye on zig.

                  1. 3

                    D.

                    1. 2

                      That’s a quite reasonable concern; I share it as well, because minimalism has a fundamental advantage in and of itself when it comes to composition.

                      I think what replaces all of C (and all of UNIX) will likewise be such “composable minimalism”. But I’m not convinced that it will be “C like” or “UNIX like” at all, because those metaphors fit this modern environment only incompletely.

                      I greatly enjoyed working in Python for its clarity and focus on writing concise and understandable code, but with PEP 572 compelling van Rossum to step down as Python BDFL, one can see the limits of how far that can be taken. (He’s very emotional about what I regard as an “overreach”.)

                      I’ve been wrestling with Rust, attempting to rewrite my earlier kernel work in it in place of C, and it does have definite advantages. However, unlike with C and Python, too much is “lost in translation” - the code becomes obscure. It gets back to Ken Thompson’s comment in UNIX V5/6, “you are not expected to understand this”, about his backwards coroutine context switch.

                      So again we are at a crossroads - we might need a new metaphor, but not have it in sight yet.

                      You want more low-level hardware “involvement”, but you also want the logic to become more densely abstract to deal with complexity. You want greater “stop on a dime” debugging, but also “obviousness” in exposition to avoid much need for awkward comments. I’ve been thinking about AR/ML as a means of doing augmented development to bridge these, but we’ll see.

                      1. 1

                        As far as low-level Rust goes, you might find this work interesting, given it’s about composing abstractions to deal with low-level stuff in embedded. The Tock people are also publishing interesting stuff. For now, I’m not sure if you were having problems due to the language itself, the abstractions you were using, or some combination. Rust programmers are still in the early exploration stage for that stuff.

                        There’s also the possibility, which I encourage, of using safe/proven C or assembly for the unsafe stuff, with safe Rust for the rest, where it can support better abstraction. By safe/proven, I’m talking about type systems like Cyclone/Clay or automated solvers like Frama-C/SPARK. Even if done manually, there’s probably only a small amount of code we’d need specialists for. If we’re doing generic components with reuse, then even that might be reduced. To be clear, I’m just brainstorming here based on all the reuse I’m seeing in Why3-based, CompCert-based, and Myreen et al.-based work.
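
                        A sketch of that split, with a made-up name standing in for a separately proven C routine:

                        ```rust
                        use std::os::raw::{c_int, c_uchar};

                        extern "C" {
                            // Hypothetical routine implemented in C and verified
                            // separately (e.g. with Frama-C); all the unsafety is
                            // concentrated behind this declaration.
                            fn verified_crc32(buf: *const c_uchar, len: usize) -> c_int;
                        }

                        /// Safe Rust wrapper: the slice guarantees a valid
                        /// pointer/length pair, so callers never write `unsafe`
                        /// themselves and keep the richer Rust abstractions.
                        pub fn crc32(data: &[u8]) -> i32 {
                            unsafe { verified_crc32(data.as_ptr(), data.len()) }
                        }
                        ```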

                        Re AR/ML: I’ve been thinking of them for verification more than debugging. I just don’t have the depth required to go past “attempt to apply the generic methods that work on everything and see what each accomplishes.” Monkey try thing, monkey see it work, monkey do more of that. That suggestion isn’t worth a Lobsters or IEEE submission, though. ;)

                  2. 4

                    It is really surprising he didn’t find jbuilder, which is the build system that seems to be displacing all the other build systems in OCaml and has a passable learning curve.

                    Maybe because he uses Reason syntax, which is probably not supported very well, if at all.

                    1. 3

                      It’s called Dune now: https://github.com/ocaml/dune

                      1. 1

                        Dune has no releases yet, so for now it still is jbuilder.

                      2. 1

                        Dune supports Reason syntax out of the box and will pick up all *.re files automatically.