1. 4

      Aren’t the safety features designed to carry zero runtime cost? The costs are incurred at compile time, so I fail to see how performance of Rust programs would suffer compared to C due to any added safety.

      1. 2

        They have runtime cost because the language won’t let you write the most performant solution. And like, there’s mundane stuff like bounds checking.

        1. 5

          The language does let you write the most performant solution, you just might need to use unsafe, depending on what problem you’re trying to solve. I do a lot of work in Rust on text search and parsing, for example, and I rarely if ever need to write unsafe to get performance that is comparable to C/C++ programs that do similar work. On the contrary, I wrote some Rust code that does compression/decompression, and in order to match the performance of the C++ reference implementation, I had to use quite a bit of unsafe.

          And like, there’s mundane stuff like bounds checking.

          Well, firstly, bounds checking is trivial to disable on a case-by-case basis (again, using unsafe). Secondly, bounds checking rarely makes any difference at all. Notice that the OP asked “nearly as fast as C.” I think bounds checking meets that criterion.
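          For the record, here’s a minimal sketch of what that case-by-case opt-out looks like (the function names are mine, purely illustrative):

          ```rust
          // Summing a slice with and without per-access bounds checks.
          // `get_unchecked` skips the check; the caller must guarantee the
          // index is in range, hence the `unsafe` block.
          fn sum_checked(xs: &[u64]) -> u64 {
              let mut total = 0;
              for i in 0..xs.len() {
                  total += xs[i]; // bounds-checked (often optimized away here)
              }
              total
          }

          fn sum_unchecked(xs: &[u64]) -> u64 {
              let mut total = 0;
              for i in 0..xs.len() {
                  // SAFETY: `i` is always < xs.len() by the loop bound.
                  total += unsafe { *xs.get_unchecked(i) };
              }
              total
          }

          fn main() {
              let xs = [1u64, 2, 3, 4];
              assert_eq!(sum_checked(&xs), 10);
              assert_eq!(sum_unchecked(&xs), 10);
          }
          ```

          The unsafe is confined to a single expression, and the SAFETY comment records the invariant the compiler can no longer verify for you.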

          1. 1

            I wrote some Rust code that does compression/decompression, and in order to match the performance of the C++ reference implementation, I had to use quite a bit of unsafe.

            Why specifically? That kind of code is common in the real world. Figuring out how to do it safely in Rust or with an external tool is worthwhile.

            1. 2

              Unaligned loads and stores. Making it safe seems possible, but probably requires doing a dance with the code generator. That’s the nice part about Rust. If I want to do something explicit and not rely on the code generator, I have the option to do so.
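              To make the unaligned-access point concrete, here’s a hedged sketch (names are mine, not from any actual library) of the two options: a safe byte-copy, and an explicit `read_unaligned`:

              ```rust
              use std::ptr;

              // Safe route: copy the bytes out, then reinterpret. Relies on the
              // optimizer to collapse the copy into a single load.
              fn load_u32_safe(buf: &[u8], at: usize) -> u32 {
                  let mut b = [0u8; 4];
                  b.copy_from_slice(&buf[at..at + 4]);
                  u32::from_le_bytes(b)
              }

              // Unsafe route: tell the compiler directly that this load may be
              // unaligned, instead of hoping it sees through the copy.
              fn load_u32_unaligned(buf: &[u8], at: usize) -> u32 {
                  assert!(at + 4 <= buf.len());
                  // SAFETY: the assert guarantees 4 readable bytes starting at `at`.
                  u32::from_le(unsafe {
                      ptr::read_unaligned(buf.as_ptr().add(at) as *const u32)
                  })
              }

              fn main() {
                  let buf = [0xAAu8, 0x01, 0x02, 0x03, 0x04];
                  assert_eq!(load_u32_safe(&buf, 1), 0x0403_0201);
                  assert_eq!(load_u32_safe(&buf, 1), load_u32_unaligned(&buf, 1));
              }
              ```

              On x86 the two typically compile to the same `mov`; the difference is whether you state the intent explicitly or rely on the code generator, which is exactly the trade-off described above.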

              It’s worth pointing out that the library encapsulates unsafe, so that users of the library need not worry about it (unless there’s a bug). That’s the real win IMO.

              1. 2

                That makes sense. I recall that some projects mocked up assembly in their safe language or prover, giving the registers and so on types to detect errors. Have you thought about doing such a routine in a Rust version of x86 assembly to see if the borrow checker or other typing can catch problems? Or do you think that would be totally useless?

                1. 3

                  It sounds interesting, and I hope someone works on it, but it’s not personally my cup of tea. :-) (Although, I might try to use it if someone else did the hard work building it! :P)

                  1. 3

                    I’ll keep that in mind. For your enjoyment, here’s an example of what’s possible that’s more specialist, rather than just embedding in Rust or SPARK. They go into nice detail about what benefits it brought to Intel assembly, though.

                    https://lobste.rs/s/kc2brf/typed_assembly_language_1998

                    They also had a compiler from a safe subset of C (Popcorn) to that language so one could mix high-level and low-level code maintaining safety and possibly knocking out abstraction gaps in it.

                  2. 1

                    “typed assembly language” makes me think of LLVM IR :P

            2. 3

              Bounds checking almost never matters on modern CPUs. Even ignoring the fact that Rust can often lift the bounds-check operation so it’s not in the inner loop, the bounds-check almost never fails, so the branch predictor will just blow right past. It might add one or two cycles per loop, but maybe not.
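              A small illustration of the lifting point (a common pattern, not something from this thread): re-slicing to a known length up front gives LLVM one bounds check it can verify once, instead of one per iteration:

              ```rust
              // Dot product where the indexing inside the loop is provably
              // in-bounds, so the per-iteration checks can be elided.
              fn dot(xs: &[f64], ys: &[f64]) -> f64 {
                  let n = xs.len().min(ys.len());
                  // One check per slice, up front; the loop body needs none.
                  let (xs, ys) = (&xs[..n], &ys[..n]);
                  let mut acc = 0.0;
                  for i in 0..n {
                      acc += xs[i] * ys[i];
                  }
                  acc
              }

              fn main() {
                  assert_eq!(dot(&[1.0, 2.0, 3.0], &[4.0, 5.0, 6.0]), 32.0);
              }
              ```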

          2. 4

            Or, in many cases: Rust.

            1. 0

              I didn’t mention it because the question asked for “more readable code.” Rust has a steep learning curve, like Ada does; I didn’t mention Ada either. Neither seems to fit that requirement.

              1. 1

                Can you compare and contrast your own personal learning experience between Ada and Rust?

                1. 3

                  Lost most of my memory to a head injury. It’s one of those things I don’t remember. I’m going to have to relearn one or both eventually. I have a concept called Brute Force Assurance that tries to reduce verification from an expensive, expert problem to a pile-of-cheap-labor problem. The idea is that a language like Scheme or Nim expresses the algorithms in lowest-common-denominator form that’s automatically converted to equivalent C, Rust, Ada, and SPARK. Their tools… esp static analyzers or test generators in C, borrow-checker in Rust, and SPARK’s prover… all run on the generated code. Errors found there are compared against the original program, with it changed for valid errors. Iterate until no errors, then move on to the next work item.

                  I’ll probably need to either have language experts doing it together or know the stuff myself when attempting it. I’m especially interested in how much a problem it will be if underlying representations, calling conventions or whatever are different. And how that might be resolved without modifying the compilers by just mocking it up in the languages somehow. Anyway, I’ll have to dig into at least Rust in the future since I probably can’t recreate a borrow-checker. Macroing to one from a Wirth- or Scheme-simple language will be much easier.

            1. 9

              I submit as a counterpoint that, in yosefk’s experience, low-level is easy.

              1. 4

                Yeah I like this. One observation I’ve had: writing C or C++ is relatively easy, but the corresponding Makefiles and autotools are harder.

                Does anyone else feel that way? Once you are situated and understand pointers and all that, it feels pretty easy to write the actual C or C++ code. Particularly if you are only using libc and libstdc++ and not huge abstractions like APR or OpenGL or something (low-level is easy).

                But the hard parts seem like getting the compilers and build system to do what you want… debugging preprocessor directives can be harder than debugging the program. Getting the debugger to run against a binary with the right build flags (for an arbitrary open source project) is harder than debugging it.

                For one, a Unix-style build system for a mature project requires understanding 3 or 4 different languages: shell, make, sometimes awk, {m4/autotools, cmake, Ninja} etc. Whereas C and C++ are 1 or 2 languages.

                1. 3

                  Learning how to use all tools required to be a truly effective C programmer may indeed be approximately as hard as learning basic C.

                  1. 3

                    Where I work, embedded code that runs on our cameras is generally easy to understand, and reads like a captivating yet approachable exploration in low-level systems programming. Our marginally technical product managers can understand this code, and some indeed do send in PRs against it.

                    On the other hand, our externally facing APIs and various data pipelines are a morass of abstractions and concessions to past scalability problems that make even the gnarliest embedded code look like child’s play. There’s a joke around the office that goes something like “embedded tends to suck in everyone from time to time”, but I suspect that people go there to be productive in a simpler, more straightforward world.

                    1. 2

                      One observation I’ve had: writing C or C++ is relatively easy, but the corresponding Makefiles and autotools are harder.

                      Particularly if you are only using libc and libstdc++ and not huge abstractions like APR or OpenGL or something (low-level is easy).

                      Those two statements are almost contradictory.

                      If your project only depends on libc and libstdc++ and it’s not using any third-party libraries, then it’s nobody’s fault but your own if your Makefile is very complicated. Using autotools in that situation buys you nothing but extra complexity and a fancy Makefile.

                      IME it’s much easier to create a Makefile template with the targets you want, and then reuse it for small projects. The Makefile syntax itself can be complicated, but 99.99% of the time it’s best to keep it simple.

                      1. 2

                        Point taken, but if the thesis is “low level is easy” then you can make the comparison between plain C code with no deps, and a huge generated makefile for something with deps.

                        In both cases, the higher level layers are what make things hard. I concede that using and understanding the big platform layers in C are certainly more difficult than a makefile.

                        I guess the overall point is that on a typical big C project you have two sources of complexity and difficulty – build time and runtime. Runtime is to some extent essential, but the mess at build time can be fixed.

                        1. 2

                          Even with simple code and no dependencies, you need a huge messy Makefile if you want to support debug/asan/release builds on win32/win64/Linux/OSX. (doing make clean and changing CFLAGS whenever you want to switch builds is not good enough)

                        2. 2

                          writing C or C++ is relatively easy

                          I don’t feel this way. Writing C that never trips undefined behaviour does not feel particularly easy to me. Also I have to churn out a lot of C code to get it to do fairly trivial things, and then I have to find all the stupid mistakes I left in.

                          I don’t think the actual writing of low-level software is itself easier. You’re implementing algorithms on your hands and knees all the time. You so much as look at cc funny and you get undefined behaviour you might not discover for years. The debugging tools are often broken.

                          I think yosefk is arguing that being a software developer working on low-level software is overall easier, because the part where you actually write the software is somewhat harder, but everything else involved in the job is easier. You have far fewer difficulties caused by, for example, people handing you ill-defined requirements to implement.

                          I think the important bit of the article I linked is this:

                          As a low-level programmer, you have to convince people not to be afraid when you give them something. As a high-level programmer, you have to convince them that you can’t just give them more and more and MORE.

                          1. 2

                            Well, that’s on you for using autotools and their ilk :)

                            Projects under 50,000 lines or so only need a build script like this:

                            cd build && clang++ -Wall -flto ../*.c ../*.cpp
                            

                            For larger projects, modern build systems that output ninja are relatively painless.

                            1. 3

                              It’s relatively painless, but not even close to actually painless.

                              You shouldn’t need a several hundred line script to output a several hundred line build config that a ten thousand line build tool parses to run your compiler. But, C doesn’t include any way to specify how your code should be built and our compilers are crap so incremental and parallel compilation are must haves.

                              Unity builds solve the problem of needing massive build tools, but your code has to be written to work as a unity build from the start and using 3rd party libraries becomes a big pain. It also doesn’t really solve the performance issues but it does help a bit.

                              Compiling your code is one of the biggest pain points with C, and the Jai language is aiming to resolve this by making the build specification part of the language and having a compiler that can build hundreds of thousands of lines per second.

                          2. 2

                            Do you think systems programming is low-level? This is an honest question – whenever I read something about OS or systems development, like distributed platform development, I see fairly high-level things. Hardware development is low-level though… but what are systems if not hardware abstractions, so by definition, a higher level thing?

                            1. 3

                              Yes.

                              Specifically about this James Mickens' essay here: the essay is about the implementations of kernels, virtual machines, device drivers, databases and the like. The pain this essay discusses is specifically the pain of low-level programming, such as how bugs in the thing you’re implementing can also break the tools that you’d like to use to debug it, such as a stray pointer breaking your logging subsystem and now, in Mickens' words, “I HAVE NO TOOLS BECAUSE I’VE DESTROYED MY TOOLS WITH MY TOOLS.”

                              By definition: I think systems programming is “writing stuff intended for applications to be built on top of” so it’s more low-level than applications programming, which consumes the software that systems programmers produce.

                              1. 3

                                An example I have seen is someone fixing a localization bug who added logging to debug/verify that it was working. However, logging actually called into his localization code, which indirectly recursed infinitely, causing the system to hang on boot. This ain’t quite an example of low-level debugging, but it was low enough to break tools your tools rely on.

                              2. 3

                                In yosefk’s sense, low-dependency code is easy. This describes quite a lot of, but not nearly all, systems code. (E.g. a special case in the PostgreSQL query planner probably isn’t low-dependency.)

                            1. 1

                              I did something similar to docopts in Scala, which took away the need to learn yet another API and seems to be working well so far. Have a few CLI tools that use the lib at work.

                              1. 5

                                This line of work isn’t for everyone, which is true for many other professions as well. Look about your place of business, and try to determine how many of your tech-adapted colleagues would do well as police officers, construction workers, bus drivers, airline pilots, or childcare workers. Chances are you’ll find yourself mentally pushing pegs into holes not meant for them.

                                And that’s okay! Not every profession is for everyone. There’s no need to lament the fact with a lengthy essay, no need to chase an illusory “inclusion” that would serve to undermine and defang the very selection process which weeds out those who wouldn’t do well in the job anyway!

                                1. 2

                                  Configuration management tools which are themselves configured via declarative means are, while cumbersome at times, still a step up compared to “just provision servers” libraries that manage to somehow shamble along without an I/O system or a strong type system or referential transparency.

                                  1. 4

                                    On a related note: I’ve tackled the problem of overly verbose type class inheritance encoding in Scala by building a type class syntax using macros. https://github.com/maxaf/dandy

                                    My conclusion was the same: one type class can build upon another not by subtyping but by injecting an instance of the other type class.

                                    1. 2

                                      This is cool. What are the non-goals you mention at the end? Would projects that use Simulacrum such as Cats miss them?

                                      1. 4

                                        I’m specifically not quite enchanted by the chase after ops syntax (i.e. implementing operations such as |+| exposed by semigroups). Likewise, Simulacrum is not after reducing type class syntax boilerplate in general. They’ve been mostly focused on operators, which I personally see as secondary.

                                        My README doesn’t make very clear the intent to data model the hell out of type classes, then expose a concise syntax for instantiating the underlying model. In such an arrangement it would be possible to implement any arbitrary set of type class features.

                                    1. 1

                                      I’m showing off to friends & family my take on a @typeclass Scala macro, writing docs & tests for it. I continue to yield readily to my latent fascination with macros and all the ways in which a language (in this case, Scala) can be enriched by the use of macros.

                                      As far as personal research projects go, this one’s pretty tame, but might also turn out generally applicable, so I’m taking the time to polish it a little before a wider release.

                                      1. 3

                                        NYC! \m/

                                        1. 2

                                          I’m in upstate New York, home of 30% of the state’s population and 90% of its surface area. =)

                                        1. 6

                                          I’ve been hacking together a “virtualized GSM phone” (this thing really does not have a name) using a Huawei 3G modem, a Raspberry Pi and some Scala code. I’m scratching an itch with this project, namely that paying for roaming is silly.