1. 5

    Anyone know if it is possible to make multiple dynamically linked rust libs that are usable from C without including the rust runtime multiple times?

    1. 6
• Rust doesn’t guarantee a stable ABI. You can link Rust’s libstd dynamically, but the library will be very version-specific. You can get away with this if you’re the one distributing it and all the libraries that depend on it, together.

      • You can make one dynamic C ABI library that consists of multiple static Rust libraries. LTO will dedupe them.

      1.  

        You can make one dynamic C ABI library that consists of multiple static Rust libraries. LTO will dedupe them.

        Right, but if two of your downstream dependencies both write a .so file in rust, won’t it all break down as you have two copies of the std allocator, one in each .so file?

        1. 5

No. It’s no different than having Rust’s allocator (which may not be your system’s allocator) and C’s allocator. When you cross FFI boundaries, you generally try not to assume any particular allocator. It’s up to each library to allocate memory internally and expose its own free routines. The same goes for Rust when exposing a C interface.
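For concreteness, a minimal sketch of that pattern (the Thing type and the function names here are made up for illustration): the library allocates with whatever allocator it was built with, hands out an opaque pointer, and exposes its own free routine.

    // Built as a cdylib; callers treat *mut Thing as an opaque handle.
    pub struct Thing {
        value: u64,
    }

    #[no_mangle]
    pub extern "C" fn thing_new(value: u64) -> *mut Thing {
        // Allocated with this library's (Rust's) allocator.
        Box::into_raw(Box::new(Thing { value }))
    }

    #[no_mangle]
    pub extern "C" fn thing_value(thing: *const Thing) -> u64 {
        // Caller promises the pointer came from thing_new and hasn't been freed yet.
        unsafe { (*thing).value }
    }

    #[no_mangle]
    pub extern "C" fn thing_free(thing: *mut Thing) {
        if !thing.is_null() {
            // Reconstructing the Box means the allocator that allocated it also frees it.
            unsafe { drop(Box::from_raw(thing)) };
        }
    }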

      2.  

        Rust doesn’t have a runtime, in the sense that Python or Go have a runtime, but it does have a standard library that gets statically linked into the resulting binary.

        1.  

That is a runtime. I’m confused how this can work without causing multiple redefinitions of the stdlib symbols.

          1.  

As long as these aren’t publicly exposed (defined as static, in C terms), you’re good to go. You can also use no_core and avoid using any of the Rust stdlib in the library.

            1.  

If you are going to do this, you might want to define your exposed API surface area with a common naming convention (my_api_* or something), then use a linker version script to ensure that only the symbols you expect to export are actually exported.

This looks possible in Rust: https://users.rust-lang.org/t/linker-version-script/26691 For information about linker scripts: https://ftp.gnu.org/old-gnu/Manuals/ld-2.9.1/html_node/ld_25.html

I do this with C++ libraries. I’ll statically link libstdc++ into libwhatever.so, then hide all of the stdc++ symbols, libgcc symbols, etc. My libraries usually only have dynamic dependencies on libc. I have no idea if you can replicate this pattern in Rust, I haven’t tried yet, but doing this has been tremendously helpful for deploying libraries in the environment I am deploying them to.
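For reference, a hedged sketch of what such a version script might look like with GNU ld, assuming the my_api_* convention from above; per the linked thread it can be passed to rustc with something like -C link-arg=-Wl,--version-script=exports.map.

    /* exports.map (hypothetical): export only the public API, hide everything else,
       including any statically linked libstd/libgcc symbols. */
    {
        global:
            my_api_*;
        local:
            *;
    };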

          2.  

The obvious option is to dynamically link the Rust standard library as well: rustc test.rs --crate-type=dylib -C prefer-dynamic (or similarly passing those options through cargo). Or to use no_std.

            I don’t know if there is a more elegant way though.
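For the cargo route, the equivalent is roughly this (a sketch, assuming a lib crate; other profile details omitted):

    # Cargo.toml: produce a Rust dylib instead of the default rlib
    [lib]
    crate-type = ["dylib"]

and then build with RUSTFLAGS="-C prefer-dynamic" cargo build, so libstd is linked dynamically rather than statically.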

            1.  

              Does that work in the presence of generics?

              1.  

                It should, though it’s probably duplicating the asm versions of the monomorphized methods in both libs.

                1.  

                  No it does not, because you cannot represent generics in C’s ABI. You can probably compile the standard lib as a Rust dylib, but then you don’t get a stable ABI and it’s of limited use.

            1. 30

              To me the big deal is that Rust can credibly replace C, and offers enough benefits to make it worthwhile.

              There are many natively-compiled languages with garbage collection. They’re safer than C and easier to use than Rust, but by adding GC they’ve exited the C niche. 99% of programs may work just fine with a GC, but for the rest the only practical options were C and C++ until Rust showed up.

              There were a few esoteric systems languages or C extensions that fixed some warts of C, but leaving the C ecosystem has real costs, and I could never justify use of a “weird” language just for a small improvement. Rust offered major safety, usability and productivity improvements, and managed to break out of obscurity.

              1. 39

Ada provided everything except ADTs and linear types, including seamless interoperability with C, 20 years before Rust. Cyclone was Rust before Rust, and it was abandoned in a state similar to where Rust was when it took off. Cyclone is dead, but Ada got a built-in formal verification toolkit in its latest revision; for some, that alone can be a reason to pick it over anything else for a new project.

I have nothing against Rust, but the reason it’s popular is that it came at the right time, in the right place, from a sufficiently big-name organization. It’s one of the many languages based on those ideas that, fortunately, happened to succeed. And no, when it first got popular it wasn’t really practical. None of these points makes Rust bad. One should just always see the bigger picture, especially when it comes to heavily hyped things. You need to know the other options to decide for yourself.

                Other statically-typed languages allow whole-program type inference. While convenient during initial development, this reduces the ability of the compiler to provide useful error information when types no longer match.

Only in languages that cannot unambiguously infer the principal type. Whether to make a tradeoff between that and support for ad hoc polymorphism or not is subjective.

                1. 15

I saw Cyclone when it came out, but at the time I dismissed it as “it’s C, but weird”. It had the same basic syntax as C, but added lots of pointer sigils. It still had the same C preprocessor and the same stdlib.

Now I see it has a feature set much closer to Rust’s (tagged unions, patterns, generics), but Rust “sold” them better. Rust used these features for Result, which is a simple yet powerful construct. Cyclone could have done that, but didn’t. It kept nullable pointers and added Null_Exception.
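For anyone who hasn’t run into it, the construct in question is tiny (a trivial sketch):

    // Result is just a tagged union plus pattern matching: the caller has to
    // acknowledge the error case before it can touch the value.
    fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
        s.parse::<u16>()
    }

    fn main() {
        match parse_port("8080") {
            Ok(port) => println!("listening on {}", port),
            Err(e) => eprintln!("bad port: {}", e),
        }
    }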

                  1. 10

                    Ada provided everything except ADTs and linear types

                    Unfortunately for this argument, ADTs, substructural types and lifetimes are more exciting than that “everything except”. Finally the stuff that is supposed to be easy in theory is actually easy in practice, like not using resources you have already cleaned up.
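A trivial sketch of what “easy in practice” means here: the compiler rejects the use outright, rather than leaving it to code review or a crash.

    fn main() {
        let buf = String::from("some resource");
        drop(buf); // cleanup happens here; `buf` is moved away
        // println!("{}", buf); // uncommenting this fails to compile:
        //                      // error[E0382]: borrow of moved value: `buf`
    }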

                    Ada got a built-in formal verification toolkit in its latest revision

                    How much of a usability improvement is using these tools compared to verifying things manually? What makes types attractive to many programmers is not that they are logically very powerful (they are usually not!), but rather that they give a super gigantic bang for the buck in terms of reduction of verification effort.

                    1. 17

                      I would personally not compare Ada and Rust directly as they don’t even remotely fulfill the same use-cases.

Sure, there have been languages that did X, Y, Z before Rust (the project itself doesn’t claim to have invented the parts of the language that existed elsewhere in the past), but the actual distinguishing factor for Rust, the one that places it in an entirely different category from Ada, is how accessible and enjoyable it is to interact with while providing those features.

If you’re in health or aeronautics, you should probably be reaching for the serious, deep toolkit provided by Ada, and I’d probably side with you in saying those people should have been doing that for the last decade. But Ada is really not for the average engineer. It’s an amazing albeit complex language that represents not only a long history of incredible engineering but also a very real barrier to entry, one simply incomparable to Rust’s.

                      If, for example, I wanted today to start writing from scratch a consumer operating system, a web browser, or a video game as a business venture, I would guarantee you Ada would not even be mentioned as an option to solve any of those problems, unless I wanted to sink my own ship by limiting myself to pick from ex-government contractors as engineers, whose salaries I’d likely be incapable of matching. Rust on the other hand actually provides a real contender to C/C++/D for people in these problem spaces, who don’t always need (or in some cases, even want) formal verification, but just a nice practical language with a systematic safety net from the memory footguns of C/C++/D. On top of that, it opens up these features, projects, and their problem spaces to many new engineers with a clear, enjoyable language free of confusing historical baggage.

                      1. 6

                        Have you ever used Ada? Which implementation?

                        1. 15

                          I’ve never published production Ada of any sort and am definitely not an Ada regular (let alone pro) but I studied and had a fondness for Spark around the time I was reading “Type-Driven Development with Idris” and started getting interested in software proofs.

In my honest opinion, the way the base Ada language is written (simple, and plain-operator heavy) lends itself really well to extension languages, but it can also make it difficult for beginners to distinguish the class of concept in use at times, whereas Rust’s syntax has a clear and immediate distinction between blocks (the land of namespaces), types (the land of names), and values (the land of data). In terms of cognitive load, then, it feels as though these two languages are communicating at different levels. Rust communicates in the mode of raw values and their manipulation through borrows, while the lineage of Ada languages communicates at a level that, in my amateur Ada-er view, centers on expressing properties of your program (and I don’t just mean the Spark stuff, obviously). I wasn’t even born when Ada was created, so I can’t say for sure without becoming an Ada historian (not a bad idea…), but this sort of seems like a product of Ada’s heritage (just as Rust’s syntax is so obviously written to look like C++’s).

To try and clarify this ramble of mine: in my schooling experience, many similarly young programmers of my age are almost exclusively taught to program at an elementary level of abstract instructions with the details of those instructions removed, and then, after being taught a couple of type-level incantations, get a series of algorithms and their explanations thrown at their face. Learning to consider their programs specifically in terms of expressing properties of those programs’ operations becomes a huge step out of that starting box (one some don’t leave long after graduation). I think something Rust’s syntax does well (if possibly by mistake) is fool the amateur user into expressing properties of their programs by accident, while that expression feels like just a routine step on the way to the meat of a program’s procedures. It feels to me that expressing those properties is intrinsic to the language of speaking Ada, and thus presents a barrier intrinsic to the programmer’s understanding of their work, which, given a different popular curriculum, could probably be rendered as weak as paper to break through.

                          Excuse me if these thoughts are messy (and edited many times to improve that), but beyond the more popular issue of familiarity, they’re sort of how I view my own honest experience of feeling more quickly “at home” in moving from writing Rust to understanding Rust, compared to moving from just writing some form of Ada, and understanding the program I get.

                      2. 5

                        Other statically-typed languages allow whole-program type inference. While convenient during initial development, this reduces the ability of the compiler to provide useful error information when types no longer match.

Only in languages that cannot unambiguously infer the principal type. Whether to make a tradeoff between that and support for ad hoc polymorphism or not is subjective.

                        OCaml can unambiguously infer the principal type, and I still find myself writing the type of top level functions explicitly quite often. More than once have I been guided by a type error that only happened because I wrote the type of the function I was writing in advance.

                        At the very least, I check that the type of my functions match my expectations, by running the type inference in the REPL. More than once have I been surprised. More than once that surprise was caused by a bug in my code. Had I not checked the type of my function, I would catch the bug only later, when using the function, and the error message would have made less sense to me.

                        1.  

                          At the very least, I check that the type of my functions match my expectations, by running the type inference in the REPL

                          Why not use Merlin instead? Saves quite a bit of time.

                          That’s a tooling issue too of course. Tracking down typing surprises in OCaml is easy because the compiler outputs type annotations in a machine-readable format and there’s a tool and editor integrations that allow me to see the type of every expression in a keystroke.

                          1.  

                            Why not use Merlin instead? Saves quite a bit of time.

I’m a dinosaur who didn’t take the time to even learn of Merlin’s existence. I’m kinda stuck in Emacs’ Tuareg mode. Works for me for small projects (all my OCaml projects are small).

                            That said, my recent experience with C++ and QtCreator showed me that having warnings at edit time is even more powerful than a REPL (at least as long as I don’t have to check actual values). That makes Merlin look very attractive all of a sudden. I’ll take a look, thanks.

                      3. 5

                        Rust can definitely credibly replace C++. I don’t really see how it can credibly replace C. It’s just such a fundamentally different way of approaching programming that it doesn’t appeal to C programmers. Why would a C programmer switch to Rust if they hadn’t already switched to C++?

                        1. 41

                          I’ve been a C programmer for over a decade. I’ve tried switching to C++ a couple of times, and couldn’t stand it. I’ve switched to Rust and love it.

                          My reasons are:

                          • Robust, automatic memory management. I have the same amount of control over memory, but I don’t need goto cleanup.
                          • Fearless multi-core support: if it compiles, it’s thread-safe! rayon is much nicer than OpenMP.
                          • Slices are awesome: no array to pointer decay. Work great with substrings.
                          • Safety is not just about CVEs. I don’t need to investigate memory murder mysteries in GDB or Valgrind.
                          • Dependencies aren’t painful.
                          • Everything builds without fuss, even when supporting Windows and cross-compiling to iOS.
• I can add two signed numbers without UB, and checking if they overflow isn’t a party trick (see the sketch after this list).
                          • I get some good parts of C++ such as type-optimized sort and hash maps, but without the baggage C++ is infamous for.
                          • Rust is much easier than C++. Iterators are so much cleaner (just a next() method). I/O is a Read/Write trait, not a hierarchy of iostream classes.
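On the overflow bullet above, a minimal sketch of what that looks like with plain std:

    fn main() {
        let (a, b) = (i32::MAX, 1);

        // Overflow is a checkable condition, not undefined behaviour.
        match a.checked_add(b) {
            Some(sum) => println!("sum = {}", sum),
            None => println!("overflow!"),
        }

        // Or keep the wrapped result alongside a flag.
        let (wrapped, overflowed) = a.overflowing_add(b);
        println!("wrapped = {}, overflowed = {}", wrapped, overflowed);
    }

(A plain a + b is defined too: it panics on overflow in debug builds and wraps in release builds, rather than being UB.)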
                          1. 5

                            I also like Rust and I agree with most of your points, but this one bit seems not entirely accurate:

                            Fearless multi-core support: if it compiles, it’s thread-safe! rayon is much nicer than OpenMP.

                            AFAIK Rust:

                            • doesn’t guarantee thread-safety — it guarantees the lack of data races, but doesn’t guarantee the lack of e.g. deadlocks;
                            • guarantees the lack of data races, but only if you didn’t write any unsafe code.
                            1. 18

                              That is correct, but this is still an incredible improvement. If I get a deadlock I’ll definitely notice it, and can dissect it in a debugger. That’s easy-peasy compared to data races.

                              Even unsafe code is subject to thread-safety checks, because “breaking” of Send/Sync guarantees needs separate opt-in. In practice I can reuse well-tested concurrency primitives (e.g. WebKit’s parking_lot) so I don’t need to write that unsafe code myself.

Here’s an anecdote: I wrote some single-threaded batch-processing spaghetti code. Since each item was processed separately, I decided to parallelize it. I changed iter() to par_iter() and the compiler immediately warned me that in one of my functions I’d used a 3rd-party library, which used an HTTP client library, which used an event loop library, which stored some event loop data in a struct without synchronization. It pointed out exactly where and why the code was unsafe, and after fixing it I had an assurance the fix worked.
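Roughly the kind of change described above, as a sketch (assumes the rayon crate; the per-item work here is a stand-in):

    use rayon::prelude::*; // rayon assumed as a dependency

    fn process(item: &str) -> usize {
        item.len() // stand-in for the real per-item work
    }

    fn main() {
        let items = vec!["one", "two", "three"];

        // Sequential: items.iter().map(|s| process(s)).sum::<usize>()
        // Parallel: compiles only if everything reachable from the closure is Send/Sync.
        let total: usize = items.par_iter().map(|s| process(s)).sum();
        println!("total = {}", total);
    }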

                              1. 5

                                I share your enthusiasm. Just wanted to prevent a common misconception from spreading.

Here’s an anecdote: I wrote some single-threaded batch-processing spaghetti code. Since each item was processed separately, I decided to parallelize it. I changed iter() to par_iter() and the compiler immediately warned me that in one of my functions I’d used a 3rd-party library, which used an HTTP client library, which used an event loop library, which stored some event loop data in a struct without synchronization. It pointed out exactly where and why the code was unsafe, and after fixing it I had an assurance the fix worked.

                                I did not know it could do that. That’s fantastic.

                              2. 8

                                Data races in multi-threaded code are about 100x harder to debug than deadlocks in my experience, so I am happy to have an imperfect guarantee.

                                guarantees the lack of data races, but only if you didn’t write any unsafe code.

                                Rust application code generally avoids unsafe.

                                1.  

                                  Data races in multi-threaded code are about 100x harder to debug than deadlocks in my experience, so I am happy to have an imperfect guarantee.

                                  My comment was not a criticism of Rust. Just wanted to prevent a common misconception from spreading.

                                  Rust application code generally avoids unsafe.

                                  That depends on who wrote the code. And unsafe blocks can cause problems that show in places far from the unsafe code. Meanwhile, “written in Rust” is treated as a badge of quality.

                                  Mind that I am a Rust enthusiast as well. I just think we shouldn’t oversell it.

                                2. 7

                                  guarantees the lack of data races, but only if you didn’t write any unsafe code.

                                  As long as your unsafe code is sound it still provides the guarantee. That’s the whole point, to limit the amount of code that needs to be carefully audited for correctness.

                                  1.  

                                    I know what the point is. But proving things about code is generally not something that programmers are used to or good at. I’m not saying that the language is bad, only that we should understand its limitations.

                              3. 10

This doesn’t really match what we see in our experience: a lot of organisations are investigating replacing C, and Rust is on the table.

                                One advantage that Rust has is that it actually lands between C and C++. It’s pretty easy to move towards a more C-like programming style without having to ignore half of the language (this comes from the lack of classes, etc.).

                                Rust is much more “C with Generics” than C++ is.

                                We currently see a high interest in the embedded world, even in places that skipped adopting C++.

                                I don’t think the fundamental difference in approach is as large as you make it (sorry for the weak rebuttal, but that’s hard to quantify). But also: approaches are changing, so that’s less of a problem for us, as long as we are effective at arguing for our approach.

                                1.  

                                  It’s just such a fundamentally different way of approaching programming that it doesn’t appeal to C programmers. Why would a C programmer switch to Rust if they hadn’t already switched to C++?

                                  Human minds are sometimes less flexible than rocks.

That’s why we still have that stupid Qwerty layout: popular once for mechanical (and historical) reasons, used forever since. As soon as the mechanical problems were fixed, Sholes himself devised a better layout, which went unused. Much later, Dvorak devised another better layout, and it is barely used today. People thinking in Qwerty simply can’t bring themselves to take the time to learn the superior layout. (I know: I’m in a similar situation, though my current layout is not Qwerty.)

I mean, you make a good point here. And that’s precisely what makes me sad. I just hope this lack of flexibility won’t prevent C programmers from learning superior tools.

(By the way, I would choose C over C++ in many cases; I think C++ is crazy. But I also know ML (OCaml), a bit of Haskell, a bit of Lua… and that gives me perspective. Rust as I see it is a blend of C and ML, and though I have yet to write Rust code, the code I have read so far was very easy to understand. I believe I can pick up the language pretty much instantly. In my opinion, C programmers who only know C, awk and Bash are unreasonably specialised.)

                                  1.  

                                    I tried to switch to DVORAK twice. Both times I started to get pretty quick after a couple of days but I cheated: if I needed to type something I’d switch back to QWERTY, so it never stuck.

                                    The same is true of Rust, incidentally. Tried it out a few times, was fun, but then if I want to get anything useful done quickly it’s just been too much of a hassle for me personally. YMMV of course. I fully intend to try to build something that’s kind of ‘C with lifetimes’, a much simpler Rust (which I think of as ‘C++ with lifetimes’ analogously), in the future. Just have to, y’know, design it. :D

                                    1.  

                                      I too was tempted at some point to design a “better C”. I need:

                                      • Generics
                                      • Algebraic data types
                                      • Type classes
• Coroutines (for I/O and network code; I need a way out of raw poll(2))
                                      • Memory safety

                                      With the possible exception of lifetimes, I’d end up designing Rust, mostly.

                                      1.  

                                        I agree that you need some way of handling async code, but I don’t think coroutines are it, at least not in the async/await form. I still feel like the ‘what colour is your function?’ stuff hasn’t been solved properly. Any function with a callback (sort with a key/cmp function, filter, map, etc.) needs an async_ version that takes a callback and calls it with await. Writing twice as much code that’s trivially different by adding await in some places sucks, but I do not have any clue what the solution is. Maybe it’s syntactic. Maybe everything should be async implicitly and you let the compiler figure out when it can optimise things down to ‘raw’ calls.

                                        shrug

                                        Worth thinking about at least.

                                2. 5

                                  I agree with @milesrout. I don’t think Rust is a good replacement for C. This article goes into some of the details of why - https://drewdevault.com/2019/03/25/Rust-is-not-a-good-C-replacement.html

                                  1. 16

Drew has some very good points. It’s a shame he ruins them with all the other ones.

                                    1. 23

                                      Drew has a rusty axe to grind: “Concurrency is generally a bad thing” (come on!), “Yes, Rust is more safe. I don’t really care.”

                                      Here’s a rebuttal of that awful article: https://telegra.ph/Replacing-of-C-with-Rust-has-been-a-great-success-03-27 (edit: it’s a tongue-in-cheek response. Please don’t take it too seriously: the original exaggerated negatives, so the response exaggerates positives).

                                      1. -4

                                        Drew is right and this article you link to is just blatant fanboyism. It’s the classic example of fanboyism because it tries to respond to every point, yet some of them are patently true. Like, really? You can’t argue that Rust is more portable than C on the basis that there’s a little bit of leaky abstraction over Windows-specific stuff in its standard library. C is just demonstrably more portable.

                                        It criticises C for not changing enough, but change is bad and C89 is all C ever needed in terms of standardisation for the most part. About the only useful thing added since then was stdint.h. -ftrapv exists and thus wanky nonsense about signed overflow being undefined is invalid.

                                        I love this bit in particular:

                                        In C I could use make, gnu make, cmake, gyp, autotools, bazel, ninja, meson, and lots more. The problem is, C programmers have conflicting opinions on which of these is the obvious right choice, and which tools are total garbage they’ll never touch.

                                        In Rust I can use Cargo. It’s always there, and I won’t get funny looks for using it.

                                        In C you can use whatever you like. In Rust, if you don’t like Cargo, you just don’t use Rust. That’s the position I’m in. This isn’t better.

                                        1. 10

                                          I didn’t read that post as blatant fanboyism, but if someone’s positive and successful experience with Rust is fanboyism, let’s agree to disagree for now.

                                          It criticises C for not changing enough, but change is bad and C89 is all C ever needed in terms of standardisation for the most part.

                                          Change isn’t necessarily bad! With a few exceptions for libraries/applications opting into unstable features, you can compile and use the same Rust code that was originally authored in 2015. However, some of the papercuts that people faced in the elapsed time period were addressed in a backwards-compatible way.

                                          About the only useful thing added since then was stdint.h. -ftrapv exists and thus wanky nonsense about signed overflow being undefined is invalid.

Defaults matter a great deal. People have spent a heroic amount of work removing causes of exploitable behavior “in-tree” (as much as “in-tree” exists in C…) with LLVM/ASAN, and even more work out-of-tree with toolkits like CBMC, but C is still not a safe language. There’s a massive amount of upfront (and continuous!) effort needed to keep a C-based project safe, whereas Rust works for me out of the box.

                                          In C you can use whatever you like. In Rust, if you don’t like Cargo, you just don’t use Rust. That’s the position I’m in. This isn’t better.

My employer has a useful phrase that I’ll borrow: “undifferentiated heavy lifting”. I view deciding which build system I should use for a project as “undifferentiated heavy lifting”, as Cargo covers 90-95% of the use cases I need. The remainder is either patched over using ad-hoc scripts or there is an upcoming RFC addressing it. This allows me to focus on my project instead of spinning cycles wrangling build systems! That being said, I’ll be the first to admit that Cargo isn’t the perfect build system for every use case, but for my work (and increasingly, for several organizations at my employer), Cargo and Rust are an excellent replacement for C.

                                          1. 8

                                            let’s imagine I download some C from github. How do I build it?

                                            hopefully it’s ./configure && make && make install, but maybe not! Hopefully I have the dependencies, but maybe not! Hopefully if I don’t have the dependencies they are packaged for my distro, but maybe not!

                                            let’s imagine I download some rust from github. How do I build it?

                                            cargo build --release

                                            done

                                            I know which one of those I prefer, personally

                                            1.  

                                              You read the README. It says what you need to do.

cargo build --release

                                              This ease-of-use encourages stuff like needing to compile 200+ Rust dependencies just to install the spotifyd AUR package. It’s a good thing for there to be a bit of friction adding new dependencies, in my opinion.

                                              1. 12

                                                So the alternative that you propose is to:

                                                1. Try to figure out which file(s) (if any) specify the dependencies to install
2. Figure out what those dependencies are called on your platform, or whether they even exist.
                                                3. Figure out what to do when they don’t exist, if you can compile them from source, how, etc
                                                4. Figure out which versions you need, because the software may not work with the latest version available on your platform
                                                5. Figure out how to install that older version without breaking whatever your system may have installed, making sure all your linker flags and what not are right, etc
                                                6. Figure out how to actually configure/install the darn thing, which at this point is something you have probably lost interest in.

Honestly, the argument that ease of use leads to 200+ dependencies is a weak one. Even if all projects suffered from this, from the user’s perspective it’s still easier to just run cargo build --release and be done with it. Even if it takes 10 minutes to build, that’s probably far less time than doing all of the above steps manually.

                                                1. 5

                                                  Dude everyone here has had to install C software in some environment at some point. Like we all know it’s not “just read the docs”, and you know we know. What’s the point of pretending it’s not a nightmare as a rule?

                                              2. 5

                                                Sorry you got downvoted to oblivion. You make some good points, but you also tend to present trade-offs and opinions as black-and-white facts. You don’t like fanboyism, but you also speak uncritically about C89 and C build systems.

For example, -ftrapv exists and indeed catches overflows at run time, but it doesn’t override the C spec that defines signed overflow as UB. Optimizers take advantage of that, and will remove naive checks such as if (a>0 && b>0 && a+b<0), because C allows treating them as impossible. It’s not “wanky nonsense”. It’s a real C gotcha that has led to exploitable buffer overflows.

                                                1. 5

                                                  -ftrapv exists and thus wanky nonsense about signed overflow being undefined is invalid.

Nope, the existence of this opt-in flag doesn’t make the complaints about signed overflow nonsensical. When I write a C library, I don’t control how it will be compiled and used, so if I want any decent amount of portability, I cannot assume -ftrapv will be used. For instance, someone else might be using -fwrapv instead, so they can check overflows more easily in their application code.

                                                  In C you can use whatever you like.

So can I. So can they. Now good luck integrating 5 external libraries that use, say, CMake, the autotools, and Ninja. When there’s one true way to do it, we can afford lots of simplifying assumptions that make even a non-ideal one true way much simpler than something like CMake.

                                                  (By the way, it seems that in the C and C++ worlds, CMake is mostly winning, as did the autotools before, and people will look at you funny for choosing something else.)

                                                  1.  

                                                    I think I’m done discussing anything remotely controversial on this website. I’m going to get banned or something because people keep flagging my comments as ‘incorrect’ when they’re literally objective fact just because they can’t handle that some people don’t like Rust. It’s just sad. I thought this site was meant to be one where people could maturely discuss technical issues without fanboyism but it seems like while that’s true of most topics, when it comes to Rust it doesn’t matter where you are on the internet: the RESF is out to get you.

                                                    It’s not like I’m saying ‘RUST BAD C GOOD’ or some simplistic nonsense. I’ve said elsewhere in the thread I think it’s a great alternative to C++, but it’s just so fundamentally different from C in so many ways that it doesn’t make sense to think of it as a C replacement. I’d love to see a language that’s more like ‘C with lifetimes’ than Rust which is ‘C++ with lifetimes’. Something easier to implement, more portable, but with those memory safety guarantees.

                                                    1. 10

                                                      I thought this site was meant to be one where people could maturely discuss technical issues

                                                      It is. Maturity implies civility, in which almost every comment I read of yours is lacking, regardless of topic. Like, here, there are plenty of less abrasive ways of wording what you tried to say (“wanky nonsense” indeed). Then you assume that you are being downvoted because you hurt feelings with “objective facts” and everyone who disagreed with you is a fanboy, without considering that you could simply be wrong.

                                                      Lobste.rs has plenty of mature technical discussion. This ain’t it.

                                                      1.  

                                                        Drew is right and this article you link to is just blatant fanboyism.

That is not at all objective. You are leaning far out of the window, and people didn’t appreciate it.

                                                        It’s fine to be subjective, but if you move the discussion to that field, be prepared for the response to be subjective.

                                                        1.  

A lot of the design of Rust seems to be adding features to help with inherent ergonomics issues in the lifetime system; out of interest, what are some of the things Rust does (or doesn’t do) that you would change to make it more minimalistic?

I think it’s right not to view Rust as a C replacement in the general case. I kind of view it as an alternative to C++ for programmers who wanted something ‘more’ than C can provide but bounced off C++ for various reasons (complexity, pitfalls, etc).

                                                          1.  

                                                            I’d like you to stay.

                                                            Before clicking “Post” I usually click “Preview” and read what I wrote. If you think this is a good idea, feel free to copy it :)

                                                      2. 7

                                                        So many bad points from this post.

• We can safely ignore the “features per year” comparison, since the documentation it is based on doesn’t follow the same conventions. I’ll also note that, while a Rust program written last year may look outdated (I personally don’t know Rust enough to make such an assessment), it will still work (I’ve been told breaking changes are extremely rare).

                                                        • C is not really the most portable language. Yes, C and C++ compilers, thanks to having decades of work behind them, target more devices than everything else put together. But no, those platforms do not share the same flavour of C and C++. There are simply too many implementation defined behaviours, starting with integer sizes. Did you know that some platforms had 32-bit chars? I worked with someone who worked on one.

I wrote a C crypto library, and went out of my way to ensure the code was very portable. And it is. Embedded developers love it. There was no way, however, to ensure my code was fully portable. I right-shift negative integers (implementation-defined behaviour), and I use fixed-width integers like uint8_t (not supported on the DSP I mentioned above).

                                                        • C does have a spec, but it’s an incomplete one. In addition to implementation defined behaviour, C and C++ also have a staggering amount of undefined and unspecified behaviour. Rust has no spec, but it still tries to minimise undefined behaviour. I expect this point will go away when Rust stabilises and we get an actual spec. I’m sure formal verification folks will want to have a verified compiler for Rust, like we currently have for C.

• C has many implementations… and that’s actually a good point.

• C has a consistent & stable ABI… and so does Rust, somewhat? OK, it’s opt-in, and it’s contrived. My point is, Rust does have an FFI which allows it to talk to the outside world (a small sketch of that opt-in follows at the end of this comment). It doesn’t have to be at the top level of a program. On the other hand, I’m not sure what the point of a stable ABI between Rust modules would be. C++ at least seems to be doing fine without one.

• Rust compiler flags aren’t stable… and that’s a good point. They should probably stabilise at some point. On the other hand, having one true way to manage builds and dependencies is a godsend. Whatever we’d use stable compile flags for, we probably don’t want to depart from that.

• Parallelism and concurrency are unavoidable. They’re not a bad thing; they’re the only thing that can help us cheat the speed of light, and with it single-threaded performance. The ideal modern computer is more likely a high number of in-order cores, each with a small amount of memory, and an explicit (exposed to the programmer) cache hierarchy. Assuming, that is, that performance and energy consumption matter more than compatibility with existing C (and C++) programs. Never forget that current computers are optimised to run C and C++ programs.

• Not caring about safety is stupid. Or selfish. Security vulnerabilities are often mere externalities, which you can ignore if they don’t damage your reputation to the point of affecting your bottom line. Yay Capitalism. More seriously, safety is a subset of correctness, and correctness is the main point of Rust’s strong type system and borrow checker. C doesn’t just make it difficult to write safe programs, it makes it difficult to write correct programs. You wouldn’t believe how hard that is. My crypto library had to resort to Valgrind, sanitisers, and the freaking TIS interpreter to root out undefined behaviour. And I’m talking about “constant time” code, with fixed memory access patterns. It’s pathologically easy to test, yet writing tests took as long as writing the code, possibly longer. Part of the difficulty comes from C, not just the problem domain.

                                                        Also, Drew DeVault mentions Go as a possible replacement for C? For some domains, sure. But the thing has a garbage collector, making it instantly unsuitable for some constrained environments (either because the machine is small, or because you need crazy performance). Such constrained environment are basically the remaining niche for C (and C++). For the rest, the only thing that keeps people hooked on C (and C++) are existing code and existing skills.
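For the ABI bullet above, the opt-in looks roughly like this (a sketch; the names are made up):

    // Opting into the C ABI explicitly: layout and symbol name become stable,
    // at the cost of giving up Rust-only features (generics, niche layouts) at the boundary.
    #[repr(C)]
    pub struct Sample {
        pub id: u32,
        pub value: f64,
    }

    #[no_mangle]
    pub extern "C" fn sample_value(s: *const Sample) -> f64 {
        unsafe { (*s).value }
    }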

                                                        1.  

                                                          But the thing has a garbage collector, making it instantly unsuitable for some constrained environments (either because the machine is small, or because you need crazy performance).

                                                          The Go garbage collector can be turned off with debug.SetGCPercent(-1) and triggered manually with runtime.GC(). It is also possible to allocate memory at the start of the program and use that.

                                                          Go has several compilers available. gc is the official Go compiler, GCC has built-in support for Go and there is also TinyGo, which targets microcontrollers and WASM: https://tinygo.org/

                                                          1.  

Can you realistically control allocations? If we have ways to make sure all allocations are either explicit or on the stack, that could work. I wonder how contrived that would be, though. The GC is on by default, and that’s got to affect idiomatic code in a major way. To the point where disabling it probably means you don’t have the same language any more.

Personally, to replace C, I’d rather have a language that disables GC by default. If I am allowed to have a GC, I strongly suspect there are better alternatives than Go. (My biggest objection being “lol no generics”. And if the designers made that error, that kind of casts doubt on their ability to properly design the rest of the language, and I lose all interest instantly. Though if I were writing network code, I would also say “lol no coroutines” at anything designed after 2015 or so.)

                                                      3.  

I don’t think replacing C is a good use case for Rust though. C is relatively easy to learn, read, and write, at least to the level where you can write something simple. In Rust this is decidedly not the case. Rust is much more like a safe C++ in this respect.

                                                        I’d really like to see a safe C some day.

                                                        1. 5

                                                          Have a look at Cyclone mentioned earlier. It is very much a “safe C”. It has ownership and regions which look very much like Rust’s lifetimes. It has fat pointers like Rust slices. It has generics, because you can’t realistically build safe collections without them. It looks like this complexity is inherent to the problem of memory safety without a GC.

                                                          As for learning C, it’s easy to get a compiler accept a program, but I don’t think it’s easier to learn to write good C programs. The language may seem small, but the actual language you need to master includes lots of practices for safe memory management and playing 3D chess with the optimizer exploiting undefined behavior.

                                                      1. 7

                                                        Not an expert - just a person who uses search engines - high level thoughts only.

                                                        The biggest challenge for a modern search engine is “spam” filtering. I.e. removing sites whose sole purpose in life is to get clicks, and who do so by generating fake content that users aren’t interested in.

These sites come in a lot of different forms. Some of them are cloning content from other sites and putting tons of ads and tracking in it. Some of them are using code to generate completely artificial content, with the goal of looking human enough that search engines pick it up and show it for keywords. Some of them are sites which have a mix of valuable legitimate content and (often user-submitted) spam - I’m looking at Quora in particular here. Lots of them are legitimate sites that are generating uninteresting blogs to try and convince Google to rank them higher. Etc.

                                                        I strongly believe that this is why Google has become worse, not better. Their old algorithms were too vulnerable to spam. Their new algorithms are still somewhat vulnerable, but also sacrifice a lot to avoid the worst of it. They can’t easily win because spammers are reacting to everything that they do.

I’m not sure how you handle this problem. If you could do significantly better than Google, it would be a huge competitive advantage, but it’s not clear that that is possible. Alternatively you could just try to match Google’s anti-spam and compete in other dimensions (e.g. duckduckgo seems to be taking this route, with privacy).

If I was developing a search engine I would probably place a huge emphasis on links that I can be reasonably sure are organic. Scrape sites like reddit, tumblr, twitter, wikipedia, github, stackoverflow, etc. that have reasonably human (rather than robot) communities, put emphasis on the links they provide, and try to piggyback on their antispam. For example, if a post/poster gets deleted for posting spam on reddit, take note and be more cautious with the pages/domains they linked to.

                                                        I’d probably cheat and whitelist the bigger domains that we as humans recognize as not-spam, as not-spam.

I’d give my users convenient ways of giving feedback that a certain site is spam. Possibly explicitly, possibly in the form of buttons that do things like “exclude this domain from my search results, permanently”; maybe do something like copy Gmail’s “mark spam” button, and have a “spam” tab on my search results.

                                                        1. 1

                                                          Ah yes, good points. I’ve seen a couple StackExchange clones and bogus Quora threads while searching for obscure coding issues.

                                                          I like your ideas on flagging results and piggybacking, thanks!

                                                        1. 7

                                                          It’s getting better, but for a long time it was almost impossible to code in Swift if you weren’t on a Mac.

As ever, Apple will feed and care for its stable of developers for its platform, and remain an insular ecosystem.

                                                          Not a bad thing, they’re certainly making plenty of money off of this model, but if you’re someone who, like I am, is interested in developing applications with wider reach there’s not much draw there.

                                                          1. 2

Swift for TensorFlow looks like it might be a really interesting alternative to Python frameworks sometime soon.

                                                            That’s the only reason I’ve found to use Swift outside of Apple’s ecosystem.

                                                          1. 4

                                                            I wonder if we should have a “paywall” tag

                                                            1. 2

I would second that. If you’re OK with dodging them, the NYTimes paywall can be avoided by clicking the Firefox reader mode button.

                                                              1. 1

                                                                Ah, sorry, it’s not paywalled in the UK.

                                                                1. 1

I’m in the UK and it was paywalled for me.

                                                              1. 7

                                                                For languages that allow function application without brackets, unary operators should look just like functions, no?

                                                                Maybe that’s something to consider as a more general solution, requiring brackets for a prefix unary operator if you require brackets to apply functions.

                                                                1. 10

I write a fair bit of Rust and have mostly ignored binary size as a thing to optimise for so far, but I’d like to learn more. Assuming the extra size ends up as executable code and not just stuff in the data segment, I’m curious: what are the drawbacks to a bigger binary/more generated code? One possible reason that comes to mind is less efficient use of CPU caches if code is more “spread out”. Are there RAM consumption consequences of a larger binary?

                                                                  1. 15

                                                                    There’s also the wasm case, which is going to become increasingly important.

The most interesting single feature of the feedback from this post is the extremely wide variance in how much people care about this. It’s possible I personally care more just because my experience goes back to 8-bit computers, where making things fit in 64k was important (plus a pretty long stretch programming the 8086 and 80286). Other people are like, “if it’s a 100M executable but otherwise high quality code, I’m happy.” That’s helping validate my decision to include the localization functionality.

                                                                    1. 5

My experience with Rust binary size has led me to assume that it’ll be a non-starter in WASM, which is a pity. It’s going to be hard to compete with JS for startup time.

                                                                    2. 5

                                                                      There’s a finite-size instruction cache. You don’t have to fit the whole program in it, but hot-looping tasks should.

                                                                      At least for inlining, the compiler is supposed to understand the tradeoff between code size and performance.

                                                                      The biggest bloat potential comes from large generic functions used with many different parameters. For each type you get new copy of the function, so you may end up with 8 versions of the same function. cargo-bloat is useful for finding these.

                                                                      Libstd has this pattern:

// Thin generic shim: one copy exists per caller type, but it only converts and forwards.
fn generic(arg: impl AsRef<Foo>) {
    non_generic(arg.as_ref());
}

// The actual work: compiled once, shared by every instantiation of `generic`.
fn non_generic(arg: &Foo) {…}
                                                                      

This way, even if you call generic with 10 different types, you still get one copy of non_generic, and the shims will probably inline down to nothing.

                                                                      1. 5

I’ve often heard the critique of C++ that classes are “slow” because of dynamic dispatch. I’m not sure it’s a real problem. Rust does monomorphization by default and is critiqued for bloated binaries. It’s not an easy tradeoff. Also, it looks like dynamic dispatch with dyn trait objects is considered somewhat “non-idiomatic”.

                                                                        I think at-runtime features such as trait objects and Rc should be more widely used, at least in application code.
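A minimal sketch of the tradeoff being discussed (illustrative types only): the generic version gets a fresh copy of the machine code per concrete type, while the dyn version is compiled once and dispatches through a vtable.

    trait Draw {
        fn draw(&self);
    }

    struct Circle;
    struct Square;
    impl Draw for Circle { fn draw(&self) { println!("circle"); } }
    impl Draw for Square { fn draw(&self) { println!("square"); } }

    // Monomorphized: one copy per concrete T this is used with.
    fn render_static<T: Draw>(items: &[T]) {
        for item in items { item.draw(); }
    }

    // Dynamic dispatch: compiled once, calls go through a vtable.
    fn render_dyn(items: &[&dyn Draw]) {
        for item in items { item.draw(); }
    }

    fn main() {
        render_static(&[Circle, Circle]);
        let mixed: [&dyn Draw; 2] = [&Circle, &Square];
        render_dyn(&mixed);
    }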

                                                                        1. 6

Exactly. Rust has the reverse problem. There’s a hyperfocus on never doing dynamic dispatch, and I feel it’s underused. I have yet to see a performance problem directly related to trait objects outside of a narrow set of programs, mainly synthetic benchmarks.

Also, some libraries tend to be hypergeneric, which means their code can only be generated when assembling the main program, hampering early compilation in the dependency crates.

                                                                        2. 1

                                                                          The biggest bloat potential comes from large generic functions used with many different parameters.

                                                                          In this case is there a drawback aside from the larger binary? I.e. is there a runtime performance impact of the larger binary?

                                                                          1. 5

If every single program on a system requires hundreds of megabytes, things just become unwieldy. Cutting waste is good; pointlessly large programs waste bandwidth everywhere: disk, RAM, and the network.

Ever wonder why Windows Update takes minutes instead of seconds? I often do…

                                                                            1. 4

                                                                              Cool, I can get behind that. Just trying to work out if the primary motivation is disk use or something else.

                                                                              1. 3

Disk, but also things like your CPU’s instruction cache, which is a scarce resource. You want a hot loop’s instructions to fit entirely in it, so each iteration of the loop doesn’t involve re-pulling the loop’s instructions from a slower level of the memory hierarchy.

Excessive bloat from calls to many different monomorphized versions of functions, plus inlining, could potentially mean that that kind of important code won’t fit in the cache.

                                                                            2. 1

                                                                              I haven’t looked for or seen measurements showing (or not showing) this, but one assumes that this could result in more frequent instruction cache misses if you are commonly using the different monomorphized versions of the generic function.

                                                                        1. 5

I felt right at home up to 200ms: the standard experience of logging in to a terminal on a US machine from my crappy Greek internet + wifi.

                                                                          1. 9

You should try out mosh; it makes SSH over a crappy connection way better.

                                                                            1. 2

I once worked with a customer from Australia. That was worse than 200ms, I think. I really feel bad for people in Australia if it’s like that for them all the time.

                                                                              1. 2

                                                                                (Australian here) Thankfully, I rarely have cause to SSH into a box that isn’t based in Australia. The few times I do, it’s pretty bad, but I’m generally only there for a minute.

                                                                            1. 1

                                                                              Genuine question: Was calling to order a pizza not an option?

                                                                              1. 6

                                                                                No* it wasn’t.

                                                                                *Yes, but it would have cost more. They had an online only discount.

                                                                                1. -1

I would think so; however, it’s drastically less convenient.

                                                                                1. 9

On the broad view, this is cool and I’m glad to have informed people in Congress.

On the other hand, this guy is behaving like one of those sorry people who ask super-technical questions after a talk in order to show how smart they are, and I don’t think he added anything to the discussion at all.

                                                                                  1. 22

                                                                                    He added one bloody important point: they are using Rust nightly. This is a fairly unstable dependency, and more importantly, this is a dependency that accepts outside contributions in a way that may not be as controlled as the core Libra codebase.

                                                                                    It would be a freaking security risk if they pushed that to production. They are using Rust nightly now, but they probably should move to stable before they go to production, or at least freeze to a particular commit until the Libra association actually reviews the newer commits.

                                                                                    I was actually disappointed that the Libra guy didn’t have an answer.

                                                                                    1. 8

I strongly disagree that nightly Rust in and of itself is a security risk. Using a conservative set of #![feature] flags and a pinned version of nightly, I think it’s honestly more stable than many languages.

All stable is, really, is a mutually agreed-upon pinned version of nightly with no feature flags.
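
For context, the nightly-only part of such a crate is literally the opt-in attribute at the crate root; a minimal sketch (the feature name is only illustrative, and you would pin the exact nightly toolchain alongside it):

    // Hypothetical crate root built with a pinned nightly toolchain.
    // Stable rustc refuses to compile the crate because of this attribute alone;
    // nightly accepts it and enables only the features listed here, which keeps
    // a "conservative set of #![feature] flags" easy to audit.
    #![feature(try_blocks)]

    fn main() {
        println!("built on a pinned nightly with one explicitly enabled feature");
    }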

                                                                                      1. 5

I strongly disagree that nightly Rust in and of itself is a security risk.

Understand where the congressman is coming from: Libra is (will be) a currency. Depending on adoption, it could become quite strategic, on par with weapons and the electric grid. The possibility of malicious contributions to the codebase itself or its dependencies should indeed be investigated. Such malicious contributions have happened before: remember that NPM package that was stealing wallet private keys from the projects that used it? (Or maybe it was mining?) It went unnoticed for months.

Rust is one such dependency. Rust nightly is the least-reviewed part of Rust (besides experimental branches). Of course this makes some people nervous. Even if nightly Rust is, as you say, not a security risk, why it is not a security risk should be explained to those nervous people.

                                                                                        1. 1
• “No feature flags” is a big deal, since anything not behind a feature flag is supposed to be stable and frozen.

                                                                                          • Stable also goes through the Beta period, when new features aren’t allowed in but regression fixes are.

                                                                                          1. 1

                                                                                            Not quite. Bug fix point releases are not uncommon.

                                                                                          2. 4

                                                                                            He was also concerned that one of the top committers for one of the Libra projects was from Nigeria. I think we assume good intent and don’t discriminate based on where one lives. But since he’s coming from a “national security” background, I can understand why he might be more suspicious (justified or not) of “foreign” contributors.

                                                                                          3. 2

I think it’s good to remember that Congresspeople’s offices do follow up in writing for details that couldn’t be covered in these televised hearings. I hope his office does follow up and gets the answers he seeks.

                                                                                          1. 5

                                                                                            What I appreciate most of all is that nobody apparently thought about how to design USB-C plugs so they didn’t slide out.

                                                                                            1. 3

                                                                                              Sweet baby Jesus why would you want that? Personally I’m annoyed at how difficult it is to pull out a USB-C compared to the (now) old-fashioned MagSafe. When it’s finally time to replace the wife’s old laptop with whatever’s current at the time, I fear for its life.

                                                                                              1. 2

                                                                                                I loved MagSafe connectors and thought them up years before Apple introduced them: magnets get rid of mechanical wear-and-tear while making it easier to plug in! I’m guessing they weren’t used in USB-3 because of the connector size and magnetic interference.

                                                                                              2. 2

                                                                                                It seems to me that they did. My regular phone charger slides out way too easily, but my laptop charger (when either plugged into my laptop, or my phone) is quite good at staying in until I try and pull it out.

                                                                                                1. 3

Have you checked your phone’s USB-C port and maybe tried cleaning it with a toothpick? :)

Not sure this is intentional, but with my Nexus 5X the lint seems to settle in such a way that the USB-C cable slides out with the slightest touch once enough has accumulated. The connection is never broken in a way you’d notice, just the mechanical “lock”.

                                                                                                2. 2

                                                                                                  I’ve personally always experienced that issue more often with micro-A than I have with any of my devices with C.

                                                                                                  1. 1

                                                                                                    I’ve been so annoyed by this that I’m pondering whether USB-C cables can be used for electronics which don’t get a gentle treatment all of the time (badges, smaller electronic cards, …).

                                                                                                  1. 26

                                                                                                    curl https://getcroc.schollz.com | bash

                                                                                                    And the developer of this project expects me to believe them when they say their project is secure?

                                                                                                    1. 9

                                                                                                      agreed. this is malpractice.

                                                                                                      1. 6

It’s no worse (in fact, slightly better) than downloading an opaque binary. Since the entire purpose of the script is to download an opaque binary, I don’t see much of an issue.

I suppose you could quibble that this curl command requires trusting getcroc.schollz.com as well as github.com and whoever uploaded the binary to github.com… but that’s a pretty minor quibble.

                                                                                                        1. 7

As a disclosure, I wrote this at some point when I was annoyed; I think it presents a couple of reasons why this is stupid. https://0x46.net/thoughts/2019/04/27/piping-curl-to-shell/

                                                                                                          1. 5

I am also not a fan of this way of installing software. However, your arguments seem to miss the mark in various ways.

The first argument boils down to “If you don’t use TLS it’s not secure”. This is true for any downloaded software (unless you download signatures, which you again have to obtain in a secure manner).

Hidden text attacks: again, this is independent of piping to a shell. As soon as you have any kind of example that people copy and paste (e.g. apt-get install foo, which for example could curl and pipe to bash) this is true.

User-Agent based attacks are also independent of piping to a shell per se. The user agent can always be switched out, including if you do a wget or curl directly.

“A simple matter of not knowing what the script is going to do”: this feels even more true of downloading any kind of binary. A script actually makes this harder for the attacker, since you could just curl it and print it. So it makes more sense for an attacker to hide malicious activity in the binary.

All of these statements are correct, and I think this is a great post about various attacks that one should be aware of, but they are correct even for other forms of obtaining software. That is why we should not teach people to be skeptical only of curl https://example.com/script.sh | bash, but of any way of obtaining software without taking precautions.

                                                                                                            1. 4

                                                                                                              Standard man-in-the-middle attacks

                                                                                                              This does use TLS.

                                                                                                              Hidden text attacks

                                                                                                              It’s a markdown file in GitHub; it should be pretty safe from that.

                                                                                                              User-Agent based attacks

                                                                                                              The malware could just be included in the binary, which you can’t easily inspect; you’re trusting them either way if you install that. If you don’t want to, there are also instructions for installing it from Homebrew, Scoop, and from source.

                                                                                                              Partial content returned by the server

The functionality of this script is contained within a function, so it won’t run if you execute a copy that’s been cut off partway through.

                                                                                                              A simple matter of not knowing what the script is going to do

You can read it; it downloads and installs the program to /usr/local/bin. Also, if you don’t need to know beforehand, the script prints what it’s doing to the terminal as it runs.

                                                                                                              1. 1

                                                                                                                It’s a markdown file in GitHub;

It’s not downloaded from a GitHub URL, making it harder to verify.

                                                                                                                1. 1

                                                                                                                  What do you mean by that? Are you referring to the file you curl not being hosted on GitHub? The point I was responding to was referring to tricking users to copy malicious text when they think they’re copying the curl | sh command.

                                                                                                                  1. 1

                                                                                                                    curl https://getcroc.schollz.com | bash

                                                                                                                    Does not appear to grab from GitHub

                                                                                                                    1. 1

                                                                                                                      So I did understand you correctly, then. Did you get my reply?

                                                                                                                      1. 1

                                                                                                                        Hidden text attacks, as I understand them, are downloads that say they are doing one thing but work differently when piped to a shell vs. piped to stdout.

                                                                                                                        1. 1

                                                                                                                          Please read the article I was replying to. The kind of attack they referred to is displaying harmless text that results in a different, harmful command when copied to the clipboard.

                                                                                                            2. 2

                                                                                                              I’m assuming a bit of humor regarding the word “securely” in the title is intended, as plenty of things install themselves in this way.

                                                                                                            3. 1

                                                                                                              it works 99.99% of the time

                                                                                                            1. 6

                                                                                                              I’m one of those people who learned English with British textbooks and for me that’s a mega useful feature, as I constantly opt for the British spellings, even though I know in Computer Science we’re not supposed to use them.

                                                                                                              I am spelling British and am rather unapologetic about it. Languages have dialects and people should spell in any colour they want to.

(I see the case for consistency, but as long as the particular codebase hasn’t standardised on a spelling, I won’t go around figuring out which one it uses.)

                                                                                                              1. 2

I’m Canadian; even for words that we consistently spell the British way (which is most of them, I think), I spell American in source code. Most source code seems to be American, and I want to minimize the number of identifier mispredictions for anyone editing it, including myself.

                                                                                                                1. 1

                                                                                                                  It’s the same for non-English speakers. Few people write code in anything other than American English.

                                                                                                                  1. 1

But don’t they generally also spell American English outside the code, then?

I don’t (English is my second language; my spelling is British, but in code I go American for the same reasons gpm outlined). However, I come into contact with a lot of non-native speakers, and I feel that pretty much all of them lean American in both their spelling and pronunciation in all contexts.

                                                                                                                    1. 1

                                                                                                                      My point was primarily about English, not specifically only the American dialect.

                                                                                                                2. 2

I’m an American and I use American spellings in my own code and writing about code. That said, I see no reason why British people or non-native speakers who learned British rather than American spellings should feel the need to switch. I think nearly every educated AmEng speaker is familiar enough with British usages (and vice versa) that there’s no serious barrier to comprehension.

                                                                                                                  As a matter of fact, in my readings of the recently-deceased Joe Armstrong’s papers and other writing about Erlang, he (a British person who worked in Sweden) did in fact use British spellings like “colour” without any kind of problem.

                                                                                                                1. 8

                                                                                                                  I’ve not (yet) been able to watch the video - no transcript is available for me and I’m not in a situation where I can listen (a11y people take note).

                                                                                                                  If “fragmentation” and “commercial power plays” aren’t strong contenders I’ll be very disappointed, and Canonical are one of the major offenders here. There was no need to push Upstart when the rest of the world was leaning into systemd (be that right or wrong), likewise Mir vs. wayland, bzr vs. git, etc.

                                                                                                                  Canonical has a remarkable desire, it seems to me, to want to be like redhat - build a big userbase, control the software they use and peel it away from the traditional foss consensus, and then dominate the ability to provide services for that software. I’m not sure it’s healthy for users, nor the concept of “linux on the desktop”.

                                                                                                                  1. 14

                                                                                                                    There was no need to push Upstart when the rest of the world was leaning into systemd (be that right or wrong), likewise Mir vs. wayland, bzr vs. git, etc.

                                                                                                                    Upstart came along quite a bit before systemd, 4 years I think it was in fact. As I recall, the early systemd blog posts even referenced upstart regarding lessons learned (good and bad).

                                                                                                                    As for Canonical pushing upstart – for a while it also wasn’t assured that systemd pickup would be as quick or as pervasive as it ended up being – Redhat pushed it pretty hard, with Fedora being the first major distro to adopt it.

                                                                                                                    That said, Canonical certainly /did/ drag their feet converting, but then again.. look how long it took debian to change! It was something like a year after ubuntu did? (EDIT: I read the wrong date here. See here for update)

                                                                                                                    Not that I defend upstart /at all/. I found it to be pretty darn buggy in fact.

                                                                                                                    1. 9

                                                                                                                      Upstart came along quite a bit before systemd, 4 years I think it was in fact. As I recall, the early systemd blog posts even referenced upstart regarding lessons learned (good and bad).

                                                                                                                      To add to this, Lennart states: “Before we began working on systemd we were pushing for Canonical’s Upstart to be widely adopted (and Fedora/RHEL used it too for a while). However, we eventually came to the conclusion that its design was inherently flawed at its core…”

                                                                                                                      That said, Canonical certainly /did/ drag their feet converting, but then again.. look how long it took debian to change! It was something like a year after ubuntu did?

                                                                                                                      Not sure where you’re getting that from, but Mark Shuttleworth announced that Ubuntu would adopt systemd the same week as the Debian decision.

                                                                                                                      1. 4

                                                                                                                        Not sure where you’re getting that from, but Mark Shuttleworth announced that Ubuntu would adopt systemd the same week as the Debian decision.

                                                                                                                        Ah. I couldn’t remember the timeline, and looked at the wikipedia ubuntu version history. It looks like I accidentally, and certainly erroneously, used the announcement date instead of the actual release date.

                                                                                                                        Thanks for noticing and correcting!

                                                                                                                    2. 6

                                                                                                                      I doubt I’ll ever watch this video, since I don’t really care for videos. But I’d happily read a transcript.

                                                                                                                      Mostly I’m curious about what Mark’s definition of “success” would be. I don’t have stats handy, but in my limited experience Linux desktop usage seems pretty strong in certain technical and professional settings. Meanwhile desktop OS usage of any variety has declined in relative terms, thanks to the rise of the mobile platforms. If he means success as a consumer OS, it’s not clear to me that any players besides Canonical were ever tilting at that particular windmill.

                                                                                                                      1. 8

So much this. People keep making the “Year of the Linux Desktop” joke mostly for historical reasons, I think. So far as I can tell, GNU/Linux-based desktop and laptop systems have been very good for quite some time and very usable (and used) by non-enthusiasts, and the desktop as a target is in strong decline.

                                                                                                                        1. 5

Aside from hardware issues, the one thing that’s bad about them is that they keep making changes that break fundamental parts of the platform users rely on, or just don’t QA-test those parts enough. This is easy for them to avoid, the way the proprietary platforms do with their stronger assurances of backward compatibility. I mean, sure, Microsoft tried doing something similar with Windows 8, but look how that went. The Linux desktops should make sure basic functionality always works and is consistent over time.

A recent example that just happened: I can’t open PDFs with Firefox on Ubuntu. The JS reader always clobbers the text of abstracts I copy and paste in ways the native apps don’t. So, if I want to use the text, I re-open the PDF in a native reader from within the JS reader with the open/save button, or in Firefox with the “ask” feature. Suddenly, I can’t do that. It’s also suggesting opening it with a shell script, “env,” or finding the specific executable in the Linux filesystem (what Windows/Mac user would…?). I’ll debug this new problem later. Meanwhile, yet another critical part of my workflow has broken for no justifiable reason, if any QA is getting done at all.

I can’t remember that kind of stuff happening on Windows (NT onward) until Vista’s issues with hardware. Aside from bloat, it worked fine, with some apps needing WinXP compatibility mode. Simple fix. I’m likewise not knocking Linux on hardware issues: just the ones developers can avoid easily. Wireless suddenly stops working, PDFs won’t open, weird interactions between the three ways of managing packages I need, and so on. The best proof is probably that several small distros did fix some of these problems despite not having millions of dollars.

                                                                                                                        2. 2

                                                                                                                          If he means success as a consumer OS, it’s not clear to me that any players besides Canonical were ever tilting at that particular windmill.

                                                                                                                          Strictly speaking, Chrome OS put a Linux kernel (albeit a somewhat non-standard one) on all sorts of consumer machines.

                                                                                                                          1. 9

                                                                                                                            While loads of people talk about ChromeOS and Android in these discussions, I think “Linux on the desktop” is more about “free software on the desktop”, and they don’t really hit the mark, though Android has absolutely been a pretty important step.

                                                                                                                        3. 1

A proper transcript would be much nicer, but YouTube does do a fairly decent job at automatically captioning the video, so you can turn that on and watch it silently if you really want…

                                                                                                                        1. 11

                                                                                                                          No mention of the xpi direct link that was posted everywhere? That worked on e.g. Firefox for Android even when studies didn’t? Is there a reason we’re still not talking about this method? Why is it only studies or dot release upgrade?

                                                                                                                          1. 3

                                                                                                                            I agree that a mention of it would have been nice, but when working at Mozilla’s scale I think it makes sense to focus on only solutions that can be deployed automatically. I doubt even 1% of users clicked on that direct link, or read any of the mozilla communications about the bug.

                                                                                                                            1. 10

The initial communications included “if you’ve turned studies off, turn them back on and wait six hours”, which would have been a great place to say “or click this link to install the intermediate cert xpi now”. There’s a peculiar Fight Club-style bubble of silence around it.

                                                                                                                              1. 2

                                                                                                                                I assume they don’t want to teach people to install XPIs from websites other than AMO?

                                                                                                                                1. 1

                                                                                                                                  Although I did read that that xpi was signed by AMO, otherwise Firefox would have rejected it.

                                                                                                                                2. 1

That, and how it happened in general, as you said on HN. This is really weird for a web company with a recent focus on privacy and hundreds of millions of dollars depending on screw-ups like this not happening. I previously speculated that their management barely invests in security versus what they could be doing. They don’t care enough. Adding this to the list.

                                                                                                                              2. 1

                                                                                                                                I read that solution wouldn’t work on vanilla Firefox, just the unstable nightly builds or something, so I didn’t even bother trying it. Did it work on vanilla Firefox for you?

                                                                                                                                1. 1

This is different from the about:config setting change to disable checking. This is the xpi that the study installed, with the updated certs. (Yes, it worked fine.)

                                                                                                                              1. 10

                                                                                                                                To re-enable all disabled non-system addons you can do the following. I am not responsible if this fucks up your install:

                                                                                                                                Open the browser console by hitting ctrl-shift-j

Copy and paste the following code and hit enter. Until Mozilla fixes the problem you will need to redo this once every 24 hours:

// Re-enable *all* extensions

async function set_addons_as_signed() {
    // These modules are only reachable from the privileged browser console.
    Components.utils.import("resource://gre/modules/addons/XPIDatabase.jsm");
    Components.utils.import("resource://gre/modules/AddonManager.jsm");
    let addons = await XPIDatabase.getAddonList(a => true);

    for (let addon of addons) {
        // The add-on might have vanished, we'll catch that on the next startup
        if (!addon._sourceBundle.exists())
            continue;

        // Only touch add-ons whose signature could not be verified.
        if (addon.signedState != AddonManager.SIGNEDSTATE_UNKNOWN)
            continue;

        // Mark the add-on as not requiring a signature, notify listeners,
        // then recompute its disabled state so it comes back to life.
        addon.signedState = AddonManager.SIGNEDSTATE_NOT_REQUIRED;
        AddonManagerPrivate.callAddonListeners("onPropertyChanged",
                                               addon.wrapper,
                                               ["signedState"]);

        await XPIDatabase.updateAddonDisabledState(addon);
    }
    XPIDatabase.saveChanges();
}

set_addons_as_signed();
                                                                                                                                

                                                                                                                                Edit: Cleaned up code slightly

                                                                                                                                1. 11

                                                                                                                                  Or, just go get the hotfix directly and install it. This also worked for my Firefox Android install.

                                                                                                                                  https://storage.googleapis.com/moz-fx-normandy-prod-addons/extensions/hotfix-update-xpi-intermediate%40mozilla.com-1.0.2-signed.xpi

                                                                                                                                  1. 1

                                                                                                                                    Absolutely the better solution now that that exists!

                                                                                                                                  2. 5

                                                                                                                                    To get the command input line in the browser console one might need to set devtools.chrome.enabled in about:config to true.

                                                                                                                                    1. 1

Will this affect the addon signature check once Mozilla resolves the issue? The folks at Mozilla must be having a hard time; I just woke up to an addon-less browser, and it seems the issue is pretty widespread.

                                                                                                                                      1. 1

                                                                                                                                        It shouldn’t, but I can’t make any guarantees. I certainly wouldn’t complain to Mozilla if something broke after.

It’s basically an adapted version of the verifySignatures function, except instead of setting signedState to the result of the signature check, it sets it to a “doesn’t need a signature” value if it is currently at a “couldn’t verify signature” value.

                                                                                                                                    1. 6

                                                                                                                                      So, as noted on HN this isn’t affecting everyone yet.

Actually, I’m surprised it’s affecting so many people. It looks to me like extension signatures are only checked once every 24 hours (source), and assuming a random distribution of when the check fires, it should only be broken for about 13% of installs so far. But from the comments I’m seeing, it sounds like it’s much greater.

                                                                                                                                      I’m wondering if people would be willing to post the value of app.update.lastUpdateTime.xpi-signature-verification from their about:configs, which should show when the timer last fired for you. I’m curious if there is some weird distribution. (I’m not involved with fixing the problem or anything though, this is just for my own curiosity).

Edit: And if it hasn’t broken yet for you, I think (but I’m very much not sure) setting that preference to 1556940100 should keep it working until 24 hours from now. And if you keep updating that value every 23 hours to the output of date '+%s' until it is fixed via a Firefox update, it should keep working indefinitely.

                                                                                                                                      Edit2: I think you need to restart the browser as well after updating the preference for the above idea to work.

                                                                                                                                      1. 4

                                                                                                                                        What do you think the rationale for rechecking signatures every X hours is?

                                                                                                                                        I understand checking at install-time, and checking at startup or load time.

                                                                                                                                        I guess this does give Mozilla a mechanism to download a CRL and use that to disable malicious addons in the field. Checking expiry may be an artifact of using a single code path for the check. (Just speculating.)

                                                                                                                                        1. 5

                                                                                                                                          Revocation is used to block malicious extensions, indeed.

                                                                                                                                          1. 3

                                                                                                                                            Perhaps the client should distinguish between extensions for which a revocation has been actively posted, and those for whom a certificate expiration deadline has passed.

                                                                                                                                            It’s totally reasonable for a user to prefer to honor the first and ignore the second.

                                                                                                                                            1. 1

                                                                                                                                              Doesn’t that open up a cut-the-browser-off-from-the-revocation-server attack?

                                                                                                                                              1. 3

                                                                                                                                                Yes, but it’s up the user to choose how to proceed in that case.

                                                                                                                                                A communications disruption doesn’t always mean invasion. :-)

                                                                                                                                        2. 2

                                                                                                                                          It’s 1556904525, 1556955570, 1556961147, 1556957388 on all of my installs (all of them are broken by now). Have fun!

                                                                                                                                        1. 6

I wonder how hard it is to strip out all the sync/pocket/etc. crap that gets pushed into FF? I just want a browser, please :(

edit: I just want a browser which displays web pages. I like Firefox, I like the engine, but why are these things built in and not a plugin? Thanks for the downvotes btw! :)

                                                                                                                                          1. 3

I’ve started trying to make a “soft” (updating) fork of Firefox like that; I haven’t published anything yet. It’s not a trivial task, unfortunately. Here are a few notes:

                                                                                                                                            • I need to be able to quickly update to new versions of firefox because updates play such a huge role in security. I’m trying to script everything I can and make it so if some patches don’t apply everything else still will to compensate.
                                                                                                                                            • Pocket is integrated into a few different parts of the browser, it isn’t just an addon. As are a few other things I’m looking at stripping out. Haven’t looked at sync but I suspect it’s pretty bad on this front.
                                                                                                                                            • The source code for firefox includes pre-compiled front end stuff, to remove pocket from the newtab page I need to re-compile the frontend code. Recompiling the frontend code does evil things like installing git hooks in any git repository above where the code lives.
                                                                                                                                            1. 1

This sounds nice (and like a lot of work… :) Would you mind submitting it to Lobsters when you feel it’s ready?

                                                                                                                                              1. 2

                                                                                                                                                I definitely will :)

                                                                                                                                            2. 2

I’d say start with a console browser and work up to the features you need?

                                                                                                                                              https://wiki.archlinux.org/index.php/List_of_applications/Internet#Console

There are other repackagings of FF and Chromium; YMMV on their various states of release and maintenance. Attempting to harness the FF/Chromium beast is difficult no matter the maintainer, I’m afraid, so I tend to sidestep when fed up.

                                                                                                                                              1. 2

Sorry for the second reply, but as a middle ground, if GUI rendering is needed, “uzbl” is a choice I’ve enjoyed.

                                                                                                                                                https://www.uzbl.org/

                                                                                                                                                Not associated, just a happy user.

                                                                                                                                                1. 1

Thanks for the replies! Most of the stripped-down browsers use a WebKit descendant (which by itself is ok), but I’d like some competition in browser engines. A console browser doesn’t help me much, as SPAs don’t work with them; the same goes for NetSurf and Dillo.

                                                                                                                                            1. 3

I think kindness is the wrong word. A highly critical comment is often, by its nature, not very kind, but it can be valuable. It also doesn’t necessarily have the abrasive tone you are talking about, though.

                                                                                                                                              How about “obnoxious”? Or maybe “nasty”?

                                                                                                                                              1. 3

A disturbing number of kernel buffer overflows are not classified as security issues; unless those can all only be triggered by root, I doubt I agree with the classification scheme.

                                                                                                                                                1. 1

I included overall stats as well as “security fix”-only stats to avoid relying on OpenBSD’s own categories.