Threads for alilleybrinker

  1. 18

    Jonathan Turner, one of the creators of RLS, had a nice tweet about this deprecation yesterday:

    I’m proud to have created the Rust Language Server (RLS) with @nick_r_cameron and equally proud today to see it deprecated and replaced by Rust Analyzer. It served its role well, helping to kickstart IDE support for @rustlang.

    1. 2

      That is a wholesome response. Thanks Jonathan!

    1. 2

      gibo, mentioned in the post, is pretty cool. It dumps GitHub’s recommended .gitignore file contents out, so you can kickstart a .gitignore for a new project.

      1. 2

        ❤️ gibo.

        It’s one of the shell utilities that I install when I set up a new development box. For this post I went with curl since it’s Python-specific and an easy copy/paste. The reality is that I end up typing gibo dump Python more often than ctrl+r /gitignore.

        1. 2

          There’s also gitignore.io, which allows combining and discovering different ignores.

      1. 5

        Saw it was written in Rust and wondered if it was already tracked on langs-in-rust, and it is (though I should update its star count)! https://github.com/alilleybrinker/langs-in-rust

        Cool language; seems ambitious in its design!

        1. 2

          This article doesn’t speak about cache invalidation at all - and isn’t that the hard part?

          1. 2

            Cache invalidation is hard in any language; the things that make it hard aren’t Rust-specific. This article is about how the design of Rust makes writing caches more difficult than one may expect.
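
            For a concrete taste of that difficulty, here is a sketch of the classic friction point, a get-or-compute cache method (hand-written for illustration, not taken from the article):

              use std::collections::HashMap;

              struct Cache {
                  map: HashMap<u32, String>,
              }

              impl Cache {
                  fn get_or_compute(&mut self, k: u32) -> &String {
                      // The "obvious" version is rejected: because the reference is
                      // returned, the shared borrow from `get` is treated as live
                      // until the function returns, so the insert conflicts (E0502):
                      //
                      //   if let Some(v) = self.map.get(&k) { return v; }
                      //   self.map.insert(k, format!("value for {k}"));
                      //   self.map.get(&k).unwrap()
                      //
                      // The entry API expresses the same logic in a form the borrow
                      // checker accepts:
                      self.map.entry(k).or_insert_with(|| format!("value for {k}"))
                  }
              }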

          1. 1

            On the vulnerable dependency issue, specifically around critical open source, the Open Source Security Foundation has a Securing Critical Projects Working Group which is trying to identify “critical” open source projects like Log4J, and then work with those projects to improve their security.

            1. 2
              libyasna-080c400d17ab0fbc.rlib
              libyasna-38f10d7bdca324e3.rlib
              libyasna-3bbd36df87de5542.rlib
              libyasna-4ee5bbdf3224f90b.rlib
              libyasna-53051573765968ab.rlib
              libyasna-5a757cdb54d25cfc.rlib
              

              I wonder how much of Rust’s slow build times can be attributed to such dependency bloat? It would have been an interesting experiment if Cargo could force only a single “variant” of every dependency per build (it would also have been interesting to know how many builds would not have been able to “negotiate” such a constraint). It would also have opened the door to shared libraries…

              1. 4

                cargo hakari implements automation for “the workspace hack,” a trick in Rust to ensure that, at least when you’re working in a workspace of your own crates with some common shared dependencies, those dependencies are always compiled with the same feature set to avoid duplication.

              1. 7

                Elm enforces this for packages, so there are no exceptions. This is the way to do it, and this idea should spread imo.

                The problem with adding OOP with Moose to Perl, types to Ruby, contracts to Python, or any other retroactive guarantee is that it creates a red/blue ecosystem. We could instead use Pact to test and verify the behavior of messages without caring about a red/blue ecosystem. But Pact’s approach is oriented toward network messages rather than in-memory messaging, so it might not work for all situations.

                1. 3

                  I had no idea that Elm ties major version numbers to API signatures. Fascinating. It sounds like the implication is that backwards-incompatibility should always be accompanied by an API-level change: that would be a quite narrow definition of backwards compatibility. On the other end of the spectrum, we have Hyrum’s Law, and a classic xkcd. But however you interpret it, I’d agree that Elm’s decision to impose uniformity on it is admirable.

                  Which I suppose brings us to red/blue ecosystems: indeed, most languages have packages that vary in their versioning schemes. I would like to think that much (but not all) of the benefit of adopting Contractual SemVer is realized even if you only wrote natural language contracts, and didn’t use any 3rd party contract library at all. (clients understand what behaviors they can depend on, and providers understand when to change what numbers) Again, though, it’s clearly all much better if a single version scheme is adopted across the platform.
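
                  To make the natural-language version concrete, here’s a sketch in Rust (the type and its contract are made up for illustration): the contract lives entirely in the doc comment, with an optional cheap runtime check.

                    pub struct Stack<T> {
                        items: Vec<T>,
                    }

                    impl<T> Stack<T> {
                        pub fn len(&self) -> usize {
                            self.items.len()
                        }

                        pub fn push(&mut self, item: T) {
                            self.items.push(item);
                        }

                        /// Pops the most recently pushed item.
                        ///
                        /// Contract (plain language, right here in the docs):
                        /// - Precondition: the stack is non-empty; check `len()` first.
                        /// - Postcondition: `len()` decreases by exactly one.
                        ///
                        /// Under Contractual SemVer, weakening either clause is a major bump.
                        pub fn pop(&mut self) -> T {
                            debug_assert!(!self.items.is_empty(), "contract violated: empty stack");
                            self.items.pop().expect("guaranteed by the precondition")
                        }
                    }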

                  1. 3

                    I am not sure how similar Elm is to Haskell, but Haskell types (including function signatures) are usually more than just data types, and tend to correspond to “contracts.” (This kind of idea has been recently widely adopted in other languages too, in the name of making illegal states unrepresentable, however IMHO it’s more norm and further advanced in pure functional languages.)
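
                    For instance, here’s the idea rendered in Rust (an illustrative sketch, not from the Haskell context):

                      // A "loose" shape permits states the contract forbids:
                      struct LooseResponse {
                          body: Option<String>, // exactly one of these should be Some,
                          error: Option<u32>,   // but nothing enforces it
                      }

                      // Encoding the contract in the type removes the illegal states:
                      enum Response {
                          Ok { body: String },
                          Err { code: u32 },
                      }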

                    1. 1

                      Oh! Have you seen Liquid Haskell? IMO, this gets even closer to the expressivity of contracts.

                      1. 1

                        I might have heard its name once or twice, but I’ve never tried to take a look at it. It looks really cool! If I understand correctly, it’s a sort of proof assistant, right?

                        1. 1

                          I know it’s focused on proving properties of programs, perhaps unlike a general theorem prover like Coq.

                          I think refinement types are ~close to dependent types (I only sort of follow this discussion about the differences). As a user of the type system, I find refinement types way more grok-able.

                  2. 1

                    I think Pact uses contracts in a different way from design-by-contract contracts.

                    1. 3

                      You’re correct (hi, I wrote the initial FFI for Pact’s Rust reference implementation!). Pact describes a “contract” between a provider and a consumer. A “contract” in Pact is a set of interactions. If you’re dealing with HTTP APIs, then the interaction defines an expected request and a minimal expected response. If you’re dealing with messages, the interaction describes the minimal parts of the message the consumer expects from the provider. This description of interactions is then used to test both the consumer and the provider.

                      For the consumer, Pact assumes the provider returns the expected response to a request, and tests whether the consumer correctly 1) generates the expected request, and 2) handles the assumed response.

                      For the provider, Pact assumes the consumer is sending the expected requests, and tests whether the provider returns data that is consistent with the minimal data expected for the response.

                      In either case, Pact basically lets you define the flow of interactions, and it then creates tests which mock out the “other side” of the side being tested, so that you 1) test both consumers and providers, and 2) neither relies on having a live instance of the other. Interactions are also isolated, so there aren’t dependencies where one interaction establishes state on the provider side which needs to be used by a subsequent interaction. Instead, interactions may be set up to assume specific provider state up-front.

                      At any rate, all of this is in the end about generating tests, not performing formal verification of consistency with some rigorously defined policy in a static way, or validating pre- and post-conditions à la contracts in something like Racket. Same word, different meaning.
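
                      If it helps, here’s a deliberately simplified sketch of the shape of the idea (hypothetical types and checks, not Pact’s actual API): one interaction description drives the tests on both sides, with the other side mocked out.

                        struct Interaction {
                            expected_request: &'static str, // what the consumer should send
                            minimal_response: &'static str, // the least the provider must return
                        }

                        const GET_USER: Interaction = Interaction {
                            expected_request: "GET /users/42",
                            minimal_response: "{\"id\":42}",
                        };

                        // Consumer side: mock the provider with the canned response, then
                        // check the consumer builds the right request and handles the reply.
                        fn test_consumer(build_request: impl Fn() -> String, handle: impl Fn(&str) -> bool) {
                            assert_eq!(build_request(), GET_USER.expected_request);
                            assert!(handle(GET_USER.minimal_response));
                        }

                        // Provider side: replay the expected request, then check the answer
                        // includes at least the minimal expected response.
                        fn test_provider(respond: impl Fn(&str) -> String) {
                            let answer = respond(GET_USER.expected_request);
                            assert!(answer.contains("\"id\":42")); // crude stand-in for subset matching
                        }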

                      Edit: Clarified my contribution to the Pact project.

                      1. 1

                        Yeah, Pact is in theory not coupled to HTTP. https://github.com/reevoo/pact-messages is in-memory. Same paradigm. This Ruby gem is out of date though; I’d like to bring this up in their Slack. I wonder if there’s some internal API to use. I’ve looked, but can’t find one.

                        Are contracts in Racket only in Racket?

                        The cool thing about Pact is that it works for mobile, web, desktop, any language that has bindings. Just put it in your CI pipeline and you get a can-i-deploy check. You just need to be honest and put in the work of writing the tests, like at other testing levels. It integrates really well.

                  1. 6

                    source-based code coverage

                    I’m probably dense, but I don’t get it. I read the Rust release notes, I google the clang feature. Code coverage of what? Test coverage?

                    1. 18

                      It’s in the linked docs, but “coverage” here means “the code is executed.” It tells you where your dead code is, and can be summarized into four useful high-level statistics:

                      • Function coverage is the percentage of functions executed at least once.
                      • Instantiation coverage is the percentage of function instantiations executed at least once (this is for generics and macro-generated functions).
                      • Line coverage is the percentage of lines of code executed at least once.
                      • Region coverage is the percentage of code regions executed at least once (there’s a particular definition of a “region” but basically it’s more granular than the function-level metrics).
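
                      To illustrate the line-vs-region distinction with a quick sketch (hand-written example, not from the docs): both branches below share one line, so a test that exercises only one branch still reports 100% line coverage, while region coverage catches the miss.

                        fn sign(n: i32) -> &'static str {
                            if n >= 0 { "non-negative" } else { "negative" }
                        }

                        #[test]
                        fn covers_only_one_region() {
                            // Executes the `if` line, so line coverage is 100%, but the
                            // `else` region never runs, so region coverage is below 100%.
                            assert_eq!(sign(1), "non-negative");
                        }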
                      1. 3

                        Specifically to the question: it’s quite easy to record coverage in a binary (i.e. which instructions were executed): you add some instrumentation on each branch to record what the target was, and you can then generate a list of the basic blocks that were executed. You then need some extra tooling to map these back to locations in the source code. This is what source-based code coverage means.

                        Often, it’s implemented by inserting the instrumentation in the front end (which has more relevant information for the coverage), though it’s also possible to use source locations in debug info to map from pure binary coverage info back to the source code. The latter approach is nice if you can make it work because it can be non-disruptive: if your CPU and OS support modern tracing functionality, they can generate basic-block-level traces for arbitrary binaries, and you can then map these back to the source code for the exact version that was executed.
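
                        A toy sketch of what that counter instrumentation amounts to (hand-written; a real compiler inserts the counters for you and keeps the counter-to-source mapping on the side):

                          use std::sync::atomic::{AtomicU64, Ordering};

                          static HIT: [AtomicU64; 3] =
                              [AtomicU64::new(0), AtomicU64::new(0), AtomicU64::new(0)];

                          fn classify(n: i32) -> &'static str {
                              HIT[0].fetch_add(1, Ordering::Relaxed); // block: function entry
                              if n < 0 {
                                  HIT[1].fetch_add(1, Ordering::Relaxed); // block: negative branch
                                  return "negative";
                              }
                              HIT[2].fetch_add(1, Ordering::Relaxed); // block: fall-through
                              "non-negative"
                          }

                          fn main() {
                              classify(5);
                              let hits: Vec<u64> = HIT.iter().map(|c| c.load(Ordering::Relaxed)).collect();
                              println!("{hits:?}"); // [1, 0, 1]: the negative branch never ran
                          }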

                        1. 1

                          Thanks!

                      1. 10

                        Call me an optimist, but we have achieved amazing things in the last 40 years. Many of them are indirect or not mainstream, but still amazing.

                        The first indirect thing we have is really strong type systems. Every single useful feature in type systems came out of PL research, even if we don’t all use Standard ML or Haskell.

                        An example of an off-mainstream amazing achievement is seL4. OK, it’s still a lot of work to formally verify a relatively small microkernel, but we did it.

                        Project Everest has gotten verified cryptographic software into mainstream browsers.

                        TLA+ is being used at Amazon and in Elasticsearch.

                        Tools like Jepsen apply property-based testing to real-world projects.

                        There are practical successes everywhere, even if the average project doesn’t use research ideas directly.

                        1. 4

                          I think many of those fields are not part of the author’s definition of “software engineering research”.

                          I’d be surprised if they tried to argue that we have made very little progress in programming languages, cryptography, or formal methods.

                          1. 2

                            Right. What other research fields are there really? Almost everything boils down to PLs or formal methods in some way.

                            1. 7

                              Databases, engineering practices, distributed systems, CS education, most things involving performance, defect detection, version control, production surveys? Those are just the research fields I recently read papers in.

                              1. 1

                                Nice, I was honestly having trouble thinking outside of the research that I tend to look at.

                                1. 5

                                  Also, “software engineering” is a specific research area about the methods by which software is produced.

                        1. 3

                          I’m not sure I follow the logic that there is short-termism caused by publish-or-perish and that the solution should be, effectively, for the existing members of the field to voluntarily perish to make space for new entrants. Even given that people would do this, I see no reason to believe that it would work - wouldn’t the newcomers just reproduce the exact same structure, given the same pressures?

                          1. 3

                            There is one study I am aware of which indicated that research areas tend to expand when “star” researchers die. The claimed reasoning is that those researchers, because of their influence both direct and indirect, limit the ability of research in areas they personally undervalue to get funded and published.

                            That said, you’re right that this doesn’t resolve the problem of anointing “star” researchers who then bottle up opportunities in the field; it simply changes which subareas are valued.

                          1. 4

                            Very appreciative that this post focuses on the systemic issues which cause this (namely, that all incentives align toward short-term thinking, which is antithetical to good software engineering research). Wish it proposed some solutions, but even identifying a problem is useful.

                            1. 3

                              Trying to help close the lit gap for Rust was part of why I started writing Possible Rust (currently on pause because I started grad school), and it’s also motivated things like the Rust for Rustaceans book. Personally, I find writing for that audience to be really rewarding and definitely recommend it for people considering technical writing!

                              1. 9

                                This pattern of implementing things in a higher-level / easier language and then lowering for performance is also a common one in the computer vision space! It’s often much easier to prototype things in something like MATLAB before moving to OpenCV with C++. The MATLAB version lets you make sure that the concepts work or are even worth pursuing further, and then you go to OpenCV when you want it to actually run fast and be usable in other contexts.

                                1. 7

                                  Also, in the Rust world this makes me think of the often-recommended pattern of avoiding references, cloning liberally, sprinkling RefCell and Rc all over the place, and only once you’ve got it working and right, starting to reduce the unnecessary allocations.
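
                                  Something like this sketch (names made up for illustration): get it correct with cheap clones and interior mutability first, then peel the layers off where profiling says it matters.

                                    use std::cell::RefCell;
                                    use std::rc::Rc;

                                    #[derive(Clone)]
                                    struct World {
                                        scores: Rc<RefCell<Vec<u32>>>,
                                    }

                                    fn bump_all(world: &World) {
                                        for s in world.scores.borrow_mut().iter_mut() {
                                            *s += 1;
                                        }
                                    }

                                    fn main() {
                                        let world = World { scores: Rc::new(RefCell::new(vec![1, 2, 3])) };
                                        let handle = world.clone(); // clones the Rc handle, not the data
                                        bump_all(&handle);
                                        assert_eq!(*world.scores.borrow(), vec![2, 3, 4]);
                                        // Once this is correct, the Rc/RefCell layers can often be
                                        // replaced with plain references and lifetimes.
                                    }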

                                  1. 3

                                    Yes for sure, I think we need more of this to make reliable systems! I mentioned the analogy to TeX here:

                                    http://www.oilshell.org/blog/2020/05/translation-progress.html

                                    And how the Go compiler was translated from C to Go. It’s very similar because it’s not arbitrary C – you can’t translate, say, bash or sqlite to Go with their translator. Similarly, our translator is not for arbitrary Python – it’s for the (statically typed) subset that we use.

                                    1. 2

                                      This is all very cool! I’ll add that in the formal methods world, there’s also a fun approach of model-based code generation, where you write some model that you prove useful properties of, then generate code from that model, and prove that the generated code maintains all the stuff you proved about the model!

                                      I’ve actually contributed to a project that did this, though I was a developer and target language expert and definitely not one of the formal methods people doing the proofs. Fun stuff!

                                      1. 3

                                        Yes for sure! That is kind of what I was getting at with the C++ bits at the end.

                                        When many people see C++ now, they think “ewww, unsafe”. But the point is that if the C++ is generated from the Python “model”, then it retains its semantics!

                                        As the post says, you can’t express use-after-free, double-free, or any kind of memory unsafety in Python, so the generated C++ won’t have it.

                                        I guess it’s hard to explain without examples …

                                        1. 1

                                          Whoa - do you have anything else to share about a project like that? Any posts or anything?

                                          This is the exact approach that I’m taking with a programming language that I’m developing. It feels like a workflow that actually has a ton of potential, especially for people who care about quality (that’s how I ended up thinking about this).

                                          1. 1

                                            Unfortunately, no, this wasn’t an open source project and I’m not able to share any specifics about it.

                                            Your project looks really cool though! I’d like to add it to my list of languages implemented in Rust if that’s alright with you!

                                            1. 2

                                              Ah, I understand. Adding to your list is cool with me!

                                      2. 2

                                        This pattern of implementing things in a higher-level / easier language and then lowering for performance is also a common one in the computer vision space!

                                          Why not implement in a high-level but also performant language like Rust, OCaml, Swift, etc.?

                                        1. 6

                                          Nothing beats the expressiveness of using a DSL or a language that’s more suited to a particular domain. The languages you mentioned are all general purpose, which means they have a ceiling on how expressive they can be for any given domain.

                                          1. 4

                                            Yes exactly, thank you for the great answers! :) That’s what I meant in the post by “getting leverage” if the “middle language” fits the problem domain.

                                            I find that if you program mostly in one language it’s hard to see this … You “think” in that language; it becomes your world. But if you use many languages then you always see the possibility of fitting the language to the domain more closely.

                                            1. 1

                                              Sure, but writing in one of them has got to win over writing in Python/Ruby first and then rewriting in C++…

                                              1. 5

                                                Maybe. I know this was in a reply to a comment talking about writing both versions, but this article is about generating the lower-level code. So it’s not being implemented twice.

                                            2. 2

                                              I don’t do computer vision stuff anymore, but the answer is almost certainly lack of libraries. MATLAB and OpenCV have a lot of APIs that any other language would need to match to be usable, not to mention the amount of existing research code in MATLAB or OpenCV which you can “just use” if you’re working in the same language.

                                              1. 3

                                                Also I was doing this work ~8 years ago, in 2014, so Rust wasn’t even 1.0 yet.

                                          1. 25

                                            My favorite highlight from the article is the “Building code doesn’t execute it” section:

                                            It is an explicit security design goal of the Go toolchain that neither fetching nor building code will let that code execute, even if it is untrusted and malicious. This is different from most other ecosystems, many of which have first-class support for running code at package fetch time.

                                            This is something unique among programming languages, something that even Rust (which puts “security” among its core attributes) doesn’t provide.

                                            I can safely build a Go application and then run it in a separate account or under bubblewrap without the concern that the build process will trash my workbench or account. (On the other extreme end, there was one time when a Ruby dependency decided to overtly sudo without even notifying or asking for permission; I was saved by the fact that, by default, on all my systems the sudo target user is not root but nobody…) :)

                                            1. 6

                                              The Ruby situation is especially dire because Gemfiles are themselves Ruby programs, so even resolving the dependencies of a project opens you up to remote code execution!

                                              1. 3

                                                That said, I think there are reasons why projects may sometimes need build-time logic, and my long-term preference is for this to be available in Rust and other languages, but only in a sandbox with strong limitations, or even with the ability for end-users to place additional sandbox constraints or (more ideally) to relax the by-default-strict sandbox constraints.

                                                1. 4

                                                  I don’t think this will ever happen… I think most Rust developers come from two legacies: one is former C/C++ developers who are used to auto*, CMake, or plain make, and thus don’t want to give up those abilities; the other seems to come from Ruby, Python, and other interpreted languages where security is not a top priority…

                                                  I would love it if cargo (the Rust build tool) would have a build option that disables the usage of build.rs.


                                                  Now, getting back to Go, I think it’s fair to say that this decision (of not running code at build time) is also helped by the fact that a lot of libraries are written in “pure Go” and thus there is no need for any “external build” facilities.

                                                  Also, it is worth mentioning that even Go has go generate, but it is usually invoked manually by the developer, and its outputs are usually committed alongside the code, so there is no need to run it at build time.

                                                  1. 2

                                                    I would love it if cargo (the Rust build tool) would have a build option that disables the usage of build.rs.

                                                    Note that you’d also need to disable proc macros. And I fear that the number of crates which transitively use neither build.rs nor proc macros is vanishingly small :(

                                                    1. 1

                                                      I forgot about proc macros…

                                                      However, at least with regard to proc macros, I assume most of them only process the AST given as input, thus could be limited (either by forbidding the usage of certain APIs, or by something like seccomp). For the rest, perhaps the access should be limited to the current workspace (and output) directory, and disallow any other OS interactions (sockets, processes, etc.)
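
                                                      Most proc macros really are just a pure token-to-token function, which is exactly the shape such a sandbox could enforce. A minimal sketch (this would live in a crate with proc-macro = true):

                                                        use proc_macro::TokenStream;

                                                        // No file, network, or process access needed: tokens in, tokens out.
                                                        #[proc_macro]
                                                        pub fn passthrough(input: TokenStream) -> TokenStream {
                                                            input
                                                        }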

                                                      As for the rest, those that need to invoke external processes or connect to remote endpoints, perhaps their place is not in the build life-cycle, and, just like go generate, they should be extracted into a completely separate step.

                                                      1. 3

                                                        However, at least with regard to proc macros, I assume most of them only process the AST given as input, thus could be limited

                                                        watt tries to accomplish this by compiling proc macros to WebAssembly, and then executing those.

                                                        1. 2

                                                          That’s what Watt does by compiling proc macros to WebAssembly (which is naturally sandboxed).

                                                  2. 3

                                                    On the same subject, I have the feeling that Python fits in the same category with its setup.py.

                                                    (Funny enough, I think that Java, at least through Maven, doesn’t suffer from this…)

                                                    1. 3

                                                      On the same subject, I have the feeling that Python fits in the same category with its setup.py.

                                                      Python’s “wheel” (.whl) packages do not have and have never had the ability to run code during installation; they only run setup.py when building the package for distribution.

                                                      And more recently, people have been working on moving to pure declarative package-build configuration anyway.

                                                      1. 1

                                                        It’s an unfortunate fact of the world that there are still a lot of sdist-only packages, even ones that are pure Python and could easily distribute a universal wheel.

                                                  3. 3

                                                    Elm is right up there with Go in not executing code during fetch and build. I’ve even seen experiments with CLIs written in Elm where you can restrict, at the type level, what the code has access to, so that if you run a CLI written in Elm you can know it’s only touching approved files/directories.

                                                    You could maybe include Deno in here too, though it’s a runtime and not a language, because in order to execute something that wants to do IO or such, you need to explicitly allow it. You can even restrict to the directory or url it has access to.

                                                    1. 2

                                                      Huh, doesn’t Go tend to make heavy use of code generation? I guess if you check in the generated code, you technically don’t have to execute any code at build-time… but avoiding compile-time code execution by shipping build artifacts in the source repo feels like cheating.

                                                      Better than literally distributing binaries, mind you, because generated source is theoretically human-readable! But still, it feels like they only manage to build from source with no code execution by taking a bizarre definition of what “source” is.

                                                      1. 4

                                                        I guess if you check in the generated code, you technically don’t have to execute any code at build-time… but avoiding compile-time code execution by shipping build artifacts in the source repo feels like cheating.

                                                        Actually I prefer having pre-generated stuff in the repository, as opposed to having to install (and fiddle with) various obscure tools for code generation or documentation… This way, if I only need to patch some minor bug, or make some minor customization to the code, I can rebuild everything by just having Go / Rust / GCC installed.

                                                        I have the opposite experience with lots of other projects where, in order to build them, you need a plethora of Python or Ruby tools, or worse, other more esoteric ones, most of which are not available by default on many distributions…

                                                        Just imagine that you want to patch a tool that relies on serving some JS bundle. Do I want to also build an entire NodeJS project for this? Hell no! I’ll just move to another alternative… (In fact this is my preferred way to interact with the NodeJS based ecosystem: as long as it runs only in the browser, and as long as I don’t have to touch the NodeJS tooling, great! Just give me a “magic” blob! Thus I also keep a close eye on Deno…)

                                                        1. 1

                                                          This is fair, but in some cases it’s quite a pain, particularly for cross-compilation (or support for other hardware platforms in general). In the Rust crates I maintain, we generate FFI bindings for the most common targets; it would be a complete hassle to (re)generate them for all possible targets, and new ones get added regularly, so we’d have to keep on top of that as well. So we offer a feature to do that at build time, if you want to build for a platform we don’t “support”, or if you have some special sauce in your bindgen or the other tooling around it.
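
                                                          The build-time fallback is a sketch along these lines in build.rs (the feature name, header, and paths are illustrative):

                                                            use std::env;
                                                            use std::path::PathBuf;

                                                            fn main() {
                                                                // Cargo exposes enabled features to build scripts as env vars.
                                                                if env::var("CARGO_FEATURE_GENERATE_BINDINGS").is_ok() {
                                                                    let bindings = bindgen::Builder::default()
                                                                        .header("wrapper.h")
                                                                        .generate()
                                                                        .expect("could not generate bindings");
                                                                    let out = PathBuf::from(env::var("OUT_DIR").unwrap());
                                                                    bindings
                                                                        .write_to_file(out.join("bindings.rs"))
                                                                        .expect("could not write bindings");
                                                                }
                                                                // Otherwise the crate uses the pre-generated bindings committed
                                                                // to the repository for the common targets.
                                                            }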

                                                          1. 2

                                                            I agree that one can’t possibly generate artifacts for all platforms under the sun. (My observation mainly applies to portable artifacts such as JavaScript bundles, or Java jars, or man-pages, or other such resources.)

                                                            However, in your case I think it’s great that you at least generate the artifacts for the most common targets! As long as you’ve made the effort to cover 90+% of users, I think it’s enough.

                                                            My issue is with other projects out there that don’t even make this effort!

                                                    1. 12

                                                      The distributions have already picked their (often trademarked) names:

                                                      • “Red Hat Enterprise Linux”, not “Red Hat Enterprise GNU/Linux”.
                                                      • “Suse Enterprise Linux”, not “Suse Enterprise GNU/Linux”.
                                                      • “Slackware Linux”, not “Slackware GNU/Linux”.
                                                        • “Gentoo Linux”, not “Gentoo GNU/Linux”.
                                                      • “Arch Linux”, not “Arch GNU/Linux”.
                                                      • “Ubuntu”, not “Ubuntu GNU/Linux”.
                                                      • Even “Debian” with 6.0 “squeeze” stopped using “GNU/Linux” in its release names.
                                                      • etc.

                                                        Debian further used eglibc for a time, which forked from GNU libc and was not GNU software. eglibc eventually merged back in with GNU libc, but during that time Debian was not a GNU/Linux distribution by this criterion. Speaking in generic terms, you run into problems, as that article mentioned. Alpine, OpenWRT, and Android all do not use GNU libc, so by that definition they are not “GNU/Linux”.

                                                        Even if it is technically correct to refer to distributions as “GNU/Linux”, it’s also divisive. Back in 2012, Linux Weekly News senior editor Jonathan Corbet mentioned that they stopped asking the FSF for comments, specifically because of the naming controversy:

                                                      “Just to be clear on this: we stopped asking the FSF for comments many years ago because the FSF refused to talk to us without prior promises from us on what we would say and which terms we would use. We are unwilling to make such promises. If the FSF’s policy on such things has changed, we would be pleased to know about it.”

                                                      1. 6

                                                        Even if it is technically correct to refer to distributions as “GNU/Linux”

                                                          It’s also pointless. It’s like people who want to control which words and grammar rules are considered correct in a language. Languages evolve organically through their users. Words that weren’t considered native, or syntactic constructions that weren’t considered valid, become acceptable when a significant portion of a language’s users use them.

                                                          Even though it is technically valid to talk about GNU/Linux, GNU/kFreeBSD, or perhaps musl/Linux, there are other considerations in the evolution of naming. An important one is that “Linux” is just shorter and easier to pronounce than GNU/Linux. Furthermore, most people don’t really care whether it is Linux using glibc + coreutils, musl, or whatever. To them it is just Linux, not Windows or macOS. Or if they want to be more specific, they’ll refer to the product name, like “Red Hat Enterprise Linux” or “Alpine Linux”.

                                                        The world chose to call it Linux and not GNU/Linux. Live with it. Outside a very small group (who will look obnoxious to the outside world), nobody really cares.

                                                        1. 1

                                                          I think the idea is that the default taxonomy and common terms in this space lead to invalid expectations for many users, who change Linux distros and expect certain things to work the way they did on their prior GNU system.

                                                            I take the point of this article to be 1) to remind people that the presence or lack of GNU tools is relevant, and 2) to encourage separation between things that are GNU things and things that are Linux things in the way people talk about the world of Linux-based operating systems, in the hope that users have clearer expectations for what will remain the same when they change platforms.

                                                        2. 3

                                                          I don’t mind distros saying Linux instead of GNU/Linux, because a distro is a platform. The RHEL platform is not just GNU/Linux, it’s GNU + Linux + X.org + systemd + a load of other libraries and system services / configuration tools. The RHEL system includes a load of applications bundled on top of this platform. Saying RHEL or Debian Linux is strictly more specific than saying GNU/Linux (saying just Debian might be now, too - I think Debian dropped the Hurd and kFreeBSD versions).

                                                          The problem is that a lot of people use Linux to mean GNU/Linux. You have conversations like (paraphrased from real conversations I’ve had):

                                                          ‘Clang isn’t used much on Linux’

                                                          ‘You mean, apart from the couple of billion Linux systems that use it as the default compiler?’

                                                          ‘Oh, Android isn’t Linux, I mean Linux Linux’

                                                          Or:

                                                          ‘MS Office runs on Linux, I use it every day’

                                                          ‘But only on Android, that’s not Linux’

                                                          When what the speaker means is a Linux distro that has glibc, Linux, and (typically) an X server. Not just something that has the Linux kernel. Android really exacerbates this by having a completely different libc, system service management system, and display server from any other kind of Linux system, yet still being Linux. Hopefully they’ll move to Fuchsia soon and eliminate that confusion.

                                                          Alpine is another fun corner case. In my experience, it’s easier to port software between FreeBSD and a GNU/Linux distro than it is to port between Alpine and a GNU/Linux distro. Most code doesn’t care about the kernel at all and FreeBSD libc and glibc have both adopted a lot of each other’s extensions (and there are shim libraries such as libbsd, libkqueue, and libepoll for papering over the differences, just adding some of them to be build is often sufficient). Runs on Linux or runs on GNU/Linux doesn’t imply that it will work on Alpine without porting (unless it’s something that doesn’t depend on libc at all, such as Go binaries).

                                                          That said, GNU/Linux is itself somewhat confusing. For example, Ubuntu replaced bash with dash as /bin/sh, so their default shell is no longer GNU, even though all of their core utilities are (Debian later adopted this). I’ve seen other systems where the core utilities come from busybox, but libc is still glibc to ease porting. If you have coreutils, glibc, but not bash, are you GNU/Linux? What if you have coreutils, bash, and musl? Or busybox and glibc, no coreutils or bash?

                                                          The distro names are the only unambiguous platform identifiers, but no one wants to say ‘runs on Debian or RedHat, you might be able to make it work on your distro if you have adequate shims and your distro ships compatible versions of the libraries’, they want to say ‘runs on Linux’.

                                                          1. 1

                                                            The idea that distros “already picked” their name is undermined by the Debian example.

                                                            Even if it is technically correct to refer to distributions as “GNU/Linux”, it’s also divisive.

                                                            You give an example of a division caused by one party’s refusal to agree to say GNU/Linux (we can assume). So that would be the divisive thing, no?

                                                          1. 4

                                                            The “Tower of Weakenings” concept is really cool!

                                                            Basically, the strict provenance model is expected to be stricter than any final formal model Rust or C would actually use, so while those more finicky models get worked out precisely, we can point most devs to Strict Provenance and say “comply with this” and we/they can expect their code to just work under any future model, because that model will almost certainly be weaker / allow strictly more code than Strict Provenance!
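
                                                        For example, pointer tagging in the strict provenance style looks like this sketch (using the then-nightly-only APIs behind the strict_provenance feature):

                                                          #![feature(strict_provenance)] // nightly-only at the time of writing

                                                          fn tag<T>(p: *mut T, bits: usize) -> *mut T {
                                                              // Transform only the address; the provenance rides along untouched.
                                                              p.map_addr(|a| a | bits)
                                                          }

                                                          fn untag<T>(p: *mut T, mask: usize) -> *mut T {
                                                              p.map_addr(|a| a & !mask)
                                                          }

                                                          fn main() {
                                                              let mut x: u64 = 7;
                                                              let p: *mut u64 = &mut x;
                                                              // u64 is 8-byte aligned, so the low three address bits are free.
                                                              let tagged = tag(p, 0b1);
                                                              let cleaned = untag(tagged, 0b111);
                                                              // `cleaned` still carries x's provenance, so this deref is sound,
                                                              // and no integer-to-pointer cast was ever needed.
                                                              unsafe { assert_eq!(*cleaned, 7) };
                                                          }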

                                                            1. 3

                                                          The tl;dr seems to be that Go generics treat all pointer-typed type parameters as being the same, so it fails to monomorphise, resulting in needlessly slow code.

                                                              1. 10

                                                                Slightly more TL;DR: it monomorphizes by “shape” rather than type, so it has to pass the vtable as a hidden parameter to the monomorphized functions, and the vtable parameter makes inlining hard for the Go compiler and results in extra pointer dereferences.

                                                                It does have some good news though: using a generic byteseq is just as fast as using raw string and []byte!

                                                                1. 10

                                                                  fails to monomorphise, resulting in needlessly slow code.

                                                                  But less code. It’s a tradeoff, not a mistake.

                                                                  1. 6

                                                                    Indeed, and one that the Go team worked very hard to calibrate to the right level for their context. Not every language should make the same choice Rust and C++ make of fully monomorphizing generics (in both cases with optional mechanisms [virtual in C++ and trait objects in Rust] to escape).
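
                                                                In Rust terms, the two ends of that spectrum look like this (illustrative sketch):

                                                                  use std::fmt::Display;

                                                                  // Monomorphized: a separate copy per concrete T (fast calls, more code).
                                                                  fn show_mono<T: Display>(x: T) {
                                                                      println!("{x}");
                                                                  }

                                                                  // Vtable-based: one copy, dynamic dispatch through a fat pointer (less
                                                                  // code, indirect calls), roughly the territory Go's "shape" approach
                                                                  // lands in for pointer types.
                                                                  fn show_dyn(x: &dyn Display) {
                                                                      println!("{x}");
                                                                  }

                                                                  fn main() {
                                                                      show_mono(1);    // instantiates show_mono::<i32>
                                                                      show_mono("hi"); // instantiates show_mono::<&str>
                                                                      show_dyn(&1);    // one instantiation, vtable lookup at runtime
                                                                  }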

                                                                    1. 6

                                                                      +1

                                                                      Canonical post on the topic: https://research.swtch.com/generic

                                                                      1. 0

                                                                    There are many ways to reduce the amount of generated code, but there are very few cases where people choose absurdly slow code over code size. Honestly, this still just feels like the ongoing saga of the Go team hating generic programming and passive-aggressively making design choices for the express purpose of penalizing such code.

                                                                        1. 5

                                                                      Honestly, this still just feels like the ongoing saga of the Go team hating generic programming and passive-aggressively making design choices for the express purpose of penalizing such code.

                                                                          What if you’re wrong about their motives?

                                                                          1. 1

                                                                            I said “feels like”

                                                                        What I do know is that the core Go team has spent years arguing against generics, and now that they’ve finally added support, the implementation’s performance characteristics seem significantly worse than pretty much every other implementation of generics outside of Java.

                                                                            1. 4

                                                                              You already admitted that you have no evidence to claim that ‘the core Go team has spent years arguing against generics’. You have no credibility to argue about Go.

                                                                    1. 8

                                                                      It’s really nice when languages announce better error messages! The more that languages approach or improve on the quality of their error messages, the better off everyone is: it helps train developers to really read the error messages, because they’ve learned to expect them to be useful!

                                                                      1. 3

                                                                        On the final point of projects advertising compatibility: if a README avoids saying “we support Linux” followed by installation instructions using apt-get, it’s already better than the norm.

                                                                        1. 1

                                                                          alias cls="clear && ls" is the first thing I set up in any new environment.