1. 49
  1. 4

    Note that the kernel still requires unstable features, even if it is compiled with a stable rustc release, thus we cannot guarantee that future rustc versions will work without changes to the kernel tree.

    Using unstable language features in a kernel seems weird to me. Is this normal?

    1. 8

      Yes. Compiling Linux with Clang was not possible for a long time because the kernel uses so many niche GCC extensions. Kernels often do this to make strange kernel-space things work, or to make them fast.
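
      For a sense of what those extensions look like, here is a small sketch (not actual kernel source, though the macro names mirror the kernel’s): statement expressions and typeof let a macro evaluate its arguments exactly once, and __builtin_expect is the extension behind the likely()/unlikely() branch hints.

      ```c
      /* Kernel-style use of GCC extensions: statement expressions, typeof,
       * and __builtin_expect. Illustrative sketch only, not kernel source. */
      #include <stdio.h>

      /* Evaluates each argument exactly once, which a plain standard-C
       * macro cannot guarantee. */
      #define min(x, y) ({            \
              typeof(x) _x = (x);     \
              typeof(y) _y = (y);     \
              _x < _y ? _x : _y; })

      /* Branch-prediction hints, as in the kernel's likely()/unlikely(). */
      #define likely(x)   __builtin_expect(!!(x), 1)
      #define unlikely(x) __builtin_expect(!!(x), 0)

      int main(void)
      {
              int a = 3, b = 7;

              if (likely(a != b))
                      printf("min(%d, %d) = %d\n", a, b, min(a, b));
              return 0;
      }
      ```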

      1. 3

        Non-standard != unstable.

        1. 2

          Rust calls compiler features that haven’t been frozen “unstable”, as in they may still change. It doesn’t mean they are buggy or dubious; it just means the exact details of their use haven’t been set in stone yet.

          1. 3

            Sure – but despite being non-standard, the GCC extensions used in the Linux kernel are stable, well-established features; they’re not about to disappear or change incompatibly in the next release. So I think @marin’s question is a legitimate concern, and it’s not analogous to the situation with existing usage of GCC extensions.

            1. 1

              The situations are absolutely analogous. Rust is still evolving as a language. When the GCC extensions used in Linux were first implemented, often for Linux, they were unstable. Should they have never been used because they were not stable at the time? Or should they have been used, proven, refined, and stabilized?

              1. 3

                The situations are absolutely analogous

                I really don’t see how this is the case. GCC implements two different languages: “standard C” (when passed the -pedantic flag) and “gcc C” (the default behavior); see the sketch at the end of this comment. Both languages are stable, meaning that they are available in stable compiler releases and come with quality and backwards-compatibility guarantees.

                rustc implements only one language, “reference Rust”. The mechanism of unstable features gives you neither quality nor stability guarantees, either on paper or in practice. There are some features which are de-facto shippable, but many are not, and there isn’t a fixed classification of unstable features into “de-facto stable” and “volatile”.

                I guess it might be the case that, historically, Linux adopted new gcc features before they were stabilized and documented, but that’s a very different argument from saying that today’s Linux uses gcc C.
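
                A minimal sketch of that “two languages” distinction, assuming a reasonably recent gcc (the exact diagnostic text varies by version):

                ```c
                /* two_langs.c - the same file is valid "gcc C" but not "standard C":
                 *
                 *   gcc -c two_langs.c                            # default GNU dialect: builds
                 *   gcc -std=c99 -pedantic-errors -c two_langs.c  # strict ISO mode: rejected
                 */
                int classify(int x)
                {
                        switch (x) {
                        case 0 ... 9:   /* case ranges are a documented GNU C extension */
                                return 1;
                        default:
                                return 0;
                        }
                }
                ```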

                1. 1

                  Both languages are stable, meaning that they are available in stable compiler releases and come with quality and backwards-compatibility guarantees.

                  Intentionally or not, you’re condemning Rust for being a newer language than C, and rustc for being a newer compiler than GCC. GCC extensions are stable now, because GCC is older.

                  but that’s a very different argument from saying that today’s Linux uses gcc C

                  The situations aren’t identical; they’re analogous. Linux in Rust depends on features from a specific compiler. The rest of Linux depends on features from a specific compiler. Windows NT depends on features from a specific compiler. Kernels in general rely on compiler-specific features.

                  The situation will never be exactly the same because Rust will stabilize those features as standard, or as libraries available to any Rust compiler. Not as incompatible vendor extensions. Unstable features are the closest analogous mechanism to vendor extensions that Rust has, as far as I know.

                  1. 2

                    GCC extensions are stable now, because GCC is older.

                    I’m generally first in line to criticise GCC, but it has exceptionally good backwards-compatibility support for these extensions. Once they’re in a release, they’re supported effectively forever. GNU-flavoured C code written for GCC 2.x will happily compile with the latest release. C++ code is a bit more of an issue because they’ve fixed a lot of bugs that old code depended on, but there’s a difference between depending on a buggy implementation of the language standard and depending on documented extensions. The Linux kernel depends (mostly) on documented language extensions.
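
                    For example, long-standing GNU C extensions of roughly that vintage, such as zero-length arrays and computed goto, still build unchanged with current releases (an illustrative sketch, not kernel code):

                    ```c
                    /* Long-lived GNU C extensions: zero-length arrays and computed goto
                     * (labels as values). Illustrative sketch only. */
                    #include <stdlib.h>

                    struct record {
                            int len;
                            char data[0];   /* zero-length array, the pre-C99 flexible member */
                    };

                    struct record *record_alloc(int len)
                    {
                            struct record *r = malloc(sizeof(*r) + len);

                            if (r)
                                    r->len = len;
                            return r;
                    }

                    int dispatch(int op)
                    {
                            /* computed goto: take a label's address with && and jump to it */
                            static void *handlers[] = { &&do_read, &&do_write };

                            if (op < 0 || op > 1)
                                    return -1;
                            goto *handlers[op];
                    do_read:
                            return 1;
                    do_write:
                            return 2;
                    }
                    ```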

                    The analogue in the GCC world would be depending on features that are in C2x or C++2b. Depending on GCC extensions that are new in GCC 12 (not yet released) is also close, but not quite the same, because I believe rustc unstable features are expected to be part of the language standard eventually, rather than non-standard extensions.

                    Note that language or compiler age has nothing to do with this. A lot of C++ codebases depended on C++0x, C++1y, C++1z, and C++2a features before they were standardised as C++11, C++14, C++17, and C++20, respectively. This is part of how the standard evolves: people try using the prototype implementations and provide feedback. Generally, the Linux kernel is far more conservative about this. It still compiles as C89 and it maintains support for quite old GCC versions.

                    1. 2

                      Unstable features are the closest analogous mechanism to vendor extensions that Rust has, as far as I know.

                      I think that’s the core of the disagreement, thanks for formulating it!

                      As far as I understand, rustc unstable features and vendor extensions are completely different mechanisms, which solve different problems and have different tradeoffs. So, using gcc vendor extensions in the kernel is not analogous to using nightly rust features.

                      However, if one sees unstable features as a form of vendor extension, then yes, there is an analogy.

                  2. 2

                    When the GCC extensions used in Linux were first implemented, often for Linux, they were unstable.

                    Were they? I’d be curious to see concrete examples supporting that – I’m not aware of GCC introducing new extensions on a provisional/experimental basis the way Rust does; AFAIK if it’s documented and included in a release, it’ll continue to be supported.

                    And the Linux developers are often quite slow to adopt new toolchain features (so as to retain compatibility with older ones). Fancy new CPU instructions, for example, are usually written out as raw machine code with .byte directives for quite a while instead of using assembly mnemonics (e.g. this commit in 2018 finally adopted the VMX mnemonics, fully thirteen years after the first binutils release that supported them).
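
                    Roughly what that pattern looks like, as a sketch (the raw encoding below is VMCALL, 0f 01 c1, quoted from memory of the Intel SDM; the kernel’s actual macros and the commit in question differ in the details):

                    ```c
                    /* Using an instruction before the minimum supported assembler knows
                     * its mnemonic: emit the raw bytes, then switch to the mnemonic once
                     * old binutils no longer needs to be supported. The encoding is
                     * VMCALL (0f 01 c1) from memory of the Intel SDM; illustrative only,
                     * and it only does something useful inside a VMX guest. */
                    static inline void vmcall_raw(void)
                    {
                            asm volatile(".byte 0x0f, 0x01, 0xc1");
                    }

                    static inline void vmcall_mnemonic(void)
                    {
                            asm volatile("vmcall");
                    }
                    ```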

                    1. 1

                      AFAIK if it’s documented and included in a release, it’ll continue to be supported.

                      So it is with Rust. That’s why these features are marked unstable: they aren’t fully supported until they’re ready. But you can use them before they’re marked stable so that Rust doesn’t launch features into the wild without any user testing.

                      When Linux and GCC were new—like Rust in Linux is new now—did Linux developers fully restrict themselves to GCC features set in stone? No compatibility ifdefs, no testing of key GCC extensions, no iteration, refinement, or collaboration between Linux and GCC?

                      That’s all that’s happening here with Rust. The difference is that these unstable features will be stabilized as part of the Rust language, or as libraries, instead of as incompatible vendor extensions for a single compiler.

                      1. 3

                        When Linux and GCC were new—like Rust in Linux is new now—did Linux developers fully restrict themselves to GCC features set in stone? No compatibility ifdefs, no testing of key GCC extensions, no iteration, refinement, or collaboration between Linux and GCC?

                        But … Linux is not new. Are you saying Linux kernel devs used unstable gcc features after it became an established project? Any documentation on this?