1. 23
    1. 6

      This is a topic I’ve become interested in investigating lately, after Austral made me look at linear types again and ask “why would you want to use these anyway?” I still haven’t really found a use case for them, and was hoping this would provide some, but most of the ones listed involve async, which I am slowly becoming convinced fits best into a language either built around it or at least one with automatic memory management.

      The one non-async-related use case they list is basically “more powerful destructors”. Which… now that I think about it, is a useful thought. Currently in Rust, destructors are used for all sorts of nice things via guard objects. They unlock mutexes, they free hardware resources like OpenGL textures, and I’ve used them to push/pop stacks of contextual data so they never get out of sync with the things using them. But not being able to call them with arguments is a limitation, especially for APIs like Vulkan or explicit memory allocators that need a context object passed to the destructor function. Right now the easiest workaround is basically to stuff such context objects into globals, but it would be nice if an object could be marked “must be destroyed explicitly” so that you can pass arguments to the destructor function. A compile-time-checked defer statement, essentially. There’s no way for the compiler to do this for you, as it does with Drop calls, because it doesn’t know what arguments you want to pass.

      …which is exactly what this kind of linear types does for you. Hmm. Hmmmmmm!
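
      As a rough illustration of the gap: the closest approximation in today’s Rust is a guard whose Drop impl panics if the explicit, argument-taking destructor was never called. A linear type would turn that runtime panic into a compile error. This is only a sketch with invented names (Texture, GpuContext), not a real API:

```rust
// Stand-in for a context object (e.g. a Vulkan device or an allocator)
// that the "destructor" needs as an argument.
struct GpuContext {
    freed: Vec<u32>,
}

struct Texture {
    id: u32,
    destroyed: bool,
}

impl Texture {
    fn new(id: u32) -> Self {
        Texture { id, destroyed: false }
    }

    // The "destructor with arguments". With real linear types the
    // compiler would force this call; here we only get a runtime panic
    // if it is forgotten.
    fn destroy(mut self, ctx: &mut GpuContext) {
        ctx.freed.push(self.id);
        self.destroyed = true;
    }
}

impl Drop for Texture {
    fn drop(&mut self) {
        if !self.destroyed {
            panic!("Texture {} leaked: destroy(ctx) was never called", self.id);
        }
    }
}

fn main() {
    let mut ctx = GpuContext { freed: Vec::new() };
    let tex = Texture::new(7);
    tex.destroy(&mut ctx); // commenting this out panics at scope end
    assert_eq!(ctx.freed, vec![7]);
}
```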

      1. 8

        For my money the biggest issue linear types solve is fallible destructors, and there are lots of those. Although whether you can recover from a failing destructor tends to be a complicated question, knowing that a destructor can fail, and that you will be told, can be absolutely critical (see the fsync mess from 2019, for instance).

        1. 3

          The idea of chaining together fallible destructors is also interesting: you can have a teardown function that you are required to call again in case of failure (just return the original object back in the error), or one that runs some alternative destruction path (enforcing a safety mechanism).

          I think this would be handy in many embedded contexts, where you need to clean up registers in various orders.
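
          A sketch of what that retry-on-failure shape could look like in today’s Rust, with an invented Flusher type; nothing here enforces the retry at compile time, which is the part linear types would add:

```rust
struct Flusher {
    attempts_left: u32,
}

impl Flusher {
    // A fallible "destructor": consumes self on success, but hands the
    // value back alongside the error on failure, so the caller still
    // owns it and must retry (or run some alternative teardown).
    fn close(mut self) -> Result<(), (Flusher, &'static str)> {
        if self.attempts_left == 0 {
            Ok(())
        } else {
            self.attempts_left -= 1;
            Err((self, "flush failed, try again"))
        }
    }
}

fn main() {
    let mut f = Flusher { attempts_left: 2 };
    let mut calls = 0;
    loop {
        calls += 1;
        match f.close() {
            Ok(()) => break,
            // we get the value back and are obliged to deal with it
            Err((again, _why)) => f = again,
        }
    }
    assert_eq!(calls, 3); // two failures, then success
}
```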

      2. 2

        One compelling use-case is Zig-style Unmanaged collections, where the allocator is passed in to the specific methods which require it (including drop). If you do that, then suddenly a whole bunch of code needs destructors with arguments.
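
        A minimal sketch of that shape, with invented names standing in for Zig’s allocator and unmanaged-collection types; note that nothing in current Rust forces deinit to be called at all, or to be called with the right allocator:

```rust
// Stand-in for an allocator handle; `live` tracks outstanding allocations.
struct BumpAllocator {
    live: usize,
}

impl BumpAllocator {
    fn alloc(&mut self, n: usize) -> Vec<u8> {
        self.live += 1;
        vec![0; n]
    }
    fn free(&mut self, buf: Vec<u8>) {
        self.live -= 1;
        drop(buf);
    }
}

// "Unmanaged": the buffer does not store its allocator.
struct BufferUnmanaged {
    data: Vec<u8>,
}

impl BufferUnmanaged {
    fn init(alloc: &mut BumpAllocator, n: usize) -> Self {
        BufferUnmanaged { data: alloc.alloc(n) }
    }

    // A destructor with an argument: the caller must supply the
    // allocator again at teardown time.
    fn deinit(self, alloc: &mut BumpAllocator) {
        alloc.free(self.data);
    }
}

fn main() {
    let mut alloc = BumpAllocator { live: 0 };
    let buf = BufferUnmanaged::init(&mut alloc, 16);
    buf.deinit(&mut alloc); // nothing enforces this call, or this allocator
    assert_eq!(alloc.live, 0);
}
```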

        1. 1

          Rust doesn’t have any ability to ensure that the same allocator instance is passed for every use. I presume this will be a hard tradeoff between safety and the cost of requiring each allocator to defend itself against receiving some other allocator’s address.

      3. 1

        not being able to call them with arguments is a limitation

        The workaround I know of is to have a “teardown” or “close” method that does what the destructor does but takes parameters, and leaves the object in a state where the actual destructor will be a no-op.

        You can’t create a compile-time guarantee that users of the type call the teardown method, but I haven’t found that to be a problem; if it isn’t called, the destructor just does whatever it would have done with default arguments. Or else it can panic.
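
        A sketch of that workaround, with an invented Connection type; the atomic counter is only there to make the fallback path observable:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Counts how often the destructor had to fall back to default behavior.
static FALLBACK_DROPS: AtomicUsize = AtomicUsize::new(0);

struct Connection {
    open: bool,
}

impl Connection {
    fn new() -> Self {
        Connection { open: true }
    }

    // The teardown method: takes the parameters Drop can't, then leaves
    // the value in a state where the real destructor is a no-op.
    fn close(mut self, _linger_ms: u64) {
        // ... flush, send goodbye using _linger_ms ...
        self.open = false;
        // Drop runs here, sees open == false, and does nothing
    }
}

impl Drop for Connection {
    fn drop(&mut self) {
        if self.open {
            // Teardown was forgotten: proceed with "default arguments"
            // (or panic, while panicking in drop is still allowed).
            FALLBACK_DROPS.fetch_add(1, Ordering::Relaxed);
            self.open = false;
        }
    }
}

fn main() {
    Connection::new().close(500); // explicit teardown; Drop is a no-op
    drop(Connection::new());      // teardown forgotten; fallback runs
    assert_eq!(FALLBACK_DROPS.load(Ordering::Relaxed), 1);
}
```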

        1. 1

          Or else it can panic.

          Maybe not for long. There’s a proposal to completely remove the ability to panic in drop.

    2. 5

      This has been a hobbyhorse of mine for a very long time. For those looking for motivating examples, you might start with the “motivation” section of the RFC I wrote (https://github.com/aidancully/rfcs/blob/linear-trait/text/0000-linear-type.md#motivation). My own motivation centered around a large system I built in C++, in which memory was only allocated on start-up using a bump allocator, and ownership of the allocated structures then moved around at run-time. In this system, linear types would have been used to force ownership to be moved, rather than freed.

      The RFC is showing its age, but the discussion (https://github.com/rust-lang/rfcs/issues/814) still gets activity periodically. These days, I’d also tentatively suggest that linear types might help with adapting io_uring to Rust (since ownership of the buffers io_uring acts against is a sore spot in io_uring designs).

      1. 2

        I ran into exactly this problem in Rust, so the preallocated-structures-that-shouldn’t-be-freed argument resonates with me a fair amount. Roughly I had:

        fn process(buffers: [Buffer; 8]) -> [Buffer; 8]

        Where the Buffers are all preallocated by a system, but need to be passed along a graph of processors. The type system helped here: the return value needs to be initialized, I made buffer creation impossible for users, and because of the array size, users must return the same number of buffers they were given. Unfortunately this breaks down if the buffers input is dynamically sized; imagine this roughly (without the allocations):

        fn process(buffers: Vec<Buffer>) -> Vec<Buffer>

        Linear types would be a great help here.
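
        For concreteness, the fixed-size version described above might look roughly like this (names invented); the [Buffer; 8] signature plus a private constructor is what does the enforcing, and it’s exactly what’s lost with Vec<Buffer>:

```rust
pub struct Buffer {
    data: [u8; 64],
}
// Deliberately no public constructor: only the owning system makes Buffers.

fn process(mut buffers: [Buffer; 8]) -> [Buffer; 8] {
    for b in buffers.iter_mut() {
        b.data[0] ^= 1; // stand-in for real work
    }
    buffers // the fixed-size array type forces all 8 buffers back out
}

fn main() {
    // The "system" side, where construction is allowed.
    let bufs: [Buffer; 8] = std::array::from_fn(|_| Buffer { data: [0u8; 64] });
    let bufs = process(bufs);
    assert!(bufs.iter().all(|b| b.data[0] == 1));
}
```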

    3. 1

      There is an interesting balance to be found in type system complexity. On one hand, the idea of proper linear types in Rust to enforce more invariants at compile time sounds awesome. Enforcing invariants in the type system leads to more correct code and fewer tests that have to be written. As an advanced user of Rust, I would love a feature like this.

      On the other hand, ?Drop types will make Rust more complicated and difficult to learn. Someone new to Rust would not like a feature like this. And unfortunately I can already imagine the Rust-detractor comments on the matter if this were added to the language. The mindset that Rust is complicated and hard to learn actually makes it harder for people to learn Rust, because they can assume complexity where there might not be any.

      This is such a fundamental problem in modern language design, and I think it’s unfortunate. We should expect tooling for professionals to allow for advanced functionality. It seems only in programming languages is there this push for the opposite, and it makes me sad.

      And I get it. I have a friend who loves Go, and his reasoning is that he doesn’t have to wade through crazy abstractions that some coworker wrote that didn’t quite work out for what it was intended to do. And I’ve had that frustration myself. Advanced tools in many professions are used improperly all the time, and can even cause real bodily harm, but it shouldn’t stop us from using them.

      Back to the article, I think the proposal for the ?Drop trait looks good, especially if it can be done in a way that doesn’t “infect” other code (the way lifetime annotations can). Unfortunately I don’t feel confident this proposal will actually go anywhere, as the use-case is too niche and the complexity too high. So few languages have affine/linear types at all, and expecting users of Rust to understand the subtle differences between them is going to be a very hard thing to sell.

      1. 4

        The mindset that Rust is complicated and hard to learn actually makes it harder for people to learn Rust because they can assume complexity where there might not be any.

        I don’t know, I think the popular conception of how difficult it is to learn Rust is pretty well calibrated. Attempting to learn it by searching Stack Overflow for your error will fail 100% of the time, whereas that approach succeeds with nearly every other non-functional language. I didn’t understand Rust well enough to be productive until I went through every one of the 90ish rustlings course exercises. Even after that, I didn’t really feel I understood it until I spent a few weeks seeing how far I could push the type system into encoding program constraints at compile time. Actually, the sheer quantity of people who have learned Rust is incredibly impressive. It’s the closest thing our profession has had to a mass-consciousness-raising moment in recent history.

        On the other hand, I learned Rust just before the release of ChatGPT, which understands Rust fairly well. Maybe it’s become a lot easier to learn recently.

        Regarding the topic of the post itself, one issue I’ve run into when programming in this sort of pattern is that you often end up returning tuples of a calculated value along with the moved item, then destructuring them on the caller side. It looks a bit messy on both ends.
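
        That pattern looks roughly like this (invented names): every computation returns a tuple of its result and the moved value, and every caller destructures it:

```rust
struct Token(u32);

// Computes a value *and* hands the moved token back to the caller.
fn checksum(t: Token) -> (u32, Token) {
    (t.0.wrapping_mul(31), t)
}

fn main() {
    let t = Token(3);
    let (sum, t) = checksum(t); // destructure on the caller side
    let (sum2, _t) = checksum(t);
    assert_eq!(sum, 93);
    assert_eq!(sum2, 93);
}
```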

    4. 1

      I skimmed through this looking for the killer use cases for linear types, and at the end they were revealed:

      • fixing weird edge cases with destructors in async code
      • blocking the (unknown to me) Rust feature that allows you to disable a destructor call

      In both cases, adding linear types seems to be spackling over other language problems; maybe there are better, more focused fixes for those? In particular, destructor calls being optional seems like a terrible idea to me, breaking all sorts of RAII contracts.

      (Disclaimer: not currently a Rust user; I used it some in the past but bounced off)

      1. 2

        what about parallel structured concurrency?

        1. 1

          I’m in favor of it! But how does it require linear types? The article was not compelling; the closest I saw was that linear types worked around some Rust-specific problems.

          1. 2

            Not sure if I understood it correctly, but I think this would let you use references from the local scope inside a future that gets spawned by e.g. tokio::spawn. I’ll call this the outer task.

            The current problem is that you could spawn other tasks inside that outer task which also hold references to the local scope; when the outer task gets dropped, the tasks spawned inside it can continue to run and thus hold invalid references.

            With must move types you could enforce that you can only spawn other must move futures bound by the same lifetime, so that all tasks spawned in the outer task are forced to complete before the outer task drops.

            This could be done by a variant of tokio::spawn that returns a must move handle that must be awaited. You would also pass a closure to that variant, with its first argument being the scope from which you would be able to spawn other lifetime-bound must move futures.

            There’s also this blogpost which gives a more in depth explanation about the current problems with scoped tasks: https://tmandry.gitlab.io/blog/posts/2023-03-01-scoped-tasks/

      2. 1

        Destructor calls are already optional, and already broke RAII contracts.

        1. 1

          That’s what I said; I think you misread my comment.