1. 29

  2. 5

    This article was my first introduction to linear types. With the caveat that I’ve never written a line of Austral and have no idea how ergonomic it is compared to Rust’s borrow checker… this looks like pretty neat stuff!

    I am super happy to see that Rust has inspired others to think outside the GC box. Overall, modern GC does a great job of making programs easier to write (often the slowest part of software development) by moving complexity out of the user’s program and into the language implementation (which only has to be written once). But it fills its role so well that there haven’t been a lot of competitors beyond malloc/free and new/delete. Happy to see innovation branching beyond just making GC faster.

    1. 4

      The essence of linear types is safe and explicit resource management, using semantics that are compatible with, and feel a lot like, pure functional programming.

      One application of this is safe memory management without a GC (since memory is a resource). I got my first taste of this with C++ smart pointers. But smart pointers are optional in C++. What Rust adds is a more comprehensive use of linear typing (actually affine typing) that is used to guarantee memory safety at compile time.

      Although safe GC-less memory management is a nice feature, for me it’s not the main attraction. The killer feature is capability based security. For this you probably need real linear types, not just C++/Rust affine types. In effect, every function is sandboxed. A function is incapable of reading or writing mutable state, unless references to that state are passed as arguments. There is no “shared mutable state”. It’s a lot easier to reason about what a function call will do if you can see all of the inputs and outputs of the function call, right there in the source code. As the author says, this goes a long way towards preventing supply chain attacks and making it safe to run untrusted code that you downloaded from the internet.
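      A minimal Rust sketch of the "every input is visible in the signature" idea described above (the `Counter` type and `bump` function are made up for illustration; Rust only approximates this, since it still allows ambient access to globals and I/O):

      ```rust
      // Hypothetical sketch: a function can only touch mutable state
      // that is passed in explicitly through its parameters.
      struct Counter {
          value: u64,
      }

      // Every input and output of this call is visible in the signature:
      // it can mutate `c` and nothing else.
      fn bump(c: &mut Counter, by: u64) -> u64 {
          c.value += by;
          c.value
      }

      fn main() {
          let mut c = Counter { value: 40 };
          let result = bump(&mut c, 2);
          println!("{}", result); // 42
      }
      ```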

      Although this idea has been explored in research languages since the 1980’s, I think that C++ and especially Rust may have finally prepared the way for making this feature available in production programming languages.

      1. 1

        Do any programming languages purport to have a safe way to run untrusted code? Does Austral?

        I guess I’ve never heard of that being a design goal in any programming language before, and my instinct is that it’s an odd design goal for a programming language. Why wouldn’t you always run untrusted code as a separate process and use whatever isolation features the OS/hypervisor has?

        1. 2

          Lots of capability based languages say that they make it safer to run untrusted code because the untrusted code will only have the capabilities you explicitly pass into it. E.g. they can’t access the network unless you pass a socket or some other network capability; they can’t access files without a filesystem, path or file capability; etc.
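          A rough Rust sketch of the capability-passing pattern described above (the `KvStore` type and `untrusted_lookup` function are invented stand-ins for a real resource and a third-party dependency):

          ```rust
          use std::collections::HashMap;

          // A stand-in for a resource such as a filesystem or socket.
          struct KvStore {
              map: HashMap<String, String>,
          }

          impl KvStore {
              fn get(&self, k: &str) -> Option<&String> {
                  self.map.get(k)
              }
          }

          // "Untrusted" library code: without being handed a &KvStore,
          // it has no way to reach the store at all.
          fn untrusted_lookup(store: &KvStore, key: &str) -> String {
              store.get(key).cloned().unwrap_or_else(|| "missing".to_string())
          }

          fn main() {
              let mut map = HashMap::new();
              map.insert("greeting".to_string(), "hello".to_string());
              let store = KvStore { map };
              println!("{}", untrusted_lookup(&store, "greeting")); // hello
          }
          ```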

          1. 2

            Austral claims to. The reason given is to protect against supply chain attacks by making them more difficult to execute. If each of the libraries in my dependency tree has to be isolated in a different VM, then cross-library function calls are going to be very expensive. An alternative is to provide security properties in the language and enforce them using static analysis.

            Austral also has an unsafe C foreign function interface, which undermines their claim to care about this. However, the Austral linear type system does support the basics of capability based security. The safe subset of Austral is more secure against supply chain attacks than Rust or Python or Javascript.

            The Verona language (going by a previous Lobsters discussion) can call into foreign C/C++ code, but sandboxes that code so that it cannot violate Verona’s security guarantees.

      2. 2

        Language looks so clean and small. The documentation is also top notch. I love the Ada feel.

        1. 2

          I’m guessing that linear types lead to the same requirement as in Rust, that every (dynamically allocated) value has a single owner?

          This is the biggest thing I’ve found that gets in my way when I try to use Rust. When I design my data structures I just don’t think that way, so it feels like the language is trying to get in my way and force me to do things the way it wants. (I’m not saying there’s anything worse about it; it just goes against my grain. And I’m aware you can use things like Rc to work around it, but then you lose a lot of Rust’s benefits.)
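          A small Rust example of both halves of this: the single-owner rule that causes the friction, and `Rc` as the escape hatch (a sketch, not a recommendation either way):

          ```rust
          use std::rc::Rc;

          fn main() {
              // Single ownership: after the move, `a` can no longer be used.
              let a = String::from("owned");
              let b = a; // `a` is moved here; using `a` below would not compile.
              println!("{}", b);

              // Rc is the workaround: shared ownership via reference counting.
              let x = Rc::new(5);
              let y = Rc::clone(&x);
              println!("{}", Rc::strong_count(&x)); // 2 owners now
              println!("{}", *x + *y); // both handles read the same value
          }
          ```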

          1. 2

            Likewise I’ve asked Fernando if he’d be up for writing more about how to express common mutable data structures like trees and lists with linear types.

            I think the reason Rust and Austral and other similar systems feel horribly impractical is mostly because we just don’t get exposed to the way of programming data structures that fits these systems.

            1. 3

              I think that Entity/Component/System architecture and Data-Oriented Programming are compatible with Austral. I also note that the new Zig compiler leans heavily on data-oriented techniques in order to perform very fast compilation, so it’s a good technique to learn for other reasons. Blog posts about how to represent data structures in safe Rust should transfer over to Austral?

            2. 2

              Yes, that’s right, as far as I can tell. Pointers to allocated memory are linear, thus the single owner rule. No cyclic data structures (using pointers).
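              The "single owner, no cycles" shape can be sketched in Rust with `Box` (the `Node` type here is made up; the same structure should translate to a linearly typed language):

              ```rust
              // Each child is owned by exactly one parent, so these
              // pointers alone can never form a cycle.
              struct Node {
                  value: i32,
                  children: Vec<Box<Node>>,
              }

              fn sum(n: &Node) -> i32 {
                  n.value + n.children.iter().map(|c| sum(c)).sum::<i32>()
              }

              fn main() {
                  let tree = Node {
                      value: 1,
                      children: vec![
                          Box::new(Node { value: 2, children: vec![] }),
                          Box::new(Node { value: 3, children: vec![] }),
                      ],
                  };
                  println!("{}", sum(&tree)); // 6
              }
              ```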

              I see Austral as simpler and more austere than Rust, and thus placing even more restrictions on your coding style (if you are coming from a traditional language, your idioms don’t work and you need to learn new idioms). I haven’t tried Austral yet. My own perspective is that Austral could offer static guarantees that other languages (including Rust) can’t offer, thus reducing cognitive load (once you adjust to the programming style). It will be as big a jump as going from C-like imperative programming to Haskell-like pure functional programming, IMO.

              1. 1

                Funny you should mention FP — it’s another programming paradigm I keep bouncing off of. I seem to be allergic to systems that promise big benefits by taking away major capabilities, whether multiple ownership or mutable state.

                I’m sure that’s partly conservatism on my part, but it does seem that as compilers become ever more powerful (to say nothing of AI systems that can write code from natural language prompts!) we should be making languages more flexible instead of more restrictive.

              2. 2

                Using Rc does not mean you lose any of Rust’s benefits. That’s exactly the point of unsafe: you drastically limit the scope of unsafe scenarios, so you can test those in a very focused setting. To the outside world, it still adheres to the safe and more restricted semantics.

                This logic is a false dichotomy. There are more options than “everything statically enforced” and “nothing statically enforced”: you can have “everything but a small set of manually checked modules statically enforced.”

                1. 3

                  Rc is not a “zero cost abstraction”: it incurs reference-counting overhead that isn’t strictly necessary, in order to work around the requirements of the borrow checker.
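                  The overhead in question is the runtime count that every clone and drop must update, which a plain `Box` or `&` reference doesn’t pay. A small sketch:

                  ```rust
                  use std::rc::Rc;

                  fn main() {
                      // Each Rc clone/drop updates a runtime counter; that
                      // bookkeeping is the cost being discussed here.
                      let a = Rc::new(vec![1, 2, 3]);
                      println!("{}", Rc::strong_count(&a)); // 1
                      let b = Rc::clone(&a);
                      println!("{}", Rc::strong_count(&a)); // 2
                      drop(b);
                      println!("{}", Rc::strong_count(&a)); // 1
                  }
                  ```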

                  1. 1

                    Yep. That doesn’t seem like a bad thing.

                    1. 2

                      I was under the impression that “Rust’s benefits” were memory safety without sacrificing efficiency/performance through use of zero cost abstraction.

                      1. 1

                        This logic is a false dichotomy. There are more options than “everything statically enforced” and “nothing statically enforced”: you can have “everything but a small set of manually checked modules statically enforced.”

                        1. 2

                          I’m not talking about safety as a dichotomy but rather the runtime performance cost of that safety.

                          1. 1

                            The same statement applies to that dimension as well. The core of Rust has zero-cost abstractions. That doesn’t mean that a few standard library types, which solve legitimate problems by introducing runtime overhead, negate the core language semantics.

                  2. 1

                    Why are you changing the subject to “unsafe”? I was talking about Rc, which is not an unsafe construct (to my knowledge!), just an awkward one.

                    1. 1

                      If you search the Rc source code, you can see quite a few unsafe blocks (105 by my count). Rc supports cyclic references, for example, which isn’t possible in safe Rust in general; it requires unsafe to implement.

                      1. 1

                        I wasn’t saying Rc is unsafe, just awkward (compared to using ref-counted objects in, say, Swift or Nim or C++). It clearly goes against the grain of the language.

                        1. 1

                          “And I’m aware you can use things like Rc to work around it, but then you lose a lot of Rust’s benefits.”

                          That sounds like a comment about safety.

                2. 1

                  The borrow statement is pretty slick. Is that an innovation in Austral? I don’t think Rust has anything similar, right?

                  1. 3

                    Well, Rust block statements were kinda similar to this before non-lexical lifetimes.

                    Lifetimes strictly bound to lexical blocks were a bit easier to understand as a concept, but were cumbersome and unergonomic to write. There were many places where you had to introduce a block just because this model did not do what you meant it to do. NLL analysis seems to be a beast of static analysis (if I remember correctly, it works on control flow graphs), but it improved ergonomics quite a bit.
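                    The pre-NLL style mentioned above can still be written in today’s Rust by confining a borrow to an explicit block, which is roughly what Austral’s `borrow` statement makes mandatory (a sketch; the block here is no longer required under NLL):

                    ```rust
                    fn main() {
                        let mut v = vec![1, 2, 3];
                        // Confine the shared borrow to a lexical block.
                        {
                            let first = &v[0];
                            println!("{}", first);
                        } // the borrow ends here, at the closing brace
                        v.push(4); // so mutation is allowed again
                        println!("{}", v.len());
                    }
                    ```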

                    I appreciate that Austral has different trade-offs and sensibilities. Lexical lifetimes are simpler to reason about, even though a bit verbose and unergonomic.