1. 17

    I am one of the folks working on this project, happy to answer questions. Note: this is a research project and the language has not yet reached an MVP state; the compiler is not yet usable. The runtime is, and we are using it in some other projects.

    1. 4

      Could you speak to this goal?

      Can linear regions be used to remove the restrictions of per-object linearity without sacrificing memory management?

      I had previously wondered whether allocating objects with mutual pointers (e.g., a doubly linked list, a graph, etc.) could be done with arenas in a language like Rust, i.e., when the last pointer into the arena is dropped, the whole structure is deallocated at once. Is this goal a similar idea?

      1. 12

        That’s quite a broad question, so I’ll try:

        The region abstraction in Verona defines a memory management policy per region. Each region is reachable from a single isolated (iso) pointer that points to the sentinel object in the region. All live objects in the region are dominated by (transitively reachable from) the sentinel. You can transfer ownership of iso pointers but only by a move operation. If an iso pointer is on the stack, the region that it refers to is deallocated when the pointer goes out of scope. If an iso pointer is in an object then the region is deallocated after the region that refers to it is deallocated. If an iso is owned by a concurrent owner (cown), one of the roots of a tree of regions, then it is deallocated when the cown’s reference count drops to zero. You can asynchronously schedule work to run with access to any arbitrary set of cowns that you have pointers to and scheduling work on a cown implicitly increments its reference count (in the abstract machine, at least).
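
        Roughly, in Rust terms, if that helps (an illustrative sketch of the ownership rules above, my own modelling, not Verona code): Box stands in for a stack-held iso pointer, and Arc for a cown’s reference count.

        ```rust
        use std::sync::{Arc, Mutex};

        struct Sentinel {} // entry point into the region's object graph

        // `Box` behaves like an iso pointer held on the stack: transfer is
        // move-only, and the region is freed when the owner goes out of scope.
        fn stack_owned() {
            let region: Box<Sentinel> = Box::new(Sentinel {});
            let moved = region; // ownership transferred by move; `region` is dead
            drop(moved);        // region deallocated deterministically
        }

        // `Arc` plays the role of a cown's reference count: the region behind
        // it is freed when the count reaches zero, and scheduling work on the
        // cown would bump that count.
        fn cown_owned() {
            let cown = Arc::new(Mutex::new(Box::new(Sentinel {})));
            let scheduled = Arc::clone(&cown); // work scheduled: count += 1
            drop(cown);
            drop(scheduled); // count hits zero: region freed
        }

        fn main() {
            stack_owned();
            cown_owned();
        }
        ```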

        The first region type that we implemented was a tracing GC. This is a lot simpler than most modern GCs because the single-ownership property of the region guarantees no concurrent access: when you invoke the GC, you have exclusive ownership of the region, and nothing else can be reading or writing it. Because the region is not the whole of memory, this isn’t a stop-the-world thing, it’s just a block-the-caller thing - the rest of the system can continue executing happily. This allows arbitrary object graphs and will run finalisers on unreachable objects when the GC is invoked (explicitly) or when the region is deallocated. This kind of region is great for prototyping and lets you construct and collect arbitrary cyclic object graphs.
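
        A toy version of such a collector is small enough to sketch (illustrative Rust, not the Verona runtime; references are slot indices to keep it in safe code). The point is that exclusive ownership makes mark-and-sweep embarrassingly simple:

        ```rust
        // Objects live in slots owned by the region; references are indices.
        struct Obj { marked: bool, refs: Vec<usize> }
        struct Region { slots: Vec<Option<Obj>>, sentinel: usize }

        impl Region {
            // Invoked with exclusive ownership of the region: a block-the-caller
            // collection, not a stop-the-world one. No synchronisation needed.
            fn collect(&mut self) {
                // Mark everything transitively reachable from the sentinel.
                let mut stack = vec![self.sentinel];
                while let Some(i) = stack.pop() {
                    if let Some(o) = self.slots[i].as_mut() {
                        if !o.marked {
                            o.marked = true;
                            stack.extend(o.refs.iter().copied());
                        }
                    }
                }
                // Sweep: unmarked objects are dropped, running their finalisers.
                for slot in &mut self.slots {
                    match slot {
                        Some(o) if o.marked => o.marked = false,
                        _ => *slot = None,
                    }
                }
            }
        }
        ```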

        The second region type that we implemented is an arena allocator that requests memory a chunk at a time and allocates with a simple pointer increment. This kind of region does not support object deallocation at all until the region itself is deallocated, and finalisers are run when the region is destroyed. This gives you fast allocation and deallocation for a group of objects with the same lifetime (e.g. things associated with handling a short-lived HTTP request).
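
        The fast path is essentially an add and a compare. A simplified sketch (illustrative; offsets instead of raw pointers to stay in safe Rust, and `align` assumed to be a power of two):

        ```rust
        struct ArenaRegion { chunk: Vec<u8>, next: usize }

        impl ArenaRegion {
            fn new(size: usize) -> Self {
                ArenaRegion { chunk: vec![0; size], next: 0 }
            }

            // Allocation is an aligned pointer bump; a real implementation
            // would request another chunk on exhaustion instead of None.
            fn alloc(&mut self, bytes: usize, align: usize) -> Option<usize> {
                let start = (self.next + align - 1) & !(align - 1);
                if start + bytes > self.chunk.len() {
                    return None;
                }
                self.next = start + bytes;
                Some(start)
            }
        }
        // There is deliberately no `free`: dropping the ArenaRegion releases
        // every object at once, which is when finalisers would run.
        ```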

        We currently have an intern implementing a variety of reference-counting policies within a region. Again, the region abstraction means that refcount manipulation can be cheap because it doesn’t need atomic operations, and we can do refcount elision using all of the optimisations from the ’80s that became unsound when threads were invented. The simplest refcount policy is that objects in garbage cycles are leaked until the region is deallocated. If a data structure is potentially cyclic, but has a bounded lifetime, then this lets you optimise for the acyclic case. We can also use David F. Bacon’s refcounting + deferred cycle detection approach: collect a set of potential roots of garbage cycles (objects that are decref’d but not deallocated, and not incref’d while they’re in the possibly-garbage buffer), then scan from those roots to see whether all refcounts are accounted for by internal references, and deallocate the objects if they are. Or we can fall back to our tracing GC for cycle detection. All of these give different policies.
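
        Rust’s std::rc::Rc happens to illustrate the first two points: its counts are plain integer updates precisely because it can never cross threads (much as a region has a single owner), and under the simplest policy cycles just leak:

        ```rust
        use std::cell::RefCell;
        use std::rc::Rc;

        struct Node { next: RefCell<Option<Rc<Node>>> }

        fn main() {
            // Non-atomic increments: cheap because Rc cannot cross threads.
            let a = Rc::new(Node { next: RefCell::new(None) });
            let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });

            // Close the cycle a -> b -> a. The simplest policy leaks this
            // garbage until the region dies; Bacon-style deferred detection
            // would buffer decref'd-but-live objects and trace them later.
            *a.next.borrow_mut() = Some(Rc::clone(&b));
        } // a and b leak here: their counts never reach zero
        ```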

        This lets us build DAGs and even cyclic data structures within a region, only requiring that regions themselves are arranged in a tree structure. Part of the motivation for this is that we have made a principled decision in Verona not to provide an unsafe escape hatch in the language. There are a small number of things that are intrinsically unsafe (scheduling, memory management, low-level I/O) because they implement and enforce the type system and so can’t rely on its existence. Our goal is to allow everything else to be written in fully type- and concurrency-safe code.

        In contrast, Rust has a philosophy of asking developers to provide safe abstractions over unsafe code. In Rust, you can implement a doubly linked list in unsafe code and then expose a safe interface. For simple things, I think this works, but my big worry about Rust is compositional correctness. If every crate that you use implements a data structure in unsafe code, and every single one is 100% safe when used with other safe Rust code, are they still safe when used together? Or do they opt out of safe mode in ways that are fine in isolation but unsound in combination?
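
        For a concrete miniature of that Rust pattern: in the sketch below, the unsafe block is sound only because of a non-aliasing invariant that the function checks itself and that its callers never see.

        ```rust
        // A much-simplified cousin of the standard library's split_at_mut.
        fn split_first_mut(s: &mut [u8]) -> Option<(&mut u8, &mut [u8])> {
            if s.is_empty() {
                return None;
            }
            let ptr = s.as_mut_ptr();
            let len = s.len();
            // SAFETY: the two values address disjoint parts of `s`, so handing
            // out two &mut borrows cannot alias. This argument lives only in
            // this comment; the type system cannot express or check it.
            unsafe {
                Some((&mut *ptr, std::slice::from_raw_parts_mut(ptr.add(1), len - 1)))
            }
        }
        ```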

        The region model also provides a place in our abstract machine to fit untrusted C/C++ code into the system. Foreign code should not provide an unsafe escape hatch via the back door. There’s no point saying ‘use our safe programming language! Oh, of course, that 1,000,000-line C library that you want to use is completely unsafe, and a single bug in it can violate all of the invariants that our safe language depends on’. In Verona, the plan is for every library to be instantiated (potentially multiple times) in a region. We have a prototype of this that uses separate processes, and we can also support it with SFI or CHERI sandboxing (or with some non-process abstractions on a conventional MMU).

        1. 2

          my big worry about Rust is compositional correctness

          In principle, the abstractions are 100% safe and composable. The abstraction layer is supposed to uphold all of Rust’s safety invariants, which ensures it can’t make any safe code unsafe, even if the safe code is incorrect.

          Bugs can of course happen, and buggy unsafe code can break safety guarantees. But that’s more of a plain old bug than unexpected compositional behavior. The rules compose safely, if you manage to follow them.

          The rules themselves are conceptually simple to follow: no mutable aliasing, plus the basics of C/C++ memory safety. There are implementation gotchas, though. For Rust, common ones are: unsafe code not defending itself against permissible-but-evil safe code (e.g. unsafe code is not supposed to trust custom iterators to report their length correctly), panic (exception) safety, and incorrect Send/Sync declarations on the abstraction (see the Rudra project).
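
          The iterator gotcha is worth spelling out, because the “attacker” is entirely safe code. Something like this (illustrative):

          ```rust
          struct Liar(u32);

          impl Iterator for Liar {
              type Item = u32;
              fn next(&mut self) -> Option<u32> {
                  self.0 += 1;
                  Some(self.0) // never terminates...
              }
              // ...yet reports at most one element. Perfectly legal safe code:
              // size_hint is documented as advisory, not a contract.
              fn size_hint(&self) -> (usize, Option<usize>) {
                  (0, Some(1))
              }
          }

          // Unsafe collection code that reserves size_hint() slots and then
          // writes through raw pointers without re-checking bounds turns this
          // lie into a heap overflow. Sound unsafe code treats it as untrusted.
          ```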

          1. 7

            In principle, the abstractions are 100% safe and composable

            That’s a big ‘In principle’. For example, std::mem::transmute points out in the documentation that it is ‘incredibly unsafe’ (emphasis theirs). There are lots of ways in which you could use this and end up with something that is safe in the presence of safe Rust code, but not safe in the presence of other unsafe code. Things like Rc are similar: it violates the unique-ownership rules, but in such a way that the borrow checker does the right thing in safe Rust; the set of things that you can do in unsafe Rust, however, may violate the invariants that Rc expects.
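
            To make the transmute point concrete, one classic misuse is lifetime laundering (illustrative; do not do this):

            ```rust
            // Compiles, and may even appear to work while the owner is alive,
            // but it lets a reference outlive its data: later safe code holding
            // the &'static str reads freed memory through no fault of its own.
            fn launder(s: &str) -> &'static str {
                unsafe { std::mem::transmute::<&str, &'static str>(s) }
            }

            fn main() {
                let dangling: &'static str;
                {
                    let owned = String::from("short-lived");
                    dangling = launder(&owned);
                } // `owned` freed here; `dangling` now points at freed memory
                println!("{dangling}"); // use-after-free reached from safe code
            }
            ```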

            Each use of unsafe imposes some extra undocumented invariants that must hold for the use of unsafe to be correct. These are, by definition, rules that can’t be expressed in the Rust type system (if they could be, you wouldn’t need unsafe). You can enforce these rules only locally, in terms of the tools that the type system provides and in terms of your own API.

            One of the examples that came up in the Rust issue tracker was a proposed setjmp / longjmp crate and an async I/O crate. The latter depended for correctness on finalisers being run at the correct point, but an unsafe crate invoking longjmp would break this. Both are fine in isolation: the setjmp crate is fine if callers follow its rules and don’t depend on finalisers in frames between the setjmp and the longjmp, and the async I/O crate mostly expressed its invariants in terms of conventional Rust behaviour. The two together ended up as something that doesn’t work and can violate memory safety (at which point all bets are off for all of your code).
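
            The shape of that interaction, sketched with hypothetical names: the I/O crate’s safety argument hangs on a destructor that longjmp silently skips.

            ```rust
            // Hypothetical async I/O registration: the event loop holds a
            // pointer into `buf` until we deregister it in Drop.
            #[allow(dead_code)]
            struct IoRegistration<'a> {
                buf: &'a mut [u8],
            }

            impl Drop for IoRegistration<'_> {
                fn drop(&mut self) {
                    // Deregister `buf` from the event loop here. The crate's
                    // safety argument is: this always runs before `buf`'s
                    // storage is reused. A perfectly ordinary Rust assumption...
                }
            }

            // ...but longjmp pops stack frames without running Drop, so the
            // registration outlives the buffer and the event loop writes into
            // dead stack memory. Each crate is defensible alone; together, UB.
            ```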

            There are rules, and if you follow them then you don’t end up with bugs, but that’s also true of C/C++. That’s not what I want from an infrastructure language that’s used for writing network-facing code that needs to provide strong security guarantees and five or more nines of uptime.

            1. 1

              There are lots of ways in which you could use this and end up with something that is safe in the presence of safe Rust code, but not safe in the presence of other unsafe code.

              Rust is not a sandbox, so of course arbitrary buggy code can cause nasal demons anywhere. But when the abstraction around unsafe is not buggy, it’s not going to interact in sneaky, non-composable ways with other code.

              Each use of unsafe imposes some extra undocumented invariants

              unsafe fn is free to require whatever it wants. However, once unsafety is wrapped in a safe fn, it’s not allowed to impose any undocumented invariants on any code anywhere. Once you wrap something and say it’s a safe function, you play only by Rust’s rules, and they are enforced by Rust itself.

              If any two functions marked as “safe” could interact to cause memory unsafety, Rust considers this a bug in the functions’ definitions, not in their usage.

              This is a big difference from C/C++ reasoning. A C library can say “don’t call this function on Wednesday”, and if you call it on Wednesday and it causes a buffer overflow, that’s your fault. In Rust it’s either unsafe fn, or it never ever causes buffer overflow, even if you call it all week. It may panic!("It's Wednesday, you dummy!"), but that’s not memory unsafety.
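
              That contract difference fits in a few lines (a sketch; I’m borrowing the chrono crate’s Weekday type for the date):

              ```rust
              use chrono::Weekday;

              // A safe Rust API may refuse input loudly, but it may not
              // corrupt memory.
              fn do_thing(day: Weekday, data: &[u8]) -> u8 {
                  if day == Weekday::Wed {
                      panic!("It's Wednesday, you dummy!"); // defined, catchable
                  }
                  // No possible `day` or `data` turns this into a buffer
                  // overflow: the indexing is bounds-checked and panics rather
                  // than scribbling over memory.
                  data[0]
              }
              ```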

              BTW, setjmp and longjmp are unsafe fn and cause UB in Rust. Any use of them is invalid, and AFAIK you just can’t build any safe Rust abstraction around them.

        2. 4

          You can’t quite do that in Rust, but you can have nodes that reference others in an arena, and have their lifetime bounded by that of the arena. I think dodrio does this, but it might just use pointers: https://github.com/fitzgen/dodrio#design
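
          Right, the arena version is pleasantly short. A sketch using the typed_arena crate (essentially the pattern from its docs): every node borrows from the arena, so references stay valid exactly as long as the arena lives.

          ```rust
          use std::cell::Cell;
          use typed_arena::Arena;

          // Interior mutability lets nodes point at each other even though
          // they are all shared borrows out of the arena.
          struct Node<'a> {
              other: Cell<Option<&'a Node<'a>>>,
          }

          fn main() {
              let arena = Arena::new();
              let a = arena.alloc(Node { other: Cell::new(None) });
              let b = arena.alloc(Node { other: Cell::new(None) });
              a.other.set(Some(b));
              b.other.set(Some(a)); // a genuine cycle, in safe Rust
          } // the whole graph is freed here, when the arena is dropped
          ```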

        3. 2

          The FAQ section is pretty good. When you offered to answer questions, I wanted to make sure I did my legwork and checked whether my question was already answered, and sure enough it was, in the FAQ section (about why C++) :-) [1]

          My other question, if you may talk about this: what would you envisage to be a ‘layman-practical’ success outcome of the language? In other words, what types of ‘typical business’ applications would then be possible, or easier to build, compared to the current state? Apologies in advance if this type of question is too open-ended/naïve.

          [1] https://github.com/microsoft/verona/blob/master/docs/faq.md

          1. 6

            My other question, if you may talk about this: what would you envisage to be a ‘layman-practical’ success outcome of the language?

            I think it’s worth separating this into ‘success for the language’ and ‘success for the research project’. I’ll answer the second one first: I think there are two ways that the research project can succeed and two in which it can fail. It can fail either because the abstractions that we’re building don’t work (with the new version of the type system, I’m more confident that this won’t happen), or because they do work but don’t provide sufficient advantages to drive adoption. I think the project can succeed either by producing a new language that has a compelling advantage with respect to others, or by discovering that we have a path to retrofitting those ideas onto other existing languages (for example, if Rust 4.0 adopts Verona’s core ideas).

            In other words, what types of ‘typical business’ applications would then be possible, or easier to build, compared to the current state? Apologies in advance if this type of question is too open-ended/naïve.

            I have actually written a blog post about this, just not published it yet. We are describing Verona as an infrastructure language. To my mind, there’s a big gap between ‘systems languages’ and ‘application languages’ that doesn’t really have any good options in it.

            A systems language needs to provide very low-level abstractions and to allow you to break the rules sometimes. You need a systems language to implement a scheduler or a memory manager because these are fundamentally components of any higher-level type system and so can’t depend on such a type system existing. C++ is a great language in this space. Rust seems to be competing quite solidly here too.

            Application languages need to minimise the set of things that are not business logic that the developer needs to think about. For example, in C# (or Java, or Python) you have a global garbage collector because a bit of memory overhead or a bit of nondeterminism in tail latency is far less important than making the programmer think about ownership and object lifetimes for every object. Workloads in this space are often I/O-bound and so sequential perf doesn’t matter that much because even if you made the program take no CPU time at all it wouldn’t get much faster (and the bits that are CPU-performance critical are self-contained components that you can write in a different language and typically have an off-the-shelf building block for).

            Infrastructure languages have a set of requirements that slightly overlaps both of these. They need to be safe because they’re aimed at network-facing workloads that are going to be exposed to the public internet. A single security bug in the kinds of things that they target can cost millions of dollars. At the same time, they have strict resource constraints (latency, memory overhead, downtime) that also cost a lot of money if you violate them. Programmers need to be able to reason about memory consumption and tail latency (which you can’t do easily in a world with global GC) because they have SLAs that cost money if they are not met.

            The main targets at Microsoft (from my perspective, this is not in any way a plan - Verona is a research project and there are no product teams committed to using it) for an infrastructure language like Verona are the services that run on Azure. This would expose services to customer code written in application languages (TypeScript, .NET, whatever) where customers can focus on their business logic.

          2. 1

            I didn’t finish reading all the markdown notes, but I described it to my friends as using a type system that looks a bit like TLA+. I wonder if you’d agree with this characterisation?

            1. 4

              I was greatly excited when I first heard about TLA+ (via Hillel Wayne’s writings). I lost all interest when I realised that it was purely at the level of specification rather than implementation. Don’t get me wrong - that doesn’t detract from the usefulness or viability of TLA+ itself, it’s just a personal feeling. On that note, I would rather imagine that TLA+ would be comparable to theorem provers/solvers like Coq, Lean, Idris, Agda et al.

              1. 2

                Note: one of our interns proved some properties of the Verona runtime’s scheduler with TLA+, but this is the closest that I’ve ever come to it, so I might be entirely wrong here:

                I think the comparison is both fairly accurate and somewhat misleading. Like any kind of concurrent specification language, Verona tries to provide a model of concurrency that is easy (or, at least, possible) to reason about, and so it has an isolation and immutability type system that prevents data races. Unlike a specification language, a lot of the Verona type system relates to memory ownership and allocation. This is said from a position of almost complete ignorance of TLA+, though; perhaps if you can let me know the ways in which you think it’s similar, I can understand better whether this is the case?

                1. 1

                  The intuition I have is that you use “in-degree 1” structures to encapsulate local things with nice ownership properties (like Rust/Pony), and then at the top you have the owners interacting, and you use a TLA+-style strategy to try all the different interleavings and discover any hidden coordination in your logic… So I perceived this TLA+-ish testing strategy as the main novelty. BTW, I like your framing of an “infrastructure language”.

              2. 1

                FYI, the links to sample code are all broken — on the Explore page, under Concurrent Ownership.

                1. 1

                  Thanks. The example code was all written against the interpreter, which was based on an older version of the language, so has all been moved to the deprecated directory. I’ll make a note to update the links and add some disclaimers.

              3. 3

                Yes, this is one project that I’m keeping a close eye on. As an aside, it’s interesting that MS Research is actually fostering some very interesting developments in the field of languages and compilers, what with Lean, Koka, Verona (and many more that I haven’t explored myself yet). Impressive!