1. 39

  2. 6

    I’m sort of conflicted about this….

    I love memory safety…. but I can’t remember when I last created a bug in this class.

    I have found and fixed a few, but they were a really, really small percentage of the available bugs.

    Yes, I would love my languages to guard against that.

    No, it’s not at the top of my list.

    What screams to me about that example is the violation of encapsulation.

    If y really belonged in that struct, it was part of that struct’s invariant.

    If so, what the hell is it doing escaping, as a naked reference, away from the interface that enforces that invariant?

    i.e., the lifecycle bug is the least of the bugs being enabled by that interface.

    1. 4

      I’m not convinced Rust’s lifetimes and borrow checking are the best solution to the problem, but I do think that this is a real problem. It just doesn’t happen as directly as it does in this toy example.

      Consider a type that provides two properly encapsulated operations: query and update. The query operation is surprisingly complex and operates on various subdata structures. One multi-level-nested sub-structure accidentally returns a pointer to some owned memory, and things work out just fine…. for a while. In fact, the bug may live in production for a decade and nobody notices, because the caller didn’t retain the data between calls to update. … And then somebody comes along and caches the result of a query. Whoops. Or, even worse, somebody comes along and adds a mutex and a second thread making query/update calls. Now you have memory corruption, and it can be pretty hard to track down.

      Having said that, I’d rather solve this problem with garbage collection and immutable data (or variants on that theme) than with single ownership and borrowing of mutable data.

      1. 3

        Garbage collection is a perfectly fine solution to the problem of memory management. More generally, garbage collection can be used when the following two conditions are met:

        • The resource is plentiful enough, so you don’t risk running out of it, even if you don’t relinquish it as eagerly as possible.
        • The physical identity of the resource doesn’t matter: Do you care whether malloc returns this or that memory block? Usually, no, so long as the block is big enough to store the data you actually care about.

        Unfortunately, these conditions aren’t always met:

        • Manipulating different files will cause your program to observably do different things.
        • Manipulating different GUI objects will cause your program to observably do different things.
        • etc.

        When the physical identity of the resource matters, ownership is a fundamental abstraction.

        1. 3

          When the physical identity of the resource matters, ownership is a fundamental abstraction.

          For example, imagine a database API that lets you use the connection with a closure, to build and execute a transaction:

          db.transaction(|txn| {
            txn.select(...stuff...);
            txn.insert(...stuff...);
            txn.delete(...stuff...);
          });
          

          Wouldn’t it be nice to know that you can’t do this?

          let mut escape = None;
          {
            let mut escaper = |txn| { escape = Some(txn); };
            db.transaction(escaper);
          }
          if let Some(txn) = escape {
            txn.launch_missiles();  // Now we are fiddling with a committed transaction.
          }
          

          (I apologize in advance for any errors in syntax or declaration.)
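
          To make it concrete, here’s one way such an API might be sketched (the Db and Txn types are hypothetical; the enforcement comes from the higher-ranked lifetime in the closure bound):

          struct Db;
          struct Txn<'a> {
              _db: &'a mut Db, // the transaction exclusively borrows the database
          }

          impl Db {
              // `for<'t>` forces the closure to accept a Txn of *any* lifetime,
              // so there is nowhere to stash a Txn that outlives this call.
              fn transaction<F>(&mut self, f: F)
              where
                  F: for<'t> FnOnce(Txn<'t>),
              {
                  let txn = Txn { _db: self };
                  f(txn);
                  // the transaction commits here, after the closure returns
              }
          }

          // With this bound, the `escaper` closure above fails to compile:
          // `escape` would need one concrete lifetime, but the closure must
          // accept every lifetime 't.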

          1. 3

            Yep, that’s exactly what I’m thinking about!

          2. 2

            When the physical identity of the resource matters, ownership is a fundamental abstraction.

            I agree with this statement 100%. I’ll go further and say that I’m very excited about the research and engineering that has gone into crystallizing this abstraction in Rust. However, that doesn’t mean it needs to be such a pervasive abstraction.

            Maybe I can make my perspective on this clear by analogy: I have nothing against Objects, but I am against a style of programming oriented by objects. I believe that encapsulation and messaging is a fundamental abstraction, but that doesn’t mean I should structure my entire system using that abstraction.

            Similarly, I have nothing against modeling ownership and borrowing, but I am against a style of programming oriented by ownership and borrowing.

            Then there’s the subject of enforcement. I’m not convinced that static enforcement of lifetimes and borrows is the only way, or a strictly better way, to enforce this abstraction. I’d love to see the dynamic languages community take a crack at this abstraction as well.

            1. 2

              Similarly, I have nothing against modeling ownership and borrowing, but I am against a style of programming oriented by ownership and borrowing.

              I’ve expressed elsewhere mild dissatisfaction with the fact that everything is owned in Rust. Owned files? Yay! Owned strings? Meh, I don’t need in-place mutation that often. Owned complicated but non-concurrent data structures? You’re seriously getting in the way.

              Then there’s the subject of enforcement. I’m not convinced that static enforcement of lifetimes and borrows is the only way, or a strictly better way, to enforce this abstraction. I’d love to see the dynamic languages community take a crack at this abstraction as well.

              Well, what else could work? Do you have any examples of abstractions that can be successfully enforced dynamically? Note I’m not saying “in a dynamic language”, since dynamic languages can have static enforcement facilities too, e.g., Typed Racket is built on top of a dynamic language, but it’s built using a very static macro system.

              1. 2

                Owned strings? Meh, I don’t need in-place mutation that often.

                If I want to share a string, I’ll stick it in an Rc and be done with it! You instantly go from a mutable string buffer to a reference-counted immutable string.
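
                Concretely (a minimal sketch; Rc<str> is the reference-counted immutable string in question):

                use std::rc::Rc;

                fn main() {
                    let buf = String::from("mutable buffer"); // owned, growable string
                    let shared: Rc<str> = Rc::from(buf);      // now refcounted and immutable
                    let alias = Rc::clone(&shared);           // sharing is a cheap refcount bump
                    assert_eq!(&*alias, "mutable buffer");
                }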

                Owned complicated but non-concurrent data structures? You’re seriously getting in the way.

                See, I’m of a totally opposite view here. I love the fact that my language statically prevents bugs like iterator invalidation and use-after-free.

                1. 1

                  For non-concurrent data structures, in the vast majority of cases, purely functional data structures are simpler to understand and implement, and they perform well enough. (At least for my use cases. I don’t write web browsers or real-time video games, admittedly.)
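
                  For illustration, the structure-sharing idea in Rust terms (a hand-rolled persistent list; a sketch, not production code):

                  use std::rc::Rc;

                  // A persistent list: "updates" allocate new cells that share
                  // the old tail instead of mutating anything in place.
                  enum List<T> {
                      Nil,
                      Cons(T, Rc<List<T>>),
                  }

                  fn push<T>(head: T, tail: &Rc<List<T>>) -> Rc<List<T>> {
                      Rc::new(List::Cons(head, Rc::clone(tail)))
                  }

                  fn main() {
                      let empty: Rc<List<i32>> = Rc::new(List::Nil);
                      let base = push(1, &empty);
                      let a = push(2, &base); // `a` and `b` both share `base`
                      let b = push(3, &base); // safely: nothing is ever mutated
                      let _ = (a, b);
                  }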

                  However, purely functional programs can’t express concurrency (not to be confused with parallelism, which purely functional programming handles excellently), and that’s precisely where I’d like to have a Rust-like ownership system.

                  1. 1

                    Many of Rust’s defaults could be built around purely functional data structures, with Deref to smooth over some things. Because "..." is always a &str and &str is efficient, one finds oneself dealing with &str vs String all the time, which for many applications is too tricky. C++ programmers are used to this, or so I understand – distinguishing between char * and std::string – but for most Ruby/Python programmers it is odd, and it’s also hard to just pick String and stick with it (or just pick &str and stick with it).

                    It’s hard to know what “easy” is, basically.
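
                    The friction in question, in miniature (greet is just a stand-in function):

                    fn greet(name: &str) {
                        println!("hello, {}", name);
                    }

                    fn main() {
                        let literal = "world";             // "..." is always a &str
                        let owned = String::from("world"); // owned, heap-allocated
                        greet(literal);
                        greet(&owned); // deref coercion: &String -> &str
                    }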

                    1. 1

                      I don’t want to have to think about who owns or who has borrowed a purely functional data structure. Unlike files, sockets, etc., which are objects, created at some point in time and destroyed at a later one, bigints, strings, syntax trees, etc. are values, which conceptually exist forever (or, even better, independently of time), no matter how ephemeral (or, rather, time-bound) their representation in computer memory might be.

                      1. 2

                        Swift might end up doing the right thing here. The default is to treat things as copyable values, and then statically elide the copy and heap allocation if it can. However, you can mark function parameters as inout to communicate that there is purposeful sharing. There is some stuff in some slides about changing this keyword to borrowed and adding a notion of linear typing.

                        In Rust, the approach taken to simplify this stuff is to have a lot of Deref instances, so that functions which take references to stack-allocated values (&...) can take heap-allocated stuff, too. This turns out not to make it simple “enough”, though, because if you want to return something you have to remember to put it on the heap, as opposed to just declaring it and hoping for stack promotion if it makes sense. That might be right, from the standpoint of what Rust is trying to do.
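
                        The “remember to put it on the heap” point, concretely (a minimal sketch):

                        // Returning an owned value is fine: it is moved to the caller.
                        fn make_owned() -> String {
                            String::from("local data")
                        }

                        // Returning a reference to a local is rejected outright:
                        // fn make_ref<'a>() -> &'a str {
                        //     let s = String::from("local data");
                        //     &s // error[E0515]: cannot return reference to local variable `s`
                        // }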

                2. 1

                  Do you have any examples of abstractions that can be successfully enforced dynamically?

                  I believe that every static abstraction has a dynamic counterpart. In all likelihood, each dual has already been discovered, but the relationship may not yet have been formalized. It’s also worth noting that, traditionally, dynamic enforcement is reserved for “heavier” use cases, since it’s typically slower by default.

                  For example, some dynamic abstractions don’t yet have good static counterparts, or have inspired static counterparts that are not yet popularized:

                  • “Object capabilities”: see Mark S. Miller’s thesis.
                  • Contract checking for properties that can’t be checked until you have runtime input.
                  • The kinds of metaprogramming you can do with fexprs and symbolic evaluation are now partially available in languages like Scala with “call by name” or C# with “expression trees”.

                  And then other abstractions are already somewhere in the middle and being stretched in both directions. For example, ML style modularity is deeply related to dynamic linking. Now you can get dynamic modularity for effects with “algebraic effect handlers”, but at the same time you can do even more dynamic “linking” of your entire kernel with something like Docker.

                  ^^^ I had a bunch more examples for each of the categories above, but my comment got eaten by my browser and I need to get back to work…. sorry.

                  Well, what else could work?

                  One strawman idea: You could get an exception if you try to make a re-entrant call into a “borrowed” object/actor/service.
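
                  For what it’s worth, Rust itself already ships one dynamic counterpart: RefCell moves the borrow check to runtime and panics on violation, which is roughly the flavor of enforcement I have in mind:

                  use std::cell::RefCell;

                  fn main() {
                      let cell = RefCell::new(vec![1, 2, 3]);
                      let reader = cell.borrow(); // shared borrow, tracked at runtime
                      // cell.borrow_mut();       // would panic: already borrowed
                      println!("{:?}", *reader);
                  }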

                  1. 1

                    Unix enforces per-process memory safety with the memory manager hardware.

                    Only because it has to: memory-unsafe programs already exist in the wild, and people will run them no matter what, so it would be totally crazy not to perform any runtime checks. But, wouldn’t it be nice if all programs were statically guaranteed to be memory-safe?

                    “Object capabilities”: see Mark S. Miller’s thesis.

                    Isn’t ownership essentially a statically enforced object capability? Or am I missing something?

                    For example, ML style modularity is deeply related to dynamic linking.

                    This is false. ML modules are traditionally second-class (not first-class values) and meant to be statically linked before you deploy your program. Some ML dialects (OCaml, Alice ML, etc.) allow you to package modules as first-class values, but that feature is very much an afterthought, not something you want to use if you can avoid it.

                    Now you can get dynamic modularity for effects with “algebraic effect handlers”, but at the same time you can do even more dynamic “linking” of your entire kernel with something like Docker.

                    Algebraic effects aren’t about modularity, they’re about expressiveness: without them, you have to manually inject your code into some delimited continuation monad, or reify those pesky continuations as zippers or some such.

                    One strawman idea: You could get an exception if you try to make a re-entrant call in to a “borrowed” object/actor/service.

                    Okay, an exception is thrown… and then what? How do you determine exactly what part of your program is wrong and has to be fixed?

                    1. 1

                      wouldn’t it be nice if all programs were statically guaranteed to be memory-safe

                      First: If memory-safety is enforced dynamically, then the program is statically guaranteed to be memory safe. It’s just not statically guaranteed to avoid attempting and failing a memory-unsafe operation.

                      Second, unless you 1) heavily constrain the language (such as banning raw pointers) or 2) provide an excessively powerful logic (sequent calculus, etc.), you’re not going to be able to statically guarantee both memory safety and freedom from runtime errors.

                      This is my fundamental argument: The engineering tradeoff is closer to a mix of static and dynamic enforcement. Statically prove or dynamically validate as much as you reasonably can given your tools, skill, budget, risk tolerance, etc.

                      Isn’t ownership essentially a statically enforced object capability? Or am I missing something?

                      Yes, you’re missing exactly the class of dynamic security properties. For example, consider permission revocation.

                      meant to be statically linked before you deploy your program

                      My statement is absolutely true: These things are related by the static/dynamic dual I’m talking about. Of course the ML designers, being static language advocates, push for static linking. However, as you even mention, things like first class modules show that you can satisfy module signatures dynamically. Consider how, in C/Unix, you can use things like LD_PRELOAD to substitute your malloc/free implementation. That’s effectively dynamic modularity.

                      Algebraic effects aren’t about modularity, they’re about expressiveness

                      I disagree that this isn’t modularity. Algebraic effects enable temporal modularity. Anytime you have time, you have something which is naturally dynamic. That’s why it’s so common for static languages to offer dynamically typed open unions for exceptions.
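
                      Rust’s Box<dyn Error> is a concrete instance of such an open union (a sketch; the file name is made up):

                      use std::error::Error;

                      // Any error type flows through one dynamically typed channel;
                      // io::Error and ParseIntError both coerce via `?`.
                      fn read_port() -> Result<u16, Box<dyn Error>> {
                          let text = std::fs::read_to_string("port.txt")?;
                          Ok(text.trim().parse()?)
                      }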

                      How do you determine exactly what part of your program is wrong and has to be fixed?

                      You get a stack trace, just like when an application divides by zero in production…

                      1. 2

                        First: If memory-safety is enforced dynamically, then the program is statically guaranteed to be memory safe. It’s just not statically guaranteed to avoid attempting and failing a memory-unsafe operation.

                        For me, that notion of safety is completely useless. We consider C unsafe because C programs can attempt memory-unsafe operations, even if the operating system will detect such attempts in some cases, and kill the process as a result.

                        Second, unless you 1) heavily constrain the language (such as banning raw pointers) or 2) provide an excessively powerful logic (sequent calculus, etc.), you’re not going to be able to statically guarantee both memory safety and freedom from runtime errors.

                        I don’t believe in “mechanically verify everything”. IMO we all have vastly underestimated the possibility of using our own brains (enhanced with pencil and paper) for proving things about programs. Sadly, neither the mechanical verification gang nor the programmer gang cares…

                        This is my fundamental argument: The engineering tradeoff is closer to a mix of static and dynamic enforcement.

                        … and neither do you, apparently. :-(

                        For example, consider permission revocation.

                        Then you never had permission in the first place. You only had “maybe permission”. (In the everyday sense of the word “maybe”, not Haskell’s.)

                        However, as you even mention, things like first class modules show that you can satisfy module signatures dynamically.

                        Alice ML can certainly do that. OCaml, I’m not so sure. OCaml has modules packaged as first-class values, but all checks are static as far as I can tell.

                        Algebraic effects enable temporal modularity.

                        As I said before, you can get the same “temporal modularity” (whatever that might mean) by zipper-ifying all your data structure traversal code. In fact, I’m doing just that in my own SML code, because SML has no built-in support for algebraic effects.

                        You get a stack trace, just like when an application divides by zero in production…

                        Alas, concurrency errors are much harder to trace back to their ultimate causes than division by zero errors (in a non-concurrent setting).

                        1. 1

                          that notion of safety is completely useless

                          It’s all about boundaries. C/Unix programs are memory safe. C functions are not.

                          This is a spatial boundary: “program” or “function”, but there’s also a possibility for temporal boundaries. Consider mprotect.

                          and neither do you, apparently

                          I’m not sure how you got that impression from my comment. I think you and I are on the same page on that point.

                          You only had “maybe permission”

                          You’ve lifted a dynamic property “you may or may not have permission” into a static property “you have dynamic permission”. This is the heart of the “unityped” argument, but I view it as “half a dozen of one, six of the other”. Given that, what do you do now when you are denied permission? If the programmer considered that case, the code does case analysis and custom handling logic. If the programmer didn’t consider that case: raise an exception!

                          concurrency errors are much harder to trace back to their ultimate causes than division by zero errors

                          I’m not sure that this is true in general. Where did the zero come from?

                          In general, stack traces are an atrociously impoverished debugging tool for origin tracking. However, many common runtimes even suck at stack traces! Ideally, there would be a blend of static and dynamic metadata associated with the 0, so that when the error does occur you can quickly find the cause. Static metadata would include callers, dataflow analysis, etc. Dynamic metadata could include something like passport stamps: Where has this value been on its journey?

                          1. 1

                            I’m not sure how you got that impression from my comment. I think you and I are on the same page on that point.

                            How did you get that impression? I want ahead-of-time verification under all circumstances - just not always automated, because automatic verification tools have limitations, and we shouldn’t be bound by them.

                            Ideally, there would be a blend of static and dynamic metadata associated with the 0, so that when the error does occur you can quickly find the cause.

                            To find the cause, you need to think in terms of predicates on the program state (preconditions, postconditions, invariants). I don’t think the usual kinds of “metadata” attached to program data are particularly helpful for recovering such predicates.

                            1. 1

                              I want ahead-of-time verification under all circumstances - just not always automated, because automatic verification tools have limitations, and we shouldn’t be bound by them.

                              I think we’re on the same page because I also value ahead-of-time verification, only where time=production, not necessarily time=run. You’re saying you want ahead-of-time analysis to include external analysis, such as by-hand proofs, etc. I agree with that, but I’d also like to include imperfect analysis, such as dynamic (e.g., code coverage) and stochastic methods (e.g., fuzz testing, QuickCheck, etc.). Blended portfolio is my strategy.

                              I don’t think the usual kinds of “metadata” attached to program data are particularly helpful

                              You’re right, the usual metadata isn’t particularly helpful. The best systems I’ve ever had to work with add unusual metadata, usually in the form of dynamic trace information. For example, tagging an HTTP request with a set of symbols recording which middleware functions touched it. Or an “undo stack” based on persistent data. Or simply some counters.

                              1. 1

                                I think we’re on the same page because I also value ahead-of-time verification, only where time=production, not necessarily time=run.

                                I can agree with the notion of a debug build that performs checks that are in principle unnecessary (because they are meant to always succeed, even if in practice they will sometimes fail). But if you can’t confidently strip those checks out of the release build, then you haven’t really verified your program ahead of time.

                                I’d also like to include imperfect analysis, such as dynamic (e.g., code coverage) and stochastic methods (e.g., fuzz testing, QuickCheck, etc.).

                                Sure, use whatever works, as long as the release build is guaranteed to be free of both errors and unnecessary checks. Tests are useful, not so much as an acceptance criterion, but rather as a means to quickly reject wrong programs.

                                That being said, for the specific case of ownership enforcement, I don’t think tests are particularly helpful, even for rejecting wrong programs. The amount of runtime work necessary to keep track of who owns what is prohibitively high. For example, if you have an invariant of the form “Foo and Bar are always owned by different threads”, then the only way to check that invariant is to stop the world.

                                I don’t think the usual kinds of “metadata” attached to program data are particularly helpful

                                You’re right, the usual metadata isn’t particularly helpful.

                                You have conveniently left out the most important part of my message: “predicates on the program state”.

                3. 2

                  I’m not convinced that static enforcement of lifetimes and borrows is the only way, or a strictly better way, to enforce this abstraction. I’d love to see the dynamic languages community take a crack at this abstraction as well.

                  Because of its relationship to code generation – the post linked by the OP points out that there is no check around the dereference – it is at least attractive to have static enforcement.

                  Static enforcement seems better to me because you get a check and an error to fix at compile time, instead of having to (try to) exercise all the code paths that might lead to the problem. Why do you think it might not be strictly better?

                  1. 1

                    Why do you think it might not be strictly better?

                    Because it’s always runtime somewhere.

                    If I have a dynamic implementation, I have the option of static analysis. I might have to add some hints.

                    If I have a static implementation, I generally have to rewrite my program in order to get a dynamic version.

                    1. 1

                      I can agree that it’s one of those things where one does not always want static analysis – but I suspect one usually wants it – which is to say it could be opt-out rather than opt-in.

                      This is the situation that arises with Flow vis-à-vis TypeScript. Sure, you can type check your JS with Flow… hope you don’t have too much code lying around…

                      You do see static languages sometimes move in this direction – TypeScript’s any, Swift’s AnyObject and Any – but it does seem that this feature, like any potentially static feature, is usually provided always on or always off.

              2. 1

                Oh I agree, it’s a real problem…. Just not near the top of my list of problems I have found and/or created in real industrial code recently.

                So certainly, any language that fixes this class of problems will be regarded as an improvement by me…

                Borrow checking is in a sense “reference counting on the fingers of 1 thumb”. AKA linear logic.

                So in some senses I am very much for it.

                The D language is exploring somewhat more flexible controls on reachability and lifetimes, which has the potential to be great.

                https://github.com/dlang/DIPs/blob/master/DIPs/DIP1000.md

                My core concern is that leaking references beyond the lifetime of the object is only half the problem.

                The whole point of a class is to enforce the class invariant. If you leak instance variables all over the place, having a reference to an instance variable existing beyond the lifetime of the instance is merely one of many ills that can befall you.

            2. 7

              In Go:

              type Y struct{} // placeholder so the snippet compiles

              type X struct {
                  y Y
              }

              func (x *X) gety() *Y {
                  return &x.y // interior pointer into X
              }
              

              As in C and C++, it’s still possible to return an interior pointer, but unlike in those languages, doing so is guaranteed to be memory safe because of garbage collection. This might mean that your object ends up being punted off to the heap because the *Y outlives the X, but the actual access to the pointer carries no extra cost.

              1. 20

                Fwiw, part of his specification (a “crucial” part) seems to be exactly that the *Y doesn’t outlive the X and that it compiles to mere pointer arithmetic. Not compelling for all use cases, but definitely Rust’s big rallying cry: “zero-cost abstraction!”.
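
                For reference, the Rust counterpart of the Go snippet above looks roughly like this (names mirror the Go version):

                struct Y;
                struct X {
                    y: Y,
                }

                impl X {
                    // The returned &Y is tied to the borrow of `self`: callers that
                    // let it outlive the X are rejected at compile time, and the
                    // access compiles down to plain pointer arithmetic.
                    fn get_y(&self) -> &Y {
                        &self.y
                    }
                }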

                1. 4

                  Well, to be pedantic, it still doesn’t outlive the owning object, but that’s because the owning object is kept alive by the GC for as long as it needs to be, even if the code doesn’t reference it anymore. But yes, I understand what the author is getting at.

                2. 12

                  Literally every language with GC can do this. What makes it novel is doing it without GC.

                  1. 3

                    No, the trick here is that it is an interior pointer. Most GC systems do not support this at all (e.g. Java). In fact, the only other one I can think of that can is the CLR, and that can only be done using unsafe pointers and pinning the allocation to the heap.

                    1. 5

                      I’m surprised Java doesn’t handle that correctly, but I’m no Java expert. In languages like Lisp, ML, etc. it works fine, and it’d be surprising if it didn’t. Those kinds of high-level GC’d languages generally have managed/tagged pointers that they communicate to the GC, using e.g. a pointer map (a technique dating to Algol 68), which should mean handling derived/interior pointers works fine, and it’s pretty much a bug if any legal reference to an object gets collected while the reference is still live, no matter what else it sits inside of. Even the textbook GC implementation given in Andrew Appel’s Modern Compiler Implementation book discusses how to handle derived pointers (in Section 13.7, “Garbage collection: Interface to the compiler”).

                      1. 4

                        I’m certainly not an expert on GC design but my understanding is that the JVM uses the fact that interior pointers are illegal to perform some optimizations. Google came up with http://forum.dlang.org/thread/o5c9td$30ki$3@digitalmars.com which looks like a pretty good and recent discussion of how interior pointers limit the possible optimizations.

                        1. 1

                          Sure, but I think it’s a case of finessing the meaning of “legal reference”. If you make it illegal/impossible to construct an interior pointer, or make it illegal/impossible to hold one without holding a pointer to its container for a superior duration, then you can rightly say that your GC handles all legal references properly, while ignoring interior pointers and having generally less overhead.

                        2. 1

                          I think the “this” in

                          Most GC systems do not support this at all (e.g. Java)

                          is unclear. I believe you are referring to interior pointers, which aren’t a thing at all in Java. I believe mjn thought you were referring to the general pattern of returning a pointer to an instance variable.

                          1. 1

                            Yes, I was talking about interior pointers :)

                    2. 2

                      Let’s not drive people into another Perl vs. Python, or Java vs. C++.

                      1. 2

                        I think it’s a bit disingenuous, because C++ technically can return an internal member. I think the author should be more clear about the fact that some languages can do it, but that Rust is the only one^1 that can do it safely.

                        1: Maybe Ada can do it safely too? I’d need clarification from someone who knows more about Ada than I do

                        1. 20

                          He does.

                          From the post:

                          Crucially, the Rust compiler will verify that all callers of the getter prevent the returned reference from outliving the X object.

                          It doesn’t get much stronger than “crucial” when it comes to explaining a central point.

                        2. 1

                          But if your static analyzer can find the same problem, what’s the advantage?

                          1. 19

                            You know that everyone else in the community is also using the same static analyzer.

                            1. 1

                              But your static analyzer will also work across all the community code you compile into your program.

                              1. 16

                              What do you do when it just fails, though? That’s a lot of vendored code to bugfix and a strong potential to lack the buy-in to do it. If the analyzer is just built into your compiler… that all evaporates.

                                1. 0

                                  So we go from (a) “my new programming language solves a serious programming problem not solved by your old tatty one” to (b) “my new programming language bundles its solution to the problem into the compiler and your old tatty one has the solution in a standard tool”. To me, at least, (a) is a stronger argument than (b).

                                  1. 8

                                    Do you know of a static analysis tool for C and/or C++ that simultaneously meets the following requirements?

                                    • Soundness: All memory-unsafe code is rejected.
                                    • Modularity: Modules can be analyzed and verified in isolation. Never do the implementation details of two or more modules have to be analyzed simultaneously.
                                    • Inference: Things that can reasonably be inferred will be inferred, to keep the annotation burden down to a tolerable level.

                                    That’s what it takes to compete with Rust.

                                    1. 3

                                      Soundness: All memory-unsafe code is rejected.

                                      Doesn’t Rust permit unsafe operation?

                                      Modularity: Modules can be analyzed and verified in isolation. Never do the implementation details of two or more modules have to be analyzed simultaneously.

                                      That’s an impressive feature. Composable properties are a tough problem. For example, if module A requests 55% of the available heap and module B also requests 55%, the system (A;B) will fail even though both modules are “correct” on their own. Same with scheduling and timing, or even mutual exclusion, which is painfully dangerous. Does Rust mutex support solve the compositionality problem? Or am I misunderstanding what you wrote?

                                      1. 1

                                        Doesn’t Rust permit unsafe operation?

                                        Sure, but you have to explicitly request it. One good way to think of it is that Rust is actually two languages: safe Rust and unsafe Rust, and the keyword unsafe is the FFI between them. The relationship between safe and unsafe Rust is similar to that between Typed Racket and Racket: when things “go wrong” (language abstractions are violated), it’s never safe Rust or Typed Racket’s fault.
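
                                        A small illustration of that boundary (a sketch; the function is made up):

                                        // A safe wrapper around an unsafe operation: inside the
                                        // `unsafe` block the programmer, not the compiler, upholds
                                        // the invariant.
                                        fn first_byte(bytes: &[u8]) -> Option<u8> {
                                            if bytes.is_empty() {
                                                None
                                            } else {
                                                // SAFETY: index 0 was just checked to be in bounds.
                                                Some(unsafe { *bytes.get_unchecked(0) })
                                            }
                                        }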

                                        That’s an impressive feature.

                                        It’s a feature of most type systems. For example, when you type check a C source code file, you don’t need access to the implementation of functions defined in other files. Though, of course, you might need to know the interfaces (prototypes) exposed by those functions.

                                        Does Rust mutex support solve the compositionality problem?

                                        It doesn’t. Rust is modest in what it tries to prevent. In particular, safe Rust doesn’t even try to prevent deadlocks. It only prevents issues that can be expressed in terms of “use after free” or “data races”.

                                        1. 1

                                            Ok, so what you mean by “Modules can be analyzed and verified in isolation” is limited to type safety.

                                          1. 2

                                            Compositional verification happens to be a feature of most type systems, but there’s no law of nature preventing you from devising your own non-type-based compositional analyses.

                                            1. 1

                                              Maybe there is a law of nature. Discrete state systems are hard.

                                        2. 1

                                          Doesn’t Rust permit unsafe operation?

                                          This is actually a very good point and one that was raised with regards to Haskell many-a-time.

                                          What I wonder about is how to present some kind of debuggability around that – maybe there can be some kind of breadcrumb that indicates, when a crash occurs, that it happened in an unsafe operation?

                                      2. 6

                                        I agree, but I’m also pretty directly claiming that as far as “solving the problem” goes there’s a big gap between language-integrated solutions and “aftermarket” ones, even if they enjoy wide community adoption. In one case, the problem has actually vanished. In the other, there’s a means to tackle it if you want to spend the energy.

                                        1. 3

                                          A language is not just a spec but also a community and an ecosystem. If we’re comparing between “Rust” and “C with foo analyzer”, questions like “which has more libraries?” and “which is easier to hire for?” have quite different answers to when we’re comparing “Rust” and “C”.

                                  2. 2

                                  What is the state of the art for static analysis of C/C++? You can (I’m fairly sure) find this kind of code and prohibit it entirely, but is it possible to allow it and make guarantees about its safety?

                                    I would have thought not, but I don’t really know that much about static analysis… Is it possible within some constraints? (For constraints that lie somewhere between “don’t ever allow this kind of code” and the constraints that Rust applies via the language.)