Threads for brocooks

  1. 5

    In general, we recommend regularly auditing your dependencies, and only depending on crates whose author you trust.

    Or… Use something like cap-std to reduce ambient authority like access to the network.

    1. 8

      My understanding is that linguistic-level sandboxing is not really possible. Capability abstraction doesn’t improve security unless capabilities are actually enforced at runtime, by the runtime.

      To give two examples:

      • cap-std doesn’t help you ensure that deps are safe. Nothing prevents a dep from, eg, using inline assembly to make a write syscall directly.
      • deno doesn’t allow disk or network access by default. If you don’t pass --allow-net, no dependency will be able to touch the network. At the same time, there are no linguistic abstractions to express capabilities.

      Is there a canonical blog post explaining that you can’t generally add security to “allow-all” runtime by layering abstraction on top (as folks would most likely find a hole somewhere), and that instead security should start with adding unforgeable capabilities at the runtime level? It seems to be a very common misconception, cap-std is suggested as a fix in many similar threads.

      1. 2

        Sandboxing is certainly possible, with some caveats.

        You don’t need any runtime enforcement: unforgeable capabilities (in the sense of object capabilities) can be created with, for example, a private constructor. With a (package/module) private constructor, only your own package can hand out capabilities, and no one else is allowed to create them.
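
        In Go terms (a sketch with hypothetical names, not from the thread), the idea looks like this; in a real design the capability type would live in its own package, so the unexported field is what makes it unforgeable elsewhere:

        ```go
        package main

        import "fmt"

        // FileCap is a capability for a directory subtree. The unexported field
        // means code outside this (conceptual) package cannot construct one:
        // the only way to get a FileCap is to be handed one.
        type FileCap struct {
        	root string
        }

        // NewFileCap is the single, auditable place where capabilities are minted.
        func NewFileCap(root string) FileCap { return FileCap{root: root} }

        // readConfig can only run if its caller possesses a FileCap.
        func readConfig(c FileCap) string {
        	return c.root + "/config.toml"
        }

        func main() {
        	c := NewFileCap("/srv/app")
        	fmt.Println(readConfig(c))
        }
        ```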

        cap-std doesn’t help you ensure that deps are safe.

        That is true, in the sense that no dependency is forced to use cap-std itself. But, if we assumed for a second that cap-std was the rust standard library, then all dependencies would need to go through it to do anything useful.

        Nothing prevents a dep from, eg, using inline assembly to make a write syscall directly.

        This can also be prevented by making inline assembly impossible to use without possessing a capability. You can do the same for FFI: all FFI function invocations have to take an FFI capability. With regards to Rust-specific unsafe blocks, you can either do the same (capabilities) or compiler-level checks: no dependencies of mine can use unsafe blocks unless I grant them explicit permission (through a compiler flag, for example).

        Is there a canonical blog post explaining that you can’t generally add security to “allow-all” runtime by layering abstraction on top […] and that instead security should start with adding unforgeable capabilities at the runtime level?

        I would go the other way, and recommend Capability Myths Demolished, which shows that object capabilities are enough to enforce proper security and that they can support revocation.

        1. 4

          With a (package/module) private constructor, only your own package can hand out capabilities, and no one else is allowed to create them.

          This doesn’t generally work out in practice: linguistic abstractions of privacy are not usually sufficiently enforced by the runtime. In Java/JavaScript you can often use reflection to get at the things you are not supposed to get. In Rust, you can just cast a number to a function pointer and call that.
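
          Go has the same problem, for what it’s worth. A sketch (my own, not from the thread) of using unsafe to walk straight past the language’s privacy rules:

          ```go
          package main

          import (
          	"fmt"
          	"reflect"
          	"unsafe"
          )

          // token stands in for a capability that only this package should mint.
          type token struct{ secret string }

          func main() {
          	t := token{secret: "cap"}
          	// Plain reflection refuses to touch unexported fields...
          	v := reflect.ValueOf(&t).Elem().Field(0)
          	fmt.Println(v.CanSet())
          	// ...but unsafe.Pointer bypasses the privacy rules entirely.
          	p := (*string)(unsafe.Pointer(v.UnsafeAddr()))
          	*p = "forged"
          	fmt.Println(t.secret)
          }
          ```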

          I would sum it up as follows: languages protect their abstractions, and good languages make it impossible to accidentally break them. However, practical languages include escape hatches for deliberately circumventing abstractions. In the presence of such escapes, we cannot rely on linguistic abstractions for security. The Java story is a relevant case study:

          Now, if you design a language with water-tight abstractions, this can work, but I’d probably call the result a “runtime” rather than a language. WASM, for example, can implement capabilities in a proper way, and Rust would run on WASM, using cap-std as an API for the runtime. The security properties won’t be in cap-std, they’ll be in WASM.

          This can also be prevented by making inline assembly impossible to use without possesing a capability

          I don’t think this general approach would work for Rust. In Rust, unsafe is the defining feature of the language. Moving along these lines would make Rust more like Java in terms of expressiveness, and probably won’t actually improve security (ie, the same class of exploits from the linked paper would work).

          I would go the other way, and recommend Capability Myths Demolished

          Thanks, going to read that, will report back if I shift my opinions!

          EDIT: it seems that the paper is entirely orthogonal to what I am trying to say. The paper argues that the capability model is better than the ACL model. I agree with that! What I am saying is that you can’t implement the model at the language level. That is, I predict that even if Java used capability objects instead of a security manager, it would have been exploitable in more or less the same way, as exploits breaking ACLs would also break capabilities.

          1. 3

            Go used to have a model where you could prohibit the use of package unsafe and syscall to try to get security. App Engine, for example, used this. But my understanding is that they ended up abandoning it as unworkable.

            1. 2

              Your points are sharp. Note that there was an attempt to make Java capability-safe (Joe-E), and it ended up becoming E because taming Java was too much work. Note also that there was a similar attempt for OCaml (Emily), and it was better at retaining the original language’s behavior, because OCaml is closer than Java to capability-safety.

              ECMAScript is almost capability-safe. There are some useful tools, and there have been attempts to define safe subsets like Secure ECMAScript. But you’re right that, just like with memory-safety, a language that is almost capability-safe is not capability-safe.

              While you’re free to think of languages like E as runtimes, I would think of E as a language and individual implementations like E-on-Java or E-on-CL as runtimes.

        2. 2

          Why not both?

        1. 2

          There was this entry a few years ago about using π as a storage device. It does contain all data that could exist. Can’t find the link right now.

          Here’s a fun idea for a weekend project: a JavaScript library implementing a tag that displays a fixed-size image based on a π offset passed as a parameter. A script like the one in this post could be used to find the image. Perhaps even allowing for a certain error level for performance reasons.

          1. 7

            Can’t find the link right now.

            Is it by any chance?

            1. 6

              This is pure gold. Don’t forget to look at the issue tracker. There are valuable gems like this one:

              GDPR compliance #56

              1. 1

                Thinking out loud. We may be able to get to GDPR compliance by rounding Pi down to 3. Yes, we’ll lose some data but we’re really down to the wire here.

                LOL, this is absolutely what my company did when GDPR hit.

              2. 3

                How long till the authorities find child pornography (or Critical Race Theory) in pifs and get π shut down, or possibly censored to an innocuous value like 22/7? That could break everything that depends on π, like for example wheels…

                1. 1

                  Yes, thank you.

              1. 3

                Fun fact: this is also the implementation that Erlang uses for the queues in the standard library.

                1. 22

                  I really like how Go’s time.Duration handles this, letting us do time.Sleep(5 * time.Second)

                  1. 14

                    Unfortunately Duration is not a type, but an alias for an integer, so this mistake compiles:
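
                    The example was elided; presumably it was the classic version of this mistake, which compiles because the untyped constant 5 converts straight to time.Duration:

                    ```go
                    package main

                    import (
                    	"fmt"
                    	"time"
                    )

                    func main() {
                    	start := time.Now()
                    	// Looks like "sleep five seconds", but actually sleeps 5 nanoseconds:
                    	// the untyped constant 5 silently becomes time.Duration(5).
                    	time.Sleep(5)
                    	fmt.Println(time.Since(start) < time.Millisecond)
                    }
                    ```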

                    1. 10

                      Your point stands about the mistake, but just to clarify the terminology: Duration is a defined type, not an alias (alias is specific Go terminology which means it behaves exactly the same as that type). The reason this mistake compiles is because literals in Go are “untyped constants” and are automatically converted to the defined type. However, these will fail, because s and t take on the concrete type int when they’re defined:

                      var s int
                      s = 5
                      t := 5
                      time.Sleep(s * time.Second) // compile error: mismatched types int and time.Duration
                      time.Sleep(t * time.Second) // compile error: t has concrete type int
                      1. 2

                        My understanding is that Duration*Duration is also allowed?

                    2. 8

                      The thing I dislike the most about Go’s Duration type is that you can’t multiply an int by a Duration:

                      To convert an integer number of units to a Duration, multiply:

                      seconds := 10
                      fmt.Print(time.Duration(seconds)*time.Second) // prints 10s

                      In the example above, the intent is somewhat clear due to the seconds variable name, but if you just want to have something like this:

                      some_multiplier := ...
                      delay := some_multiplier * (1 * time.Second) // won't work

                      You have to convert some_multiplier to time.Duration, which doesn’t make sense!
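
                      A compiling version of the hypothetical snippet above, with the conversion the comment is complaining about:

                      ```go
                      package main

                      import (
                      	"fmt"
                      	"time"
                      )

                      func main() {
                      	someMultiplier := 3 // an int, e.g. computed at runtime
                      	// delay := someMultiplier * (1 * time.Second) // compile error
                      	delay := time.Duration(someMultiplier) * time.Second
                      	fmt.Println(delay)
                      }
                      ```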

                      1. 2

                        Can’t you just overload the * operator?

                        1. 3

                          Go doesn’t allow for operator overloading, which I’m kind of okay with. It tends to add complexity for (what I personally consider to be) little benefit.

                          1. 3

                            On the other hand, this is the kind of case that really makes the argument for operator overloading. Having to use a bunch of alternate specific-to-the-type function implementations to do common operations gets tiresome pretty quickly.

                            1. 2

                              So Go has different operators for adding floats and for adding integers? I have seen that in some languages, but it’s nevertheless quite unusual. OTOH, I can see that it reduces complexity.

                              1. 1

                                Go has built-in overloads for operators, but user code can’t make new ones.

                                It’s similar to maps (especially pre-1.18) that are generic, but user code is unable to make another type like map.

                            2. 2

                              Go doesn’t have operator overloading

                            3. 1

                              I agree it is annoying. Would a ‘fix’ be to alter/improve the type inference (assuming that some_multiplier is only used for this purpose in the function) so that it prefers time.Duration to int for the type inferred in the assignment?

                              I’m not sure it would be an incompatible change - I think it would just make some incorrect programs correct. Even if it was incompatible, maybe go2?

                              1. 1

                                While I do think Go could do more to work with underlying types instead of declared types (time.Duration is really just an int64 under the hood, a count of nanoseconds), it does make sense to me to get types to align if I want to do arithmetic with them.

                                My perfect world would be if you have an arbitrary variable with the same underlying type as another type, you could have them interact without typecasting. So

                                var multiplier int64 = 10
                                delay := multiplier * time.Second

                                would be valid Go. I get why this will probably never happen, but it would be nice.

                                1. 3

                                  That’s how C-based languages have worked, and it’s a disaster. No one can keep track of the conversion rules. See for a better solution.

                                2. 1

                                  If you define some_multiplier as a constant, it will be an untyped number, so it will work as a multiplier. Constants are deliberately exempted from the Go type system.
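
                                  For example (a sketch):

                                  ```go
                                  package main

                                  import (
                                  	"fmt"
                                  	"time"
                                  )

                                  // As a constant, someMultiplier is an untyped number, so the
                                  // multiplication below type-checks without any conversion.
                                  const someMultiplier = 10

                                  func main() {
                                  	delay := someMultiplier * time.Second
                                  	fmt.Println(delay)
                                  }
                                  ```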

                              1. 15

                                The author spends a lot of time acting as though it is weird that changing the types of a function’s parameters (or similar) results in new code not working, which is very confusing to me. They also act like this is C-specific, which it isn’t. Any language that wishes to share across a build boundary needs to ensure that both sides of the boundary agree on what the interface is.

                                The fact that many modern languages eschew ABI compatibility in favor of every application having copies of every library it depends on, and by default don’t support the basics of being a system library, remains bizarre to me.

                                It’s also unnecessary: Objective-C has ABI stable ivars (though it makes ivar access more expensive than a pointer offset of course) so the implementation and data stored in parent objects doesn’t break subclasses compiled against a different SDK. Swift supports ABI compatibility even for generic types through witness tables.

                                Any language that wants to claim to be a “systems” language needs to provide a reasonable ABI stability story if it actually wants to be used for system libraries, etc.

                                1. 2

                                  It’s weird, because it’s about the C (and C++) ecosystem, which is very ossified. You can’t change ABI of anything without breaking someone, and users of a 40-year-old language really dislike anything changing and breaking.

                                  1. 12

                                    users of a 40-year-old language really dislike anything changing and breaking.

                                    I hate to do this to you, but C turns 50 this year. I still think that the 70s were 30 years ago so this is kind of a shock for me too.

                                  2. 1

                                    There’s also this post by the same author which shows other examples that don’t involve changing the type signature:

                                    • In C++, changing a copy constructor from being default to being user provided can break ABI

                                    • Attempting to standardize GCC’s C nested functions would break ABI because it would need to modify how function pointers are represented (one bit would be different)

                                    • In C++, adding new virtual functions will modify the layout of the vtable, breaking the ABI for any consumer that derived from the class, added their own functions, and thus assumed the location of their functions inside the vtable.

                                    1. 2

                                      For the first one, what’s the case where it breaks ABI (obviously it breaks API :D )?

                                      For the second one: womp womp :D More seriously how would GCC’s nested functions need to change for standardization?

                                      The last one is standard knowledge and well understood by any C++ dev who makes libraries. IIRC it’s why Qt objects are all exposed via wrapper structs that make static calls that internally forward to the polymorphic implementation. I’ve often wished that there was an attribute in clang/gcc where you could make a polymorphic object actually be implemented that way automatically. It wouldn’t hide the existence of a vtable pointer, or the general field+inheritance layout problem, but it would certainly reduce some footguns.

                                      ObjC gets ABI stability by having a literal hash table from the string name of the method to the method impl, and by accessing ivars via an indirect load, so the entire object is not fragile ABI.

                                      Swift is also ABI stable (because it’s a systems language, and you need your platform ABI to be stable) through mechanisms similar to ObjC’s when crossing library boundaries, and it even has ABI stability for generic objects through witness tables for protocols - logically, Swift protocols are equivalent to Haskell’s type classes rather than, say, ObjC’s compile-time-only enforcement.

                                      1. 2

                                        I forgot my favorite piece of terrible ABI horror: MSVC++ changes the size of member function pointers depending on declaration ordering :D

                                        1. 1

                                          For the first one, what’s the case where it breaks ABI

                                          I’m far from an expert, but it seems that some compilers will pass a type with a default copy constructor using only two registers, while for the user-provided constructor case, they will invoke a bit-copy operation (on a different register), so what you get is two libraries that expect their arguments to live in different registers.

                                          For the second one: womp womp :D More seriously how would GCC’s nested functions need to change for standardization?

                                          GCC nested functions are implemented by a trampoline jump with an executable stack. Not all operating systems support executable stacks (as an executable stack is commonly an entry point for other exploits): for example, OpenBSD (so GCC had to patch their approach).

                                          A standard implementation of nested functions would need to use a different approach. The problem is that this would modify the ABI of callers: depending on how nested functions are implemented, they would need to be called in different ways. If you re-use GCC’s syntax, this means that depending on your compiler version, your nested functions would use different implementations, and hence, cause an ABI break.

                                          The last one is standard knowledge and well understood by any C++ dev who makes libraries. IIRC it’s why QT objects are all exposed via wrapper structs that make static calls that internally forward to the polymorphic implementation.

                                          According to OP, it seems that this was forgotten when implementing std::pmr::memory_resource, as it exposes a design with virtual functions:

                                          1. 1

                                            Oh, changing from no constructor to a constructor makes the type non-POD; clang at least has an attribute to deal with that:

                                    1. 4

                                      For an alternate approach to create a static website for an album or mixtape that can be hosted anywhere, see thebaer/cdr (demo).

                                      1. 3

                                        This looks great, but it is missing the ability to sell music (or merch or anything), so not quite equivalent. I love a good mixtape, though.

                                        A similar static site approach that does seem to provide a way to send money (through Liberapay):

                                        1. 2

                                          This is great! It reminds me a lot of opentape, itself a spin-off of the “original”, now defunct, muxtape.

                                        1. 1

                                          @Verdagon, it seems to me that your final Vale example is very similar to what Pony would look like if it allowed you to await a message instead of relying on callbacks. If you make input a val reference then you can safely reference it from any other actor.

                                          One question about read-only regions, though. Does the Vale compiler warn you if a reference is both used as read-only and as a regular reference at the same time?

                                            1. 2

                                              It’s good to see secret handshake being used for other things outside of SSB! For testing your implementation, in case you haven’t seen it, there’s shs1-test, which does conformance tests in a language-agnostic way.

                                              1. 1

                                                Thanks! I had not seen that. I’ll add a test using it when I get a chance.

                                              1. 2

                                                Very nice! My first real Telegram bot was also a Clojure REPL. Make sure you’re sandboxing incoming commands, though.

                                                1. 3

                                                  If you’re looking for the repository, it seems to be here: (took a bit to find the link!)

                                                  1. 1

                                                    The Cockroach team itself has mentioned in the past that 500ms is a very conservative estimate. I’m sure if you’re running on AWS (that offers an atomic clock-backed NTP endpoint) you could probably lower that and opt for sleeping after every commit, like Spanner does.

                                                    Although the above is only correct if you’re able to reliably detect if your node clock skew is within acceptable intervals. I know Cockroach will shut down a node if its clock goes outside the uncertainty interval, but that might happen after you’ve already committed a transaction with a reversed timestamp relationship.

                                                    This post also highlights one other difference between Spanner and CRDB: Spanner explicitly distinguishes externally consistent transactions from regular transactions. If CRDB had gone that route, the causal reversal anomaly would only be exposed to those special transactions, and to solve the problem in the article you would just issue a regular read-only transaction that took locks.

                                                    1. 11

                                                      It’s all about ecosystem. Tbh I think it’s a shame that rust is bigger than pony, not least because the explicit ownership (“capabilities”) system is much clearer than rust’s implicit model.

                                                      1. 1

                                                        And also much, much harder to understand (well, at least given the resources on the ponylang website). I think this is the death knell for Pony. Sylvan Clebsch is already working on Verona at MS (and I don’t think he had been contributing much to Pony over the past few years in any case).

                                                        1. 6

                                                          at least given the resources on the ponylang website

                                                          Pony contributor here. We’re always looking for ways to improve the documentation. I admit that reference capabilities are one of the hardest things to grasp when learning the language, and if you have any ideas on how you’d prefer to see this covered, or have any suggestions on what the tutorial should cover, feel free to reach us on Zulip. We’re always happy to chat!

                                                          1. 2

                                                            Maybe Pony’s capability model was too complex. Compare and contrast with E, which has only one form of reference (the object reference) and doesn’t require callers to care about ownership. (Not to imply that E has been widely adopted.)

                                                            1. 1

                                                              Tbh I think the capability model isn’t hard to understand, it’s just not familiar. I found it easy to pick up for my one toy project, and preferred the explicit syntax to rust’s implicit one.

                                                              That said, I don’t think the website does a wonderful job of selling pony with a few glances. I have no concrete suggestions to share right now.

                                                            2. 1

                                                              In my limited experience with both, I found pony much easier to understand. And yes if the two main contributors are working on Verona, I suspect language development will slow.

                                                          1. 3

                                                            I don’t see any mention of delivery semantics in the linked repo. @houqp, perhaps you can expand on this? Right now, the linked repo seems like a kafka connector, but there’s not much in there from what I can see.

                                                            1. 2

                                                              Yes, it’s a native Kafka Delta Lake connector. In short, exactly-once message delivery is accomplished by batching the message and the Kafka offset into a single Delta Table commit, so they are written to the table atomically. If a message has been written to a Delta Table, trying to write the same message again will result in transaction conflicts, because kafka-delta-ingest only allows the offset to go forward.
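
                                                              A toy sketch of that idea (hypothetical types, not kafka-delta-ingest’s actual code): the payload and the offset are committed in one atomic step, and a commit that does not move the offset forward is rejected, so a replayed message cannot be written twice:

                                                              ```go
                                                              package main

                                                              import "fmt"

                                                              type table struct {
                                                              	rows       []string
                                                              	lastOffset int64
                                                              }

                                                              // commit writes the row and the offset together; in the real system
                                                              // both land in one Delta Table commit, so they are atomic.
                                                              func (t *table) commit(offset int64, row string) error {
                                                              	if offset <= t.lastOffset {
                                                              		return fmt.Errorf("conflict: offset %d already committed", offset)
                                                              	}
                                                              	t.rows = append(t.rows, row)
                                                              	t.lastOffset = offset
                                                              	return nil
                                                              }

                                                              func main() {
                                                              	t := &table{lastOffset: -1}
                                                              	fmt.Println(t.commit(0, "a")) // first delivery succeeds
                                                              	fmt.Println(t.commit(0, "a")) // replay is rejected
                                                              }
                                                              ```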

                                                            1. 5

                                                              In this case, passing a pointer into a function is still passing by value in the strictest sense, but it’s actually the pointer’s value itself that is being copied, not the thing that the pointer refers to

                                                              Is this not how every language works when handling pointers?

                                                              1. 6

                                                                I think so, but I believe the main point of the article is how there are certain types, like slices, maps, and channels, that feel as if you’re passing them by value, even though they behave like references.

                                                                This sometimes trips people up (like me), for example:
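
                                                                  The example was elided; a typical version of the trip-up (my reconstruction) is that in-place writes through a slice are visible to the caller, while append is not:

                                                                  ```go
                                                                  package main

                                                                  import "fmt"

                                                                  func zeroFirst(s []int) {
                                                                  	s[0] = 0 // visible to the caller: only the slice header was copied
                                                                  }

                                                                  func grow(s []int) {
                                                                  	s = append(s, 9) // NOT visible: append may reallocate, and the header copy is local
                                                                  }

                                                                  func main() {
                                                                  	xs := []int{1, 2, 3}
                                                                  	zeroFirst(xs)
                                                                  	grow(xs)
                                                                  	fmt.Println(xs)
                                                                  }
                                                                  ```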

                                                                1. 8

                                                                  I learned recently that go vet will give that warning on copying any struct which implements Lock() and Unlock() methods.


                                                                   package main

                                                                   type t struct{}

                                                                   func (*t) Lock()   {}
                                                                   func (*t) Unlock() {}

                                                                   func main() { a := t{}; _ = a }

                                                                  will trigger the vet warning.

                                                                2. 2

                                                                  C++ references are distinct! For example in Python (and I imagine in Go as well) you can’t pass a reference to an integer. You can’t do

                                                                   x = 3
                                                                   f(x)  # no f can be written such that, after this call,
                                                                   # x now equals 4

                                                                  (in CPython you can by doing some stack trace trickery)

                                                                  This is kind of linked back to a fundamental C++-ism of built-ins being “the same as” other data types. Whereas Python/Java/Go/lots of other stuff have this distinction between builtins and aggregate types.

                                                                  Rust, being the true successor to C++ in so many ways, carries over references nicely tho…

                                                                   fn f(x: &mut i32) {
                                                                       *x += 1;
                                                                   }

                                                                   fn main() {
                                                                       let mut x: i32 = 4;
                                                                       println!("x={}", x);
                                                                       f(&mut x);
                                                                       println!("x={}", x);
                                                                   }

                                                                  And beyond “changing the contents of an integer”, ultimately being able to change the variable itself (even replacing it with an entirely different object) is only really an option in systems languages.

                                                                  1. 1

                                                                    The only exceptions I can think of are:

                                                                    • perl - lists are copied
                                                                    • tcl - lists are strings
                                                                     • C - structs are actually copied without explicit pointers
                                                                    • languages with explicit value types like C#
                                                                  1. 5

                                                                     It’s important to note that this protocol is specialized for data center use, i.e. situations with highly reliable, low-latency links, and specifically for RPC scenarios, i.e. frequent short message exchanges. An overview of the differences between Homa and TCP:

                                                                    1. No explicit acknowledgements. Instead, GRANT packets are occasionally sent to acknowledge packets.
                                                                     2. SRPT (Shortest Remaining Processing Time) based prioritization, where higher-priority queues are kept specifically for packets that need quick responses.
                                                                    3. Connectionless.
                                                                    4. At most once semantics, so the receiver does not need to make RPC methods idempotent.
                                                                    1. 2

                                                                      At most once semantics, so the receiver does not need to make RPC methods idempotent.

                                                                      It’s the other way around. Homa is at-least-once:

                                                                      Homa allows RPCs to be executed more than once: in the normal case, an RPC is executed one or more times; after an error, it could have been executed any number of times (including zero). […] Duplicates must be filtered at a level above the transport layer.


                                                                      Homa assumes that higher level software will either tolerate redundant executions of RPCs or filter them out.

                                                                      1. 1

                                                                        Huh the article says:

                                                                        At-most-once delivery semantics: Other RPC protocols are designed to ensure at-most once-delivery of a complete message, but Homa targets at-least-once semantics. This means that Homa can possibly re-execute RPC requests if there are failures in the network (and an RPC ends up being retried). While at-least-once semantics put a greater burden on the receiving system (which might have to make RPCs idempotent), relaxing the messaging semantics allows Homa receivers to adapt to failures that happen in a data center environment. As an example, Homa receivers can discard state if an RPC becomes inactive, which might happen if a client exceeds a deadline and retries.

                                                                        I haven’t read through the paper yet, but if so, then this summary is incorrect.

                                                                        1. 2

                                                                          I think the bit about “at-most-once delivery semantics” in the article should be shortened to “delivery semantics”; a quick read of the section titles might lead one to think that Homa has at-most-once semantics. If you read the actual body, though:

                                                                          Homa targets at-least-once semantics. This means that Homa can possibly re-execute RPC requests […] While at-least-once semantics put a greater burden on the receiving system (which might have to make RPCs idempotent) relaxing the messaging semantics allows Homa receivers to adapt to failures

                                                                    1. 6

                                                                      This is the paper linked from the other post, so I feel like the two should be merged.

                                                                      Edit: linked post is now deleted, but there’s an overview of the paper here:

                                                                      1. 2

                                                                        I deleted the other story. No need for 2.

                                                                      1. 5

                                                                        The current title (UK right to repair law excludes smartphones, fridges, etc) is wrong and, I think, editorialized, which would be against the submission guidelines. Unless I missed something, fridges don’t seem to be excluded.

                                                                        1. 5

                                                                          Indeed, the title of the submission has now been changed to reflect that. This is what is explicitly excluded, according to the article:

                                                                          Cookers, hobs, tumble dryers, microwaves or tech such as laptops or smartphones aren’t covered.

                                                                          1. 2

                                                                            According to the linked article, refrigerators are explicitly included.

                                                                            Edit: all users with a certain karma(?) can suggest title edits. If enough of them do, it is updated automatically.

                                                                          1. 8

                                                                            Although the original post was tongue-in-cheek, cap-std would disallow things like totally-safe-transmute (discussed at the time), since the caller would need a root capability to access /proc/self/mem (no more sneaking filesystem calls inside libraries!).

                                                                            Having the entire standard library work with capabilities would be a great thing. Pony (and Monte too, I think) uses capabilities extensively in the standard library, which allows users to trust third party packages: if a package doesn’t use FFI (the compiler can check this) and doesn’t request the appropriate capabilities, it won’t be able to do much: no printing to the screen, using the filesystem, or connecting to the network.
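                                                                            The cap-std style can be sketched in plain Rust: filesystem access flows through an explicit directory handle, so a library function that is never handed one simply has nothing to reach the disk with. This is a toy illustration of the pattern, not cap-std’s actual API:

```rust
use std::path::{Component, Path, PathBuf};

// A toy directory capability: holding one grants access only beneath `root`.
struct Dir {
    root: PathBuf,
}

impl Dir {
    // Only the program's entry point should mint this from ambient authority.
    fn open_ambient(root: impl Into<PathBuf>) -> Dir {
        Dir { root: root.into() }
    }

    // All path resolution goes through the handle, confined to its subtree.
    fn resolve(&self, rel: &str) -> Option<PathBuf> {
        let p = Path::new(rel);
        if p.is_absolute() || p.components().any(|c| matches!(c, Component::ParentDir)) {
            return None; // reject attempts to escape the capability's subtree
        }
        Some(self.root.join(p))
    }
}

// A "library" function: its signature advertises exactly what it can reach.
fn config_path(dir: &Dir) -> Option<PathBuf> {
    dir.resolve("config.toml")
}

fn main() {
    let dir = Dir::open_ambient("/srv/app");
    assert_eq!(config_path(&dir), Some(PathBuf::from("/srv/app/config.toml")));
    assert_eq!(dir.resolve("../etc/passwd"), None); // escape attempt rejected
    println!("ok");
}
```

                                                                            The point is the signature discipline: `config_path` can only touch what its `Dir` argument reaches, whereas a `std::fs`-style function taking a bare path can reach anywhere the process can.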

                                                                            1. 3

                                                                              Yes. While Rust cannot be capability-safe (as explored in a sibling thread), this sort of change to a library is very welcome, because it prevents many common sorts of bugs from even being possible for programmers to write. This is the process of taming, and a tamed standard library is a great idea for languages which cannot guarantee capability-safety. The Monte conversation about /proc/self/mem still exists, but is only a partial compromise of security, since filesystem access is privileged by default.

                                                                              Pony and Monte are capability-safe; they treat every object reference as a capability. Pony uses compile-time guarantees to make modules safe, while Monte uses runtime auditors to prove that modules are correct. The main effect of this, compared to Rust, is to remove the need for a tamed standard library. Instead, Pony and Monte tame the underlying operating system API directly. This is a more monolithic approach, but it removes the possibility of unsafe artifacts in standard-library code.

                                                                              1. 3

                                                                                Yeah, I reckon capabilities would have helped with the security issues surrounding procedural macros too. I hope more new languages take heed of this, it’s a nice approach!

                                                                                1. 4

                                                                                  It can’t help with proc macros, unless you run the macros in a (Rust-agnostic) process-wide sandbox like WASI. Rust is not a sandbox/VM language, and has no way to enforce it itself.

                                                                                    In Rust, the programmer is always on the trusted side. Rust safety features are for protecting programs from malicious external inputs and/or programmer mistakes when the programmer is cooperating. They’re ill-suited for protecting programs from intentionally malicious parts of the same program.

                                                                                  1. 2

                                                                                    We might trust the compiler while compiling proc macros, though, yes? And the compiler could prevent calling functions that use ambient authority (along with unsafe rust). That would provide capability security, no?

                                                                                    1. 5

                                                                                      No, we can’t trust the compiler. It hasn’t been designed to be a security barrier. It also sits on top of LLVM and C linkers that also historically assumed that the programmer is trusted and in full control.

                                                                                      Rust will allow the programmer to break and bypass the language’s rules. There are obvious officially-sanctioned holes, like #[no_mangle] (this works in Rust too) and linker options. There are less obvious holes, like hash collisions of TypeId, and a few known soundness bugs. Since security within the compiler was never a concern (these are bugs on the same side of the airtight hatchway), there are likely many more.

                                                                                      It’s like a difference between a “Do Not Enter” sign and a vault. Both keep people out, but one is for stopping cooperating people, and the other is against determined attackers. It’s not easy to upgrade a “Do Not Enter” sign to be a vault.

                                                                                      1. 3

                                                                                        You can disagree with the premise of trusting the compiler, but I think the argument is still valid. If the compiler can be trusted, then we could have capability security for proc macros.

                                                                                        Whether to trust the compiler is a risk that some might accept, others would not.

                                                                                        1. 3

                                                                                          But this makes the situation entirely hypothetical. If Rust was a different language, with different features, and a different compiler implementation, then you could indeed trust that not-Rust compiler.

                                                                                          The Rust language as it exists today has many features that intentionally bypass compiler’s protections if the programmer wishes so.

                                                                                          1. 1

                                                                                            Between “do not enter” signs and vaults, a lot of business gets done with doors, even with a known risk that the locks can be picked.

                                                                                            You seem to argue that there is no such thing as safe rust or that there are no norms for denying unsafe rust.

                                                                                            1. 3

                                                                                              Rust’s safety is already often misunderstood. fs::remove_dir_all("/") is safe by Rust’s definition. I really don’t want to give people an idea that you could ban a couple of features and make Rust have safety properties of JavaScript in a browser. Rust has an entirely different threat model. The “safe” subset of Rust is not a complete language, and it’s closer to being a linter for undefined behavior than a security barrier.

                                                                                              Security promises in computing are often binary. What does it help if a proc macro can’t access the filesystem through std::fs, but can by making a syscall directly? It’s a few lines of code extra for the attacker, and a false sense of security for users.

                                                                                              1. 1

                                                                                                Ok, let’s talk binary security properties. Object Capability security consists of:

                                                                                                1. Memory safety
                                                                                                2. Encapsulation
                                                                                                3. No powerful globals

                                                                                                There are plenty of formal proofs of the security properties that follow… patterns for achieving cooperation without vulnerability. See peer-reviewed articles in

                                                                                                This cap-std work aims to address #3. For example, with compiler support to deny ambient authority, it addresses std::fs.

                                                                                                Safe rust, especially run on wasm, is memory safe much like JS, yes? i.e. safe modulo bugs. Making a syscall requires using asm, which is not in safe rust.

                                                                                                Rust’s encapsulation is at the module level rather than object level, but it’s there.
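                                                                                                That module-level encapsulation is enough to make a capability unforgeable in safe Rust: a type with a private field can only be constructed inside its own module, so the only way to get one is to be handed one. A minimal sketch (hypothetical names; a real design would also restrict who may call the mint function):

```rust
mod net {
    // Private field => cannot be constructed outside this module in safe Rust.
    pub struct NetCap {
        _priv: (),
    }

    // The one sanctioned mint, intended to be called once at program start.
    pub fn root_capability() -> NetCap {
        NetCap { _priv: () }
    }

    // Privileged operations demand the token in their signature.
    pub fn connect(_cap: &NetCap, host: &str) -> String {
        format!("connected to {host}") // stand-in for a real socket
    }
}

// A dependency never handed a NetCap cannot call net::connect:
// writing `net::NetCap { _priv: () }` outside the module fails to compile.
fn main() {
    let cap = net::root_capability();
    assert_eq!(net::connect(&cap, "example.com"), "connected to example.com");
    println!("ok");
}
```

                                                                                                As with cap-std, none of this stops unsafe code or a raw syscall; it only works under the assumption, discussed above, that unsafe Rust and ambient-authority APIs are denied.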

                                                                                                While this cap-std and tools to deny ambient authority are not as mature as std, I do want to give people an idea that this is a good approach to building scalable secure systems.

                                                                                                I grant that the relevant threat model isn’t emphasized around rust the way it is around JS, but I don’t see why rust would have to be a different language to shift this emphasis.

                                                                                                I see plenty of work on formalizing safe rust. Safety problems seem to be considered serious bugs, not intentional design decisions.

                                                                                                1. 1

                                                                                                  In the presence of malicious code, Rust on WASM is exactly as safe as C on WASM. All of the safety is thanks to the WASM VM, not due to anything that Rust does.

                                                                                                  Safe Rust formalizations assume the programmer won’t try to exploit bugs in the compiler, and the Rust compiler has exploitable bugs. For example, symbol mangling uses a hash with a 1 in 2⁶⁴ chance of colliding (and collisions are cheaper to find via a birthday attack). I haven’t heard of anyone running into this by accident, but a determined attacker could easily compute a collision that makes their cap-approved innocent_foo() actually link to the code of evil_bar() and bypass whatever formally-proven safety the compiler tried to have.

                                                                              1. 2

                                                                                This is a very nice talk. Even as someone who’s not terribly familiar with C++, I could appreciate the comparisons and footguns that Rust will prevent.