1. -4

  1. 20

    Can’t think of a more elitist way of referring to someone else than “no-name”s.

    Hope that I’m the one not seeing the sarcasm.

    1. 6

      I agree, I’m not sure this kind of content should have a place on lobste.rs.

      I am happy if content that is posted here is critical, but not by putting others down. If you find a better way of doing something than $organization, we want to hear about it, even if it uses an unusual approach. Sanitizers and Valgrind are incredibly useful tools and everyone should know about them. But calling others “no-names” is quite uncalled for.

      That being said, I also disagree with this author’s idea: running sudo (a SUID binary) with sanitizers baked in or under Valgrind is a terrible idea. It massively increases your attack surface, because now you’re exposed to bugs in these tools too.

      These tools are useful for development and finding bugs, but I would not use them in production. For my C projects (for example passgen), I run my unit tests with them and they have proved to be indispensable.
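
      A minimal sketch of that workflow, with a hypothetical file name and a deliberately planted bug: built with the sanitizer flags in the comment, the test aborts with a diagnostic at the first bad access instead of silently corrupting memory.

      ```c
      /* test_buf.c (hypothetical); build and run with sanitizers enabled:
       *   cc -g -fsanitize=address,undefined test_buf.c -o test_buf && ./test_buf
       */
      #include <stdlib.h>
      #include <string.h>

      int main(void) {
          char *buf = malloc(8);
          memcpy(buf, "12345678", 8); /* fine: writes exactly 8 bytes */
          buf[8] = '\0';              /* heap-buffer-overflow: ASan aborts here */
          free(buf);
          return 0;
      }
      ```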

      1. 6

        This submitter has, as I write this comment:

        • Submitted two stories, both from his own blog.
        • Posted eight comments, all of which are on the stories that he submitted.

        Both submissions have been low quality and this one has started off being offensive as well. I think this definitely meets the bar for spam and I’ve flagged the story as such.

      2. 4

        Sanitizers are not a security boundary…

        1. 3

          A setuid executable linked with a sanitizer is a root privilege escalation vulnerability: the sanitizer runtime trusts attacker-controlled environment variables (for example ASAN_OPTIONS with log_path), so the invoking user can make a root-privileged process write to files of their choosing.

          Also, the “no-names” are ISRG, the people behind Let’s Encrypt. (If you are a non-native speaker you might not know that “no-name” is an insult implying that someone is insignificant, that they have failed to earn a (good) reputation, to “make a name for themselves”.)

          (Even so, in my opinion sudo is a lost cause, it is solving the wrong problem in the wrong way, and usually a much simpler tool can do the job it is used for with much less risk.)

          1. 1

            I do not really understand what you mean. Why is the Rust borrow checker a security boundary?

            1. 3

              Sanitizers at best make bad behavior nominally obvious: hopefully, most of the time, by exhaustion, or by induction/deduction in some cases. Systems with type- and CFG-level knowledge can make bad behavior impossible by construction.

              1. 3

                Sanitizers are probabilistic bug detection tools. They are not designed to be robust in the presence of an active adversary. The Rust borrow checker is a type system property. It is (modulo soundness bugs) a deterministic predicate over some correctness properties of the program, which will hold irrespective of input (modulo bugs in unsafe snippets and FFI).

                1. 1

                  Sanitizers are probabilistic bug detection tools

                  GWP-ASan maybe, but not ASan.

                  The Rust borrow checker is a type system property.

                  I understand that.

                  But I really do not understand why adding bounds checks at every memory access, as ASan does, is not equivalent to some deterministic predicate about the memory safety of the running program.

                  ASan assures us that there will not be memory errors, because before any memory error actually happens we will get an abort().

                  1. 3

                    GWP-ASan maybe, but not ASan.

                    Yes it is and I have no idea why you would think otherwise. ASan has no model of pointer provenance and so, at best, it deterministically catches linear overflows. It probabilistically catches other kinds of error (non-linear overflows, pointer injection, use after free).

                    ASan assures us that there will not be memory errors, because before any memory error actually happens we will get an abort().

                    This is absolutely untrue. If I write a+b, where a is a pointer and b is an unchecked integer that comes from outside of the program (a very common kind of memory-safety bug that leads to vulnerabilities), then ASan will catch it if a+b lands in one of its guard regions. This will happen if the computed address points just after the end of an object, into unmapped memory, or hits a guard between two other allocations. This is great as a bug finding tool because it’s far more likely that fuzzing will trigger such a fault than that fuzzing will trigger out-of-bounds errors that are sufficiently large that they will hit an unmapped page. It is not a security tool, because an attacker who can craft the value of b can easily find a displacement that doesn’t fault but does cause the kind of memory corruption that can be used to elevate to arbitrary code execution.
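
                    A minimal sketch of that a + b pattern (all names are hypothetical, and the pointer subtraction below is itself UB that merely stands in for a displacement the attacker computes): a small overflow hits a redzone and aborts, while a crafted displacement lands inside another live allocation and ASan stays silent.

                    ```c
                    #include <stdlib.h>

                    /* The a + b pattern: b is an unchecked value from outside the program. */
                    static void write_at(char *a, size_t b, char v) {
                        a[b] = v; /* no bounds check */
                    }

                    int main(void) {
                        char *a      = malloc(64); /* the object a is supposed to point into */
                        char *victim = malloc(64); /* some other live allocation */

                        /* write_at(a, 64, 'X');  // small overflow: hits the redzone, ASan aborts */

                        /* A crafted b that skips the redzone and lands in the victim object:
                         * the write touches valid, unpoisoned memory, so ASan reports nothing,
                         * yet the program's state is now corrupted. */
                        write_at(a, (size_t)(victim - a), 'X');

                        free(victim);
                        free(a);
                        return 0;
                    }
                    ```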

                    In contrast, any such thing in a safe language carries the bounds with it. A Rust (or JavaScript, or Go, or whatever) array / slice has a base and a bounds and any indexing is checked against that range and guarantees that it will raise an error if the array index is out of bounds.
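
                    For contrast, a minimal C sketch of the base-plus-bounds (“fat pointer”) representation such languages use; slice and slice_get are hypothetical names, not any particular runtime’s API:

                    ```c
                    #include <stdbool.h>
                    #include <stddef.h>

                    /* A slice carries its bounds with it, so every access can be checked. */
                    typedef struct {
                        char  *base;
                        size_t len;
                    } slice;

                    /* Checked indexing: no displacement can escape the carried bound. */
                    static bool slice_get(slice s, size_t i, char *out) {
                        if (i >= s.len)
                            return false; /* the check the safe language inserts for you */
                        *out = s.base[i];
                        return true;
                    }
                    ```

                    In Rust the equivalent is a slice index panicking when it is out of range; the point is that the bound travels with the pointer.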

                    CHERI platforms also carry bounds and so get the same dynamic checking for C/C++ and our CHERIoT platform also provides deterministic temporal safety, so Rust has no significant confidentiality or integrity benefits there (though it can have significant availability benefits from preventing many of these bugs at compile time, rather than catching them at run time).

                    1. 1

                      an attacker who can craft the value of b can easily find a displacement that doesn’t fault but does cause the kind of memory corruption that can be used to elevate to arbitrary code execution.

                      Ok, now I understand.

                      CHERI platforms also carry bounds and so get the same dynamic checking for C/C++ and our CHERIoT platform also provides deterministic temporal safety, so Rust has no significant confidentiality or integrity benefits there (though it can have significant availability benefits from preventing many of these bugs at compile time, rather than catching them at run time).

                      I also mention such systems in my text, where I suggest compiling C for some tagged-memory architecture and interpreting the resulting code.

                      1. 3

                        I also mention such systems in my text, where I suggest compiling C for some tagged-memory architecture and interpreting the resulting code.

                        Now you’ve got the whole of QEMU or similar in your TCB. These emulators are a lot less well tested than real hardware and, as with sanitisers, are not intended to be security boundaries. QEMU is not a sandbox (QEMU + KVM uses KVM to enforce the sandbox boundary). Now an attacker has a choice of bugs in QEMU and bugs in your program to attack. This probably isn’t so bad for sudo, because most of the ways of breaking memory safety with CHERI QEMU that I know of require multiple threads but it’s still a huge pile of code that was never written with security in mind added to your TCB.

                        You also mention web assembly, which has no memory safety guarantees within the sandbox (and, in fact, generally has less security than native code compiled with the default set of mitigations).

                        Since joining lobste.rs, you have:

                        • Submitted two stories, both to your own blog.
                        • Posted eight comments, all to the threads about your own blog.

                        Both of the posts that you’ve submitted have been half-baked ideas that you’ve clearly not thought through properly, and the second one starts by denigrating people who have thought through some of the problems and come up with a different solution from yours. I’d suggest that you:

                        • Learn a bit more about the topics that you’re writing about before you write more.
                        • Write down more of each idea than a ‘hey, wouldn’t it be cool if’ style blog post. Properly think through your ideas; don’t just throw out a few sentences and call it done.
                        • Actually participate in the lobste.rs community rather than treating it as a promotion channel for your blog.
                        1. 1

                          Now you’ve got the whole of QEMU or similar in your TCB. These emulators are a lot less well tested than real hardware and, as with sanitisers, are not intended to be security boundaries.

                          Yes, I also mention this in my text.

                          Actually participate in the lobste.rs community rather than treating it as a promotion channel for your blog.

                          Yes, it is perfectly clear to me :)

            2. 3

              The purpose of rewriting existing tools in a memory-safe language such as Rust or Go is to mechanically and provably eliminate entire categories of programmer error that have plagued low-level software from the beginning.

              The reason low-level software has historically been written in C is that most memory-safe languages assume the presence of a garbage collector and/or virtual machine. Rust is the first serious attempt in a very long time to design a new language that can compile to standalone machine code and also has a type system strong enough to prove memory-safety of an entire compilation unit. It’s not that C is great, it’s just that alternatives were worse until now.

              Writing a tool in a language that isn’t expressive enough to encode the programmer’s intent (C), then trying to hack a solution together with memory-management hooks and third-party tracing tools is foolish. Stop trying to add layers of complexity and hacks to compensate for C’s ancient design and missing features.

              1. 1

                The reason low-level software has historically been written in C is that most memory-safe languages assume the presence of a garbage collector and/or virtual machine.

                That isn’t really a problem for something like sudo. Aside from the thing that does the final exec, the whole thing could be written in interpreted, GC’d Lua or JavaScript (with something like QuickJS or Duktape) and eliminate large classes of vulnerability.

                1. 2

                  su and sudo can’t be written in an interpreted language because that would require setting the interpreter setuid, which is equivalent to giving every local user root.

                  Writing them in a GC’d language is technically possible, but most such languages have not been designed to be resistant to hostile environments. An example that comes to mind immediately is GHC’s -rtsopts, which used to be enabled by default. It was known at the time that this made GHC-compiled Haskell unsafe for setuid tools, but the defaults didn’t get changed until someone realized that argv could also be populated by CGI gateways[0].

                  So instead of having to audit your entire GC / VM / binpacker / whatever for behaviors that are unsafe for setuid, it’s better to write in a language that has as thin a runtime as possible. C and Rust work well in that role.

                  [0] https://web.archive.org/web/20100425032754/http://www.amateurtopologist.com/#post-821

                  1. 2

                    su and sudo can’t be written in an interpreted language because that would require setting the interpreter setuid, which is equivalent to giving every local user root.

                    Only if they have the ability to inject scripts. Embedding Lua or Duktape in a program adds a few hundred KiB to the binary size and would let 99% of it be written in a well-tested, type-safe environment.
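
                    A minimal sketch of that embedding approach, assuming Duktape; the policy string and the allowed() function are hypothetical. The decision logic runs inside the GC’d interpreter, and only the final exec remains in C:

                    ```c
                    #include <unistd.h>
                    #include "duktape.h" /* single-file embeddable JavaScript engine */

                    /* Hypothetical policy; in practice it would be loaded from a root-owned file. */
                    static const char *POLICY_JS =
                        "function allowed(user, cmd) { return user === 'admin'; }";

                    int main(int argc, char **argv) {
                        if (argc < 3)
                            return 1;

                        duk_context *ctx = duk_create_heap_default();
                        if (!ctx)
                            return 1;

                        /* Evaluate the policy, then call allowed(user, cmd) from C. */
                        if (duk_peval_string(ctx, POLICY_JS) != 0)
                            goto fail;
                        duk_pop(ctx); /* discard the eval result */

                        duk_get_global_string(ctx, "allowed");
                        duk_push_string(ctx, argv[1]); /* user */
                        duk_push_string(ctx, argv[2]); /* command */
                        if (duk_pcall(ctx, 2) != 0)
                            goto fail;

                        int ok = duk_get_boolean(ctx, -1); /* decision computed in memory-safe JS */
                        duk_destroy_heap(ctx);

                        if (ok)
                            execlp(argv[2], argv[2], (char *)NULL); /* the one part that stays in C */
                        return 1;

                    fail:
                        duk_destroy_heap(ctx);
                        return 1;
                    }
                    ```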

                    1. 1

                      su and sudo can’t be written in an interpreted language because that would require setting the interpreter setuid, which is equivalent to giving every local user root.

                      In my text I explicitly talk about “embeddable” interpreters, whose attack surface can be limited to the one binary in which the interpreter is embedded.

                  2. 1

                    and also has a type system strong enough to prove memory-safety of an entire compilation unit.

                    Why is the borrow checker better than bounds checks on every memory access (also enforced by the compiler), from a practical standpoint?

                  3. 2

                    This is painfully pompous. I didn’t make it further than could be seen without dismissing the “sign in with google” begbox.

                    Edit: I read the rest and I stand by the above comment, other than the part where I lied about not finishing the article.

                    1. 2

                      Sanitizers are developer tools; they’re not designed with security in mind. I wouldn’t be surprised if some sanitizer one day introduced a feature that aids developers in finding bugs but would be classified as a security hole if it shipped in a “release” build of an application. I think using developer tools for security is a terrible idea.