
    1. 4

      Title makes it seem like this only applies to the M3, which isn’t accurate.

        1. 2

          Yes, but this was all fixed by September of 2023, so no updated/supported Apple product is known to be vulnerable to these issues anymore.

              1. 3

                 I’m really curious what adversarial security researchers will do with this. One of the reasons that jails in FreeBSD require root privilege to create is that you can race renames in some exciting ways to allow jail escapes if you build nested jails. It looks as if the Linux model would be vulnerable to the same sort of attacks, with an unprivileged user racing updates to allow a root-owned process to escape from a restricted filesystem namespace. I would expect this to lead to a load of local privilege elevation vulnerabilities in Linux in the next few years. I’ve not seen any discussion of this kind of attack from the folks working on this in Linux.
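
                 Roughly, the core of such a race is a time-of-check-to-time-of-use on rename(2): an unprivileged process keeps swapping directories underneath a privileged process that checks a path and then uses it. A minimal, purely illustrative sketch, with made-up paths and nothing jail-specific:

                 #include <stdio.h>   /* rename() */

                 int main(void) {
                     /* Hypothetical layout: "jail/etc" sits inside the restricted tree,
                        "outside/etc" is attacker-controlled. Keep swapping them so a
                        privileged check-then-use of "jail/etc" can see either one. */
                     for (;;) {
                         rename("jail/etc", "jail/etc.swp");
                         rename("outside/etc", "jail/etc");
                         rename("jail/etc.swp", "outside/etc");
                     }
                 }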

                1. 1

                  I think this is exactly what is going to happen here, but I reckon we’ll have to wait and see.

                2. 6

                  ROP can trivially be eradicated at effectively no performance cost by using separate call and data stacks. The only problem is that it is not obvious how to do it without breaking binary compatibility (I came up with an exceedingly convoluted scheme for doing it); but, from my understanding, OpenBSD does not value binary compatibility. I wonder why they do not do this.

                    1. 2

                      I’m not particularly interested in contributing to OpenBSD, nor in building security mitigations for unsafe languages. Technically speaking, this is completely trivial; there is hardly anything to propose, and I’m sure it’s been thought of already.

                      1. 2

                        Technically speaking, can you elaborate? I’m curious.

                        1. 8

                          Maintain two stacks, rather than one: a call stack, and a data stack; devote one general-purpose register to each (but elide any ‘frame pointer’, so register pressure is the same). The call stack comprises a sequence of two-word activation records: a return address, and a pointer into the data stack. The data stack contains all other data that would have been stack-allocated (spilt, passed/returned on stack, etc.) in the traditional calling convention; because this is separate in memory from the return addresses, there is no possibility of corrupting the latter through an overflowing access to the former.
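
                          A rough sketch of what one activation record and the call/return bookkeeping could look like under such a convention; the names and layout are mine, not any real ABI, and in practice the two stack pointers would live in dedicated registers rather than globals:

                          #include <stddef.h>
                          #include <stdint.h>

                          /* Two-word activation record on the call stack. */
                          struct activation {
                              void    *return_address;
                              uint8_t *data_frame;   /* caller's data-stack position */
                          };

                          static struct activation *call_sp;  /* call-stack pointer */
                          static uint8_t           *data_sp;  /* data-stack pointer */

                          /* On call: push a record, carve this frame's locals and
                             spills out of the data stack. */
                          static void enter(void *ret, size_t frame_bytes) {
                              *--call_sp = (struct activation){ ret, data_sp };
                              data_sp -= frame_bytes;
                          }

                          /* On return: pop both stacks. No return address is ever
                             adjacent to overflowable data. */
                          static void *leave(void) {
                              struct activation a = *call_sp++;
                              data_sp = a.data_frame;
                              return a.return_address;
                          }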

                          Indeed, it is possible to avoid storing any pointers to the call stack at all in memory. In the case of, for example, setjmp/longjmp: align the call stack (to, say, 1 MiB); then, store in the jmp_buf only the low bits (say, 20 of them) of the call stack pointer, and when longjmping, restore only those low bits. Consequently, even an attacker with arbitrary read/write primitives would have no good way of finding the call stack.
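
                          Concretely, the save/restore might look something like this; the 1 MiB alignment and 20 low bits are just the example numbers above, and the struct and names are made up:

                          #include <stdint.h>

                          #define CALL_STACK_ALIGN (1u << 20)          /* 1 MiB */
                          #define LOW_MASK         (CALL_STACK_ALIGN - 1)

                          struct my_jmp_buf { uint32_t call_sp_low; /* ...other saved state... */ };

                          /* setjmp: save only the low 20 bits of the call-stack pointer. */
                          static void save(struct my_jmp_buf *b, uintptr_t call_sp) {
                              b->call_sp_low = (uint32_t)(call_sp & LOW_MASK);
                          }

                          /* longjmp: splice those bits into the current (secret) high bits,
                             so the full call-stack address never sits in ordinary memory. */
                          static uintptr_t restore(const struct my_jmp_buf *b,
                                                   uintptr_t cur_call_sp) {
                              return (cur_call_sp & ~(uintptr_t)LOW_MASK) | b->call_sp_low;
                          }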

                          This arrangement also enables much faster stack unwinding, which is desirable independently of any safety concerns. (EDIT: faster only for the purposes of tracing and profiling and the like; nonlocal exits like thrown exceptions will be as hard as ever unless you get rid of callee-saved registers.)

                          1. 7

                            The SafeStack code in LLVM does this (anything address taken goes on the main stack, anything else [including return addresses] goes on the safe stack), but I think you are overstating the security claims. It prevents stack buffer overflows from being turned into arbitrary code execution vulnerabilities. That’s a win, but it doesn’t prevent ROP. Any code that’s able to get a pointer to the safe stack can corrupt it. If the location of the safe stack is predictable (even probabilistically: if you’re attacking a million machines, a 1% chance of success gives you a nice big botnet) then any pointer-injection attack lets you modify values that are on the safe stack. Speculative side channels make it fairly easy to probe the address space. Worse, it’s often easier to do pointer spraying attacks on the safe stack because it is less sparse: there’s a much higher probability that a write there will hit a return address than anywhere else.
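
                            For reference, this is roughly what SafeStack does to a function when you build with Clang’s -fsanitize=safe-stack; the example itself is just a toy:

                            /* Build with: clang -O2 -fsanitize=safe-stack toy.c */
                            #include <string.h>

                            void copy(const char *src) {
                                char buf[16];      /* address-taken: moved to the separate
                                                      (unsafe) stack */
                                strcpy(buf, src);  /* an overflow here cannot reach the
                                                      return address, but it can still
                                                      trash neighbouring unsafe-stack
                                                      objects, and an injected pointer
                                                      can still target the safe stack
                                                      directly */
                            }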

                            Intel’s CET works in a similar way but makes the safe stack non-addressable so only explicit pushes and pops modify values on it. This makes it impossible to just take its address and overwrite it. For ABI compatibility, CET doesn’t replace the stack: it duplicates the return addresses pushed on the main stack and traps on return if the two copies differ.
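
                            A conceptual model of the shadow-stack check in plain C; the real mechanism lives in the hardware’s call/ret, not in code you write, so treat this purely as an illustration:

                            #include <stdint.h>
                            #include <stdlib.h>

                            static uintptr_t stack[64],  *sp  = stack  + 64;  /* normal stack */
                            static uintptr_t shadow[64], *ssp = shadow + 64;  /* shadow stack */

                            static void on_call(uintptr_t ret_addr) {
                                *--sp  = ret_addr;  /* writable by the program (and attacker) */
                                *--ssp = ret_addr;  /* not reachable by ordinary stores */
                            }

                            static uintptr_t on_ret(void) {
                                uintptr_t a = *sp++, b = *ssp++;
                                if (a != b)
                                    abort();        /* control-protection fault on mismatch */
                                return a;
                            }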

                            It’s fairly easy to demonstrate that any CFI scheme is bypassable if you don’t have memory safety. There’s been less effort on these in the last few years because they increase the attacker’s work factor but are eventually bypassed; the bypasses then become automated, and you’re stuck maintaining the complexity of the defence.

                            1. 1

                              I don’t see why the location of the call stack would be at all predictable; we have strong randomness, and if your randomness source is broken, you have far bigger problems. (And I gave the example of a 1 MiB stack because it’s nice and round, but that’s quite generous; you could go for, say, 16 KiB, and have space for 1024 recursive calls (1024 two-word activation records fit in 16 KiB) while still having 33 bits of entropy (47 - 14, for a 16 KiB-aligned region in a 47-bit address space) to protect the stack.)

                              If you can probe the address space, it seems likelier you can find and corrupt a function pointer than the call stack, as the heap will have a regular structure and be a much larger target. Of course you can corrupt the stack (memory safety is no absolute protection either!), but this seems to demote ROP from a significant exploit category whose mitigations are often defeated to basically a curiosity.

                              1. 3

                                “I don’t see why the location of the call stack would be at all predictable; we have strong randomness,”

                                Because it’s moderately large and you typically have only a 47-bit VA space for userspace. 33 bits of entropy is not that much to probe. On platforms that just do ASLR, once you leak one pointer you know the random displacement and so any information disclosure tells you where it is. On platforms that do full ASR, you have more probing to do, but you also have accesses to it in function prologues and epilogues and so finding a speculative execution gadget that lets you leak it is fairly easy. The structure on the shadow stack is predictable and so is a great place to inject your gadgets (it’s basically a ROP gadget machine: it’s a pile of values that go into registers and return addresses, tightly packed, so if you can do one arbitrary write somewhere into it then you can trivially build a Turing-complete weird machine).

                                Defences that depend on secrets were never robust, but since Spectre was disclosed the number of techniques for comprehensively breaking them has exploded. The techniques for breaking things like SafeStack are now nicely automated and available to script kiddies.

                            2. 1

                              hey thanks for this – very clear and concise

                    2. 3

                      I would love to see this talk if anyone has a recording!

                        1. 7

                          Vtubing is a fascinating development that I personally have no appetite for. I’m glad people can represent themselves however they like, though.

                          1. 3

                            I’ve watched enough YouTube videos in all sorts of styles that I don’t mind that part. I do find the voice hard to listen to for prolonged periods though.

                            1. 1

                              It is mostly the voice for me, as well.

                            2. 2

                              I find it interesting as well. If I organized a conference, I would prefer the speaker to show their face, not that Japanese cartoon doll. Maybe it’s a generational thing, but I have a hard time taking anyone in a formal academic setting seriously with such an avatar 😅. That said, the person behind that thing is pretty smart.

                            3. 2

                              Oh wow, I didn’t know you could link to interdimensional YouTube from regular internet.

                              1. 1

                                https://www.youtube.com/watch?v=hDek2cp0dmI

                                It’s fully impossible to take this concept seriously when it’s being explained to you by Sailor Moon.

                              2. 2

                                There’s a link to the video on there, but the video is almost 4 hours long.

                                1. 3

                                  Not to mention fewer videos and more actual details.

                                      1. 1

                                         This has been around for well over 20 years. Is there something I’m missing that makes it more relevant now?

                                          1. 3

                                            Yeah, they’re obsessed with doling out poorly worded opinions about everything OpenBSD does.

                                            1. 1

                                              Why would you call it poorly worded? It seems like a fairly level-headed assessment of OpenBSD’s security features. There’s praise and disapproval given based on the merits of each, comparing to other platforms as well.

                                              1. 2

                                                If your takeaway from reading that website is a fairly level-headed assessment of anything then I’m not sure what to tell you. It’s my personal opinion that it’s anything but that.

                                            2. 2

                                              The person who’s maintaining the website is one of those people who talk the talk but don’t walk the walk, i.e. a blabbermouth.

                                              Qualys, on the other hand, is actively trying to exploit the latest OpenSSH vulnerability and found some valid shortcomings in OpenBSD’s malloc. otto@, who wrote otto-malloc, acknowledged them and is already working on an improved version.

                                            3. 1

                                              There are more updates at the link provided. I know I’m bumping my own thread but this fascinates me.

                                              Direct to another update from Qualys: https://seclists.org/oss-sec/2023/q1/109

                                              And another from OpenBSD malloc: https://marc.info/?l=openbsd-tech&m=167715187212393&w=2

                                              1. 2

                                                  The security advisory makes it sound as if this is not a big problem because it’s in a sandboxed process, which at worst runs in an empty chroot as an unprivileged user. On FreeBSD, I believe it uses Capsicum to have basically no access other than the IPC channel to the parent. On OpenBSD, it uses pledge to achieve similar rights. On Darwin, it uses the sandbox framework to drop all access to the filesystem. There seems to be a seccomp-bpf back end as well; as with all seccomp-bpf things, it’s almost impossible to tell what it actually permits, but I presume that’s the default in Linux distros?
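
                                                  For anyone unfamiliar, the pledge/Capsicum style of privilege dropping looks roughly like this; a toy sketch, not OpenSSH’s actual code:

                                                  #include <unistd.h>

                                                  int main(void) {
                                                      /* ...set up the IPC channel to
                                                         the parent process first... */
                                                  #ifdef __OpenBSD__
                                                      /* keep only basic I/O on already-
                                                         open descriptors; disallowed
                                                         syscalls kill the process */
                                                      if (pledge("stdio", NULL) == -1)
                                                          return 1;
                                                  #endif
                                                      /* on FreeBSD the analogous step is
                                                         cap_enter() from <sys/capsicum.h> */
                                                      /* ...now handle untrusted input... */
                                                      return 0;
                                                  }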

                                                  It’s not clear what harm could come from being able to insert malicious replies into those IPC message channels. Could it permit authorisation with an invalid key, for example?

                                                I guess that the fallback (chroot) model, if it’s shipped anywhere, would allow an attacker to make outbound network connections originating from the ssh server, which would be enough to bypass ssh jump hosts, but I don’t think this is possible with any of the other sandbox implementations.

                                                1. 1

                                                    Even before the sandbox step, you’d still need to (1) know where to jump and (2) be allowed to jump there.

                                                    Fortunately, we’re still a few steps away from talking about potential IPC/network issues.

                                                  1. 1

                                                    I posted this mainly because of the OpenBSD malloc bypass in the update reply. I’m curious to see if they’ll be able to go through the other 2 steps they’ve listed on an unpatched system.

                                                    I’m also curious to see if there will be any adjustments made to OpenBSD’s malloc due to this.

                                                    1. 2

                                                      I’m not really surprised by the malloc hardening bypass. None of these things are robust; they just increase the work factor for the attacker. We have a couple of mitigations in snmalloc for this kind of thing:

                                                      • We make the free lists a queue, rather than a stack, so that there’s a longer time before reuse (and so reuse is less deterministic).
                                                      • We store the next pointer in the free lists as a simple permutation of the value in the previous one, making it hard for an attacker to overwrite a dangling object with something that will look valid to us.

                                                      Both of these make it harder to turn a use-after-free into a reproducible exploit, but both can be bypassed with sufficient effort. The fact that we return frees from other threads to the originating allocator makes it hard, in a multithreaded program, to predict where an object will be reused, but you may still be able to get something with a moderately high probability of success. The free list hardening can be bypassed if you can leak a few adjacent dangling pointers on the same free list, which is possible if you can shape the heap (i.e. if the attacker can influence allocation and deallocation patterns).
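
                                                      The pointer-permutation idea, very roughly; an illustration of the general technique only, not snmalloc’s actual scheme:

                                                      #include <stdint.h>

                                                      /* per-allocator secret, chosen
                                                         once at startup */
                                                      static uintptr_t key;

                                                      /* store free-list links permuted
                                                         with the secret... */
                                                      static uintptr_t hide(uintptr_t next) {
                                                          return next ^ key;
                                                      }

                                                      /* ...so a blind overwrite of a freed
                                                         object almost never decodes to a
                                                         pointer the allocator will accept */
                                                      static uintptr_t reveal(uintptr_t stored) {
                                                          return stored ^ key;
                                                      }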

                                                      OpenBSD’s malloc is somewhat more aggressive in optimising for security rather than performance, but it is still running on a substrate that isn’t memory safe. In the simple case, an arbitrary information-disclosure vulnerability (such as a usable Spectre gadget) will leak all of the secrets that their (or our) mitigations depend on.

                                                      That doesn’t mean that they’re worthless: making an attacker do more work is often worth it. If an attacker has only a 5% probability of being able to use an exploit (for example) then it’s hard for a worm to spread.

                                                    1. 1

                                                      Ah, I didn’t see that. It should be merged into that one, then.

                                                    2. 1

                                                      Can we please merge all these Git posts into one?