Sooo, how is it actually a lie? At a glance, the PDF seems to be purely about some internal stuff in the (an?) OpenBSD package manager. I totally don’t understand how the title applies (apart from it actually being the subtitle of the slides, but I don’t understand why that is either).
It might make more sense if you take it from the other side: transmitting programs over the network all the time is highly dangerous; if someone can alter the data in transit, they can modify the OS or slip a backdoor into a program.
So what do you use as transport? HTTPS? The talk questions whether HTTPS is really strong enough to support transmitting packages, and then goes into how to mitigate the weaknesses that HTTPS leaves open.
That’s why almost every package manager has package signatures… that’s also why many package managers are still using HTTP.
HTTP would still leak which packages you installed and their exact versions, which is very interesting information for a potential attacker.
HTTPS would also guard against potential security problems in the signing, i.e. layered security. If the signing process has an issue, HTTPS still provides some protection on its own; how much depends on your threat model.
Totally true indeed. I was just pointing out that, for the moment, these issues don’t seem to be considered a high threat, and so they aren’t being addressed (not that I know of, anyway).
So since HTTP does not provide strong enough security on its own, other mechanisms are used. I like the practice of not relying 100% on the transport for security.
The thing is, HTTPS doesn’t only certify that what you asked for is what you get; it also encrypts the traffic (which is arguably important for package managers).
So at the moment, HTTP + signing is reasonable enough to be used as a « security mechanism ».
That’s something the slides dispute. Packages have predictable lengths, especially when fetched in a predictable order. Unless the client and/or server work to pad them out, the HTTPS-encrypted update traffic is as good as plaintext.
I’m completely blanking on which package manager it was, but there was a recent CVE (probably this past month) where the package manager did something unsafe with a package before verifying the signature. HTTPS would’ve mitigated the problem.
Admittedly, it’s a well-known and theoretically simpler rule to never do anything before verifying the signature, but you’re still exposing more attack surface if you don’t use HTTPS.
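To make the ordering concrete, here is a minimal sketch (Python, with a hypothetical `verify_signature` helper standing in for whatever signify/GPG check the package manager actually uses) of the “verify before you touch anything” rule:

```python
import tarfile
import tempfile
import urllib.request

def verify_signature(pkg_path: str, sig_path: str) -> bool:
    """Hypothetical stand-in for the real check (signify, GPG, ...).
    Must return True only if the signature over pkg_path is valid."""
    raise NotImplementedError

def fetch(url: str) -> str:
    # Download to a temporary file; do not look inside it yet.
    tmp = tempfile.NamedTemporaryFile(delete=False)
    tmp.close()
    urllib.request.urlretrieve(url, tmp.name)
    return tmp.name

def install(pkg_url: str, sig_url: str) -> None:
    pkg_path = fetch(pkg_url)
    sig_path = fetch(sig_url)

    # Verify FIRST: decompressing or parsing an unverified archive exposes
    # the extractor's attack surface to whoever controls the connection.
    if not verify_signature(pkg_path, sig_path):
        raise RuntimeError("bad signature, refusing to open the package")

    # Only now is it safe to unpack.
    with tarfile.open(pkg_path) as tar:
        tar.extractall("/tmp/pkg-staging")
```

The CVE mentioned above is exactly what happens when the unpack/parse step slips ahead of the signature check.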
One of the author’s points is that HTTPS doesn’t help with secrecy. A MITM can tell which packages were downloaded just by building a map of package sizes and then correlating the encrypted download sizes against it. That in turn helps the attacker look for vulnerabilities targeting the known list of packages.
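Roughly, the attack looks like this (a sketch only; the package names, sizes, overhead and tolerance below are made up for illustration — a real attacker would build the size map by mirroring the public repository):

```python
TLS_OVERHEAD = 600   # assumed bytes of TLS/HTTP framing per download
TOLERANCE = 300      # assumed slack when matching observed sizes

# Public knowledge: anyone can mirror the repo and record package sizes.
# These names and sizes are invented for the example.
package_sizes = {
    "openssh-9.7p1.tgz":  1_834_112,
    "nginx-1.24.0p0.tgz": 1_102_337,
    "vim-9.1.0000.tgz":   7_920_554,
}

def guess_packages(observed_transfer_sizes):
    """Match each observed (encrypted) transfer size against the size map."""
    guesses = {}
    for observed in observed_transfer_sizes:
        guesses[observed] = [
            name for name, size in package_sizes.items()
            if abs(observed - (size + TLS_OVERHEAD)) <= TOLERANCE
        ]
    return guesses

# An on-path observer who saw two TLS downloads of these sizes can guess
# which packages (and therefore which versions) the machine just pulled.
print(guess_packages([1_834_650, 7_921_200]))
```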
The author then talks about using HTTP/1.1 pipelining to mitigate the issue. The idea is that if multiple downloads are bundled into a single transmission, the attacker is left with a combinatorial problem that potentially has multiple solutions. Even better, padding could be added so that downloads are always a multiple of N bytes (in exchange for a bit of extra bandwidth).
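The padding part is easy to picture. Here is a sketch (the 64 KiB block size is an arbitrary choice, and “append filler that the client discards” is just one way to deliver the padding, not necessarily what the slides propose):

```python
BLOCK = 64 * 1024  # assumed granularity N; bigger blocks hide more, cost more

def padded_length(n: int, block: int = BLOCK) -> int:
    """Round n up to the next multiple of block, so an observer only learns
    a coarse size bucket instead of the exact package size."""
    return ((n + block - 1) // block) * block

def pad(body: bytes, block: int = BLOCK) -> bytes:
    # Trailing filler the client strips after download; the overhead is
    # at most block - 1 bytes per package.
    return body + b"\0" * (padded_length(len(body), block) - len(body))
```

Combined with pipelining several packages over one connection, the observer ideally sees only a single padded total for the whole batch rather than per-package sizes.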
No mention of HTTP/2 is made on the slides; I wonder if it was discussed.
Clickbaity title, at least for the content of the slides (maybe the author says more in the actual talk).
That’s not accurate. Most modern web server software uses HTTP/2 for HTTPS, which is faster than plain HTTP.
HTTP/2 allows you to interleave and pre-send resources, which can make web pages faster, but package managers don’t benefit from any of that. It would probably be slower.
Keep in mind the context. This is specific to OpenBSD’s ftp(1) tool.