1. 30
  1.  

    1. 18

      If you are wondering what to do, the solution is neither to patch and fix your Fortinet device nor to buy additional attack surface from one of its equally bad competitors. It is to stop believing that adding more attack surface will increase security.

      This is probably the thing I harped on most in my years doing pentesting, but it was absolutely the hardest sell to people with organizational control. In my conversations, those vendor relationships were often carried over from whoever previously held purchasing power, and made worse by believing whatever the trade group (marketing with a different colored hat) hype cycle says. Folks really don’t like putting things they have always been able to access behind a VPN, and like turning things off even less. Businesses really have a “buy the new shiny object” mindset and pretend each one is a silver bullet while lacking the fundamentals, which in my opinion is a near-existential problem. Gotta burn that budget to make sure it stays high.

      1. 4

        Gotta burn that budget to make sure it stays high.

        This always amazes me. It happens every single year but somehow I am flabbergasted every time. In the final month of the financial year, suddenly our customers realise that they haven’t spent their entire budget so they desperately ask us if there’s any way they could give us another £100k before the year ends. We always find a way.

      2. 16

        Long post with some technical context some may find interesting and a rant of my own. Here goes …

        Fortinet technical content

        Fortinet routers (mostly under the FortiGate brand) have what may at first appear to be a surprisingly large number of built-in cryptographic keypairs in a default configuration. You’d expect such routers to probably have one for the embedded HTTPS web server, and another for the SSH server, but out-of-the-box you’ll usually find a dozen or more. Most are TLS certificates, but not all; some are SSH certificates too! So what are they all for?

        In short: Deep Packet Inspection (DPI). These devices are designed to perform what’s effectively a MitM attack to snoop on encrypted traffic transiting the device. Here’s a quick breakdown of the keypairs I see on a factory configuration on modern releases of FortiOS (the device OS):

        • TLS “Trusted” CA certificate (RSA 2048-bit)
          Signs dynamically issued certificates for DPI (i.e. certificates chain back to this CA by default).
        • TLS “Untrusted” CA certificate (RSA 2048-bit)
          As above, but used for sites whose original certificate chain can’t be verified by the router. Otherwise you could end up replacing an untrusted certificate chain with a trusted one, as seen by users behind the device.
        • TLS Device certificate (RSA 2048-bit, RSA 4096-bit)
          For the built-in HTTPS web server (for GUI-centric administration). There’s a 4096-bit variant which is used by default on newer releases, but the 2048-bit variant is still present.
        • TLS DPI certificates (DSA 1024-bit, DSA 2048-bit, RSA 1024-bit, RSA 2048-bit, RSA 4096-bit, ECDSA 256-bit, ECDSA 384-bit, ECDSA 521-bit, Ed25519, Ed448)
          These are the private keys that are used for the dynamically issued website certificates, mapping to the original algorithm and key size. Primarily for performance I expect, because generating asymmetric keys for every site which gets DPI treatment is going to be costly.
        • SSH “Trusted” CA certificate (RSA 2048-bit)
          Refer to the earlier certificate for TLS. This is the same, but for SSH.
        • SSH “Untrusted” CA certificate (RSA 2048-bit)
          Refer to the earlier certificate for TLS. This is the same, but for SSH.
        • SSH keypairs (DSA 1024-bit, RSA 2048-bit, ECDSA 256-bit, ECDSA 384-bit, ECDSA 521-bit, Ed25519)
          Refer to the earlier certificates for TLS. This is the same, but for SSH.

        Note these keypairs are specific to the device. Last I checked they’re generated on first boot-up of a device from the factory, though they can also be regenerated if need be.
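
        For anyone who hasn’t watched the resigning step up close, here’s a rough sketch of how the keypairs above fit together, written against Python’s cryptography package. The structure and names are mine, not anything from FortiOS; the point is just to show the “trusted”/“untrusted” CA split and why pre-generating one leaf key per algorithm is worthwhile:

            from datetime import datetime, timedelta, timezone

            from cryptography import x509
            from cryptography.hazmat.primitives import hashes
            from cryptography.hazmat.primitives.asymmetric import ec, rsa
            from cryptography.x509.oid import NameOID

            def make_ca(common_name):
                # One self-signed interception CA. On a real deployment the "trusted"
                # one is what gets pushed to clients (or replaced by an intermediate
                # from your own PKI); the "untrusted" one deliberately isn't.
                key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
                name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, common_name)])
                now = datetime.now(timezone.utc)
                cert = (
                    x509.CertificateBuilder()
                    .subject_name(name).issuer_name(name)
                    .public_key(key.public_key())
                    .serial_number(x509.random_serial_number())
                    .not_valid_before(now).not_valid_after(now + timedelta(days=3650))
                    .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
                    .sign(key, hashes.SHA256())
                )
                return key, cert

            TRUSTED_CA = make_ca("DPI Trusted CA (illustrative)")
            UNTRUSTED_CA = make_ca("DPI Untrusted CA (illustrative)")

            # Pre-generated leaf keys, one per algorithm: the performance shortcut
            # the long list of built-in DPI keypairs suggests (trimmed to two here).
            LEAF_KEYS = {
                "rsa": rsa.generate_private_key(public_exponent=65537, key_size=2048),
                "ec": ec.generate_private_key(ec.SECP256R1()),
            }

            def resign(original: x509.Certificate, upstream_chain_ok: bool) -> x509.Certificate:
                # If the upstream chain didn't verify, issue from the untrusted CA so
                # clients behind the device still see a certificate warning.
                ca_key, ca_cert = TRUSTED_CA if upstream_chain_ok else UNTRUSTED_CA
                # Reuse the pre-generated key matching the original's algorithm.
                kind = "rsa" if isinstance(original.public_key(), rsa.RSAPublicKey) else "ec"
                now = datetime.now(timezone.utc)
                builder = (
                    x509.CertificateBuilder()
                    .subject_name(original.subject)       # mimic the real site's subject
                    .issuer_name(ca_cert.subject)
                    .public_key(LEAF_KEYS[kind].public_key())
                    .serial_number(x509.random_serial_number())
                    .not_valid_before(now).not_valid_after(now + timedelta(days=7))
                )
                try:
                    # Copy the SANs so hostname checks pass against the substitute cert.
                    sans = original.extensions.get_extension_for_class(x509.SubjectAlternativeName)
                    builder = builder.add_extension(sans.value, critical=False)
                except x509.ExtensionNotFound:
                    pass
                return builder.sign(ca_key, hashes.SHA256())

        A real implementation caches the certificates it issues, copies far more of the original, and handles every algorithm in the list above rather than two, but the shape is the same. It should also make clear why leaking the CA private key is so serious: whoever holds it can mint certificates that every client trusting that CA will accept.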

        Devices deployed in what I’ll blithely refer to as “sophisticated networks” will usually have many more certificates, as they’ll add certificates to replace the functionality provided by many/most of these built-in ones. The CA certificates in particular are likely to be replaced with an intermediate CA issued from an organisation’s internal PKI.

        Obviously, that means you have to be operating a private PKI in the first place. Running PKI is hard. Running it well is really hard. Many small and even medium-sized firms may just use the built-in certificates, and in many cases they’re probably right to do so if they aren’t confident in their ability to (securely) maintain a PKI.

        Blog post comments

        Firstly, none of this is intended to be critical of the author; they have different knowledge and experience to me. The trick is in reconciling the valid concerns they raise with the security realities people like me have to deal with.

        What has not been widely recognized is that this leak also contains TLS and SSH private keys.

        Sure, but if I heard that the configuration of a Fortinet device had been leaked, I’d expect this to be the case. This is also true for a leak of a Cisco device configuration. I can’t comment on other manufacturers (e.g. Juniper, Palo Alto) but I’d be wholly unsurprised and even expect this to be the case.

        A quick search for the encryption method turned up a script containing code to decrypt these passwords. The encryption key is static and publicly known.

        This sounds bad, and is bad in many respects, but it’s also what I’d expect. The point of exporting these keys in the configuration is in large part to facilitate restoration if the device explodes. One of the big draws of this “enterprise” network gear is the entire configuration can be exported in a single plain-text file (not some opaque binary blob like I see in most consumer gear, assuming there’s an export at all, let alone that it will work on a different version than it was exported from, but I digress …).

        Having the entire device configuration exportable in plain-text format is ridiculously useful. You can store your core network infrastructure configurations in version control! You can diff them! You can audit them! Crucially, if the router needs to be RMA’d, or something goes badly wrong during a software update or configuration change, you can restore the exact desired configuration on it, and yes, that includes the private keys.

        The alternative is presumably to encrypt them with some “super secret” key only Fortinet has possession of, which is not exposed to the administrators of these devices. Either that, or you simply don’t support restoring the keys, and thus recovering the full device configuration is impossible. I’m not a fan of either approach, for reasons I expect are obvious.

        In case I lost you here with technical details, the important takeaway is that in almost all cases, it is possible to decrypt the private key. (I may share a tool to extract the keys at a later point in time.)

        Again, I’m not surprised. This isn’t really any different from the trivially reversible passwords that Cisco configurations have exported for decades. The intent is to protect against shoulder-surfing attacks, not a motivated attacker in possession of the actual configuration. The configurations of your network infrastructure devices should be treated as private and should not be shared. They can and will contain secrets.
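
        For the curious, the Cisco scheme I’m alluding to is the old “type 7” encoding: an XOR against a fixed key table that has been public knowledge for decades, so anyone holding the configuration can reverse it in a few lines. A quick sketch using the widely published table:

            XLAT = b"dsfd;kfoA,.iyewrkldJKDHSUBsgvca69834ncxv9873254k;fg87"

            def decode_type7(encoded: str) -> str:
                # Format: two decimal digits giving an offset into the key table,
                # then hex bytes, each XORed with successive key-table bytes.
                offset = int(encoded[:2])
                data = bytes.fromhex(encoded[2:])
                return "".join(chr(b ^ XLAT[(offset + i) % len(XLAT)]) for i, b in enumerate(data))

            print(decode_type7("0822455D0A16"))  # prints "cisco"

        For secrets that genuinely need to be recoverable, Cisco’s answer is reversible type 6 AES encryption under a master key the administrator sets, which as far as I can tell is essentially the same approach as the Fortinet “fix” discussed next.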

        The use of a static encryption key is a known vulnerability, tracked as CVE-2019-6693. According to Fortinet’s advisory from 2020, this was “fixed” by introducing a setting that allows to configure a custom password.

        The negligent thing here is that setting a custom encryption key wasn’t supported until 2020. That’s appalling and Fortinet ought to have been raked over the coals for it at the time (were they?). But the approach is in my view correct when preserving the ability to export a complete device configuration is required. It’s also what Cisco does (albeit, they’ve supported it for a very long time, apparently unlike Fortinet). At least on Cisco devices, setting a custom configuration encryption key has been best-practice for as long as I can remember.
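
        To make that trade-off concrete, here’s a minimal sketch of what a user-chosen configuration-encryption key buys you. It assumes nothing about Fortinet’s or Cisco’s actual on-disk formats; the KDF, cipher and layout here are simply ones I picked. Exports stay restorable, but only someone holding the admin-chosen passphrase, rather than a vendor-wide constant, can recover the secrets.

            import os
            from cryptography.hazmat.primitives import hashes
            from cryptography.hazmat.primitives.ciphers.aead import AESGCM
            from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

            def _derive_key(passphrase: str, salt: bytes) -> bytes:
                # The key comes from something the administrator knows, not from firmware.
                kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
                return kdf.derive(passphrase.encode())

            def export_secret(secret: bytes, passphrase: str) -> bytes:
                # Encrypt a config secret for inclusion in a plain-text export.
                salt, nonce = os.urandom(16), os.urandom(12)
                return salt + nonce + AESGCM(_derive_key(passphrase, salt)).encrypt(nonce, secret, None)

            def import_secret(blob: bytes, passphrase: str) -> bytes:
                # Restore the secret on a replacement device, given the same passphrase.
                salt, nonce, ct = blob[:16], blob[16:28], blob[28:]
                return AESGCM(_derive_key(passphrase, salt)).decrypt(nonce, ct, None)

            blob = export_secret(b"-----BEGIN PRIVATE KEY----- ...", "chosen-by-the-admin")
            assert import_secret(blob, "chosen-by-the-admin") == b"-----BEGIN PRIVATE KEY----- ..."

        The operational catch, and presumably why vendors ship a baked-in default, is that a backup encrypted this way is only as recoverable as the passphrase: lose it and the restore path you were trying to preserve is gone.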

        “Fixing” default passwords by providing and documenting an option to change the password is something I have strong opinions about. It does not work.

        In principle I agree, but administration of network infrastructure is a little bit different from the standard scenario where we’re trying to avoid people without a security background having to make security decisions. In that case, I fully agree that we should be minimising the need for people to make conscious security decisions.

        However, these are explicitly security devices, and if incorrectly set up and maintained they can and will make your network less secure. These devices are designed to, among other things, provide deep visibility into network traffic that traverses them by carefully (for some value of “carefully”) removing encryption and then re-encrypting the data using different configurations. Incorrectly configuring and maintaining them will lead to poor outcomes.

        This is true of any security critical device or service. If you poorly operate or maintain any security system, particularly those that have direct interaction with end-users, you’re going to have a bad time. The alternative is taking away control from network administrators to have visibility over the configuration of their deployed devices. That’s not a win in my book.

        If you are wondering what to do, the solution is neither to patch and fix your Fortinet device nor to buy additional attack surface from one of its equally bad competitors. It is to stop believing that adding more attack surface will increase security.

        This is the point that got to me. My political and philosophical views lean very heavily towards opposing surveillance and censorship, and always have. And yes, I’ve backed those views with action repeatedly. But simultaneously, I’m responsible for the security of several corporate and cloud networks.

        In a world where 95%+ of all ingress/egress traffic to those networks is TLS encrypted, how am I supposed to have any hope of protecting those networks with such limited visibility into the data entering or exiting them? You have to surveil a network to have any hope of maintaining its security if you care about the data that enters and leaves it.

        It takes one (1) compromised device on a network to start exfiltrating sensitive data with potentially catastrophic results. The hacker(s) doing that are not doing so over HTTP. They’re doing so over TLS v1.3 using strong cryptography. And the remote endpoint is very likely a service operated by a “trusted” entity like AWS, GCP, Microsoft, etc …

        Ultimately I’m left with three options:

        1. Monitor traffic at the network level, which means MitM for the encrypted traffic unless it’s allow-listed because I’m absolutely positive it’s going to a trusted endpoint. There aren’t many of those cases (see the sketch after this list).
        2. Monitor traffic at the device level, which means you get stuff like Crowdstrike installed on your laptop. Effectively all I/O gets monitored and alerting comes from the endpoint rather than the network itself.
        3. Both.
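
        Since option 1 is the part people object to most, here’s a toy sketch of that decision point. The destinations are placeholders and this isn’t any vendor’s policy engine: by default a flow gets decrypted and re-signed, and only a short list of endpoints I’m genuinely confident about bypasses inspection.

            from fnmatch import fnmatch

            # Placeholder destinations; in practice this list stays painfully short.
            BYPASS_SNI = [
                "updates.internal.example",   # hypothetical internal patch mirror
                "*.payroll.example",          # hypothetical: contractually can't inspect
            ]

            def should_intercept(sni: str) -> bool:
                # Default-inspect: only bypass DPI when the SNI matches the allow-list.
                return not any(fnmatch(sni, pattern) for pattern in BYPASS_SNI)

            assert should_intercept("objectstorage.cloud.example")  # inspected
            assert not should_intercept("eu.payroll.example")       # bypassed

        Even that is generous, because matching on SNI alone is easy to get wrong, which is a big part of why option 3 tends to be the honest answer.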

        The absence of the above means I have near-zero visibility into the data entering and leaving the network, and all bets are off for having any confidence in the security of the network. It’s as simple as that.

        So what am I supposed to do? I’m genuinely asking.