1. 4

Used to be, 10 years ago or so, we had segmented networks: public/DMZ/private. The private network was where the database and application servers lived, and network communication there was cleartext, not SSL. Configuration was simple and everyone was happy.

The company I’m at now is pushing to secure all network communication with SSL/TLS, even within the private network; i.e. behind the firewall, inside the datacenter they own. Even between application servers and databases, for example. This seems like overkill to me. Is this really a best practice now?
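Concretely, the kind of change being pushed looks something like this, using Postgres’s libpq as an example (the hostname and CA path here are made-up placeholders, not anything from our setup):

```shell
# Hypothetical example: a libpq/psql connection that refuses cleartext,
# requires TLS, and verifies the server certificate against an internal CA.
# "db.internal" and the sslrootcert path are placeholders.
psql "host=db.internal dbname=app user=app \
      sslmode=verify-full sslrootcert=/etc/ssl/certs/internal-ca.pem"
```

Note that `sslmode=require` only encrypts; `verify-full` additionally checks the certificate chain and the hostname, which is what would actually stop a man-in-the-middle inside the datacenter.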

I found this document[1] that superficially seems to support this:

Protecting data in transit should be an essential part of your data protection strategy. Since data will be moving back and forth from many locations, the general recommendation is that you always use SSL/TLS protocols to exchange data across different locations.

It’s an Azure doc, and I think the context is that it’s a best practice to secure network communication that traverses the public internet (d’uh), not necessarily communication inside a corporate firewall. But without that context clearly stated, “unsophisticated” people read it as “use SSL/TLS everywhere.” I’m really baffled by this state of affairs. What are your thoughts?

[1] https://docs.microsoft.com/en-us/azure/security/azure-security-data-encryption-best-practices

  2. 2

    See e.g. https://cloud.google.com/beyondcorp. It’s a model that is suitable iff you have good operational tools (to keep all your access proxies up to date).

    1. 2

      I think it’s good. Firewalls are very brittle: if an attacker gets inside, they have full access to everything. If your security is fractal, they have to break through layer after layer, down to the little atom of data they want. It’s annoying for debugging (you can’t just sniff the wire), but in general I would expect logs to be used for debugging anyway.

      1. 2

        A large enough private segment is hard to protect completely, and pivoting from one compromised system to more and more systems in the protected segment is a well-documented intrusion technique.

        I think after it became undeniable that the NSA not only obtained data from Google via NSLs but also covertly monitored the unencrypted inside-the-private-datacenter communication, it became more typical to encrypt everything. At some point things might get bad enough that people encrypt even inside-the-box traffic…

        A lot of flows are IO-limited anyway (or sometimes RAM-limited, or external-network-limited), so encryption might not be as much of an extra expense as it would be if everything were CPU-limited.

        It might be reasonable to add a small hidden segment for that one thing that you cannot afford to encrypt in-transit — if there is actually a problem — but encrypting everything is a good default.

        1. 1

          But without that context clearly stated, “unsophisticated” people read it as “use SSL/TLS everywhere.” I’m really baffled by this state of affairs. What are your thoughts?

          Anytime you send plain text data over your internal network, you are making a business decision about that data (“it can be modified or read without any cost to your business”).

          The cost of TLS or IPsec is not zero, but it’s very close to it. The cost of attackers intercepting or manipulating plaintext data is almost certainly much greater. Reading customer data out of internal databases or injecting malware into network traffic seems like an unacceptable risk to me.
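          To back up the “very close to zero” point: in most stacks, strict TLS is literally the default. A minimal Python sketch (nothing project-specific assumed, just the stdlib):

          ```python
          import ssl

          # Python's stdlib client context is "secure by default": certificate
          # verification and hostname checking are already switched on, so
          # doing TLS properly costs almost nothing in code.
          ctx = ssl.create_default_context()

          print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: server cert must verify
          print(ctx.check_hostname)                    # True: name must match the cert
          ```

          You have to go out of your way (setting `verify_mode = ssl.CERT_NONE`) to get the insecure behavior, which is exactly how the defaults should be.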