1. 22

While this article doesn’t go into technical details, it does highlight the risk that ordinary mail users face when using HTML-based email.


  2. 12

    The only safe email is GPG messages received on a hardened endpoint with memory-safe code whose email system and renderers are sandboxed by a tiny kernel. Preferably on a machine dedicated to online stuff, running a LiveCD or ROM-based boot. The choice to use text means the users still have to trust an insecure email client to properly parse, analyze, and reject non-text emails. On top of not breaking under protocol-level attacks.

    1. 7

      Thus succinctly demonstrating why users will continue to choose insecure third-party-hosted email with full multimedia support.

      You’re absolutely correct, but 99.9% of users don’t put any value at all in improved security and put a lot of value on a pleasant consumption experience. The only viable approach is to improve the security of a standard, user-friendly interface.

      As far as protocol-level and parser-level attacks go, you can get most of what you need with memory-safe languages and safe parser combinators. But we’re very far even from that. Most of the email software in the world is probably written in JS and various unsafe C derivatives.
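
      To make the parser-combinator point concrete, here’s a minimal sketch in Python (the combinator names and the header grammar are illustrative, not any real library’s API): small parsers that either return a typed result or fail cleanly, composed into a header parser that rejects malformed input instead of guessing.

```python
# Minimal parser-combinator sketch: each parser takes (text, pos) and
# returns (result, new_pos) on success or None on failure. No partial
# state, no silent recovery -- malformed input is simply rejected.

def literal(s):
    def parse(text, pos):
        if text.startswith(s, pos):
            return s, pos + len(s)
        return None  # fail cleanly
    return parse

def take_while(pred):
    def parse(text, pos):
        end = pos
        while end < len(text) and pred(text[end]):
            end += 1
        return text[pos:end], end
    return parse

def header_field(text, pos=0):
    """Parse one 'Name: value' header line; reject anything malformed."""
    name, pos = take_while(lambda c: c.isalnum() or c == "-")(text, pos)
    if not name:
        return None
    colon = literal(": ")(text, pos)
    if colon is None:
        return None
    _, pos = colon
    value, pos = take_while(lambda c: c not in "\r\n")(text, pos)
    return (name, value), pos

print(header_field("Subject: hello"))   # (('Subject', 'hello'), 14)
print(header_field("no colon here"))    # None
```

      The point is that failure is an explicit value, not an exception path or a half-parsed buffer an attacker can steer.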

      1. 2

        The only viable approach is to improve the security of a standard, user-friendly interface.

        I agree. It’s why the text recommendation will fail even harder for most users. The same dynamics that make the demand side (consumers) not care about security and use overly complicated stuff for perceived benefits will cause the supply side to continue sending those risky emails. At best, we can protect lots of people by pushing a nice client not that different from what they’re used to, or significantly simpler (depends on audience), that has built-in security plus some must-have features/advantages for the initial switch. Something like Claws (lightweight) or Gmail (simple UI) is a good example of where a secure contender could’ve won on non-security features.

        “you can get most of what you need with memory safe languages with safe parser combinators. But we’re very far even from that. Most of the email software in the world is probably written in JS and various unsafe C derivatives.”

        No kidding haha. It’s why I push for contributions to projects that protect legacy code like SAFECode, SoftBound+CETS, or Code-Pointer Integrity (w/ segments). Gotta make it push-button simple for developers, like those LLVM enhancements have been doing. Then, the users have to be cool with the performance of the app. It’s why I push for lighter-weight stuff in general. The app gets many times faster than what they’re used to, the security slows it down a bit, and it still feels faster to them. A plus instead of a negative. This hypothetical, psychological ploy is one I’ve only tried sparingly on standalone software in the past, with positive results. I don’t know if it would be effective on a large scale.

      2. [Comment removed by author]

        1. 3

          You might not be able to verify the base operating system on an Intel CPU with just a checksum. The first checksum might be subverted, or the malware might be hiding in peripherals. We’re talking about receiving a message from an untrusted source, though.

          Snowden leaks showed the NSA couldn’t beat GPG, at least on a wide scale. In my designs, I always used a sealing method with whitelisting so the potentially-malicious payload doesn’t even make it to the client until a trusted component has vetted its source and integrity. The old, high-assurance mail guards did this. Nexor also does this with a proxy in front of things like Outlook.
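
          A minimal sketch of that "vet before delivery" idea in Python (the allowlist, shared key, and function names are illustrative, not any real mail guard’s API): a guard in front of the client checks the sender against a whitelist and the payload against an integrity tag before the client ever parses it.

```python
# Guard sketch: nothing reaches the mail client unless the source is
# whitelisted AND the payload's integrity tag checks out. The per-sender
# HMAC key is assumed to be established out of band (illustrative).
import hmac
import hashlib

ALLOWED_SENDERS = {"alice@example.org"}
SHARED_KEY = b"per-sender key established out of band"

def guard(sender, payload, tag):
    if sender not in ALLOWED_SENDERS:
        return False  # unknown source: never reaches the client
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)  # integrity check

msg = b"quarterly report attached"
good_tag = hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()
print(guard("alice@example.org", msg, good_tag))   # True
print(guard("mallory@evil.test", msg, good_tag))   # False
```

          The client behind the guard then only ever sees traffic that has already passed both checks.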

          1. [Comment removed by author]

            1. 4

              “As long as the code reading the email can be verified from an external source, it’s all good.”

              Until it runs on malicious data that exploits a vulnerability in the code and escalates privileges, as has happened continually with so many mail clients per vulnerability trackers. There’s stopping the attack itself on your code, reducing what attackers can do with a successful attack, detecting an attack, and recovering from an attack. These are the lenses through which we look at attacks in defense. Let’s look at your method anyway.

              Ok. So, it’s laypeople with their buying habits using this. I recommended a software solution that could be done as a bundle which mostly operates invisibly to them. It stops injections cold, plus isolates whatever succeeds. Your solution is a hardware device they have to buy and use properly that uses checksums to verify an app before running, or maybe regularly in parallel. That’s already apples to oranges, but I’ll bite. Especially since I’ve recommended the exact same thing as a complement to my software advice for years now. Let’s look at the failure modes in case you missed them or I overthought their risk:

              1. The layperson will install the thing, the tool gets the hashes (not checksums!), and they begin using the setup. The hashes are for malicious software, though, since this user was part of a subset that was already compromised. It gives false confidence instead. The mitigation here is a clean install on (if possible) a new machine before installing the security system. The user didn’t do that. Alternatively, the company licensing the software for the device keeps a list of every version of every popular piece of software along with its hash. Most don’t do that, though.

              2. The external tool is looking for stuff in RAM that’s code or static data. The attacker has code outside RAM (i.e. in peripheral devices), just using portions of RAM for dynamic data, which the tool won’t hash. Those are the portions that will temporarily hold secrets, buffering for files, and buffering for networks. The external tool fails again. This is an uncommon attack for now, though.

              3. The external tool is defeated permanently or temporarily by ROP or JITing apps. I barely understand ROP, as we mitigated it before it was invented with memory safety, some segmented architectures, and/or separation kernels. Clever data attacks were what we call a known unknown, where new stuff is expected and accounted for as best we can. From what I read, the new attacks try to use existing code that should be there (your tool says clean) with data it wasn’t expecting [that your tool isn’t looking at]. Depending on implementation, an external tool looking at static stuff might miss it entirely since it happened in data fields it doesn’t hash. Likewise, JIT-using platforms have dynamic code with a runtime that will probably not be integrated with that external tool. Either one can be used for a bypass. There are a lot of apps using interpreted or JIT’d languages.

              4. This stuff starts happening a lot, or the attacker knows an integrity-checking tool runs before use of the mail client. So, the attack is programmed to happen after the client starts. The mitigation might be to run the check often, in parallel with the host. The drawback versus my recommendation of prevention is that an attack will get through if the check can’t run often. Attackers often use kits with built-in functionality to vacuum up secrets, escalate, and/or deliver more payload. If the external tool doesn’t re-check fast enough, it might only serve to tell users that someone got their data or forged transactions from them.
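
              To illustrate the "vendor keeps the hashes" mitigation from point 1, here’s a minimal Python sketch (the product names, file contents, and digests are illustrative): the verifier ships its own list of known-good hashes per binary, instead of trusting hashes taken from a possibly already-compromised machine.

```python
# Allowlist-of-hashes sketch: compare a file's SHA-256 against a
# vendor-published list rather than against whatever was on the machine
# at install time. Names and contents below are stand-ins for the demo.
import hashlib
import os
import tempfile

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(name, path, known_good):
    expected = known_good.get(name)
    return expected is not None and sha256_of(path) == expected

# Demo: write a stand-in "binary" and verify it against a vendor-style list.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"pretend mail client binary")
    path = f.name
known_good = {"mailclient": hashlib.sha256(b"pretend mail client binary").hexdigest()}
print(verify("mailclient", path, known_good))   # True
os.unlink(path)
```

              Note that this only covers the static code and data, which is exactly the gap points 2-4 above exploit.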

              So, these are a few problems high-assurance security had looking into such a setup back in the 90’s with things like DiamondTek LAN or Boeing’s Embedded FireWall (EFW) in the OASIS Architecture. Recent work in that area focuses on measuring analog, timing, and other properties of devices to profile them during normal operation. Then, any malware in them is [maybe] detected during abnormalities. That’s stronger than prior tools, which were basically PCI coprocessors or connected to buses with probes. Even then, the same groups are investing in preventative stuff, since monitoring-based stuff has always been a cat-and-mouse game where the cats stay well-fed. Like malware and antivirus.

              So, there’s a quick review of that. Sleepy after a long day, so feel free to catch anything I overlooked or slipped on.

      3. 2

        And with a disabled “Reply” button, so the employee won’t disclose anything by replying to the message.

        “Nigerian spam” is a kind of phishing, but usually without any links, and often in plain-text format. The sender tries to trick the user into replying to them with some info; they don’t even need links and web forms for that.

        1. 1

          I’d like to see more of these:
          “You have replied to this person 7 times before” vs. “You have never talked to this person before!”, with extra confirmation required before replying or clicking on links. Gmail does a bit of this: it will disable images in suspicious emails, though I think due to the heuristics most normal emails from new senders get through unflagged?
          Certificates/signing for domains/banks to prove who they are (I’m not sure if that’s the exact use case for PGP? I’m no expert), and then flagging anything that hasn’t been signed.
          Less/no reliance on images. As the other discussions have outlined, users like images for ease of use and companies like images for click-throughs, so this is unlikely to happen any time soon. And if 100% of emails were text-only, the baddies’ text would undoubtedly become more sophisticated.
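
          The first suggestion above is easy to sketch. Here’s a minimal Python version of the "have I ever replied to this sender?" signal (the history data and message wording are illustrative): count prior outbound replies per address and warn hard on first contact.

```python
# Sender-familiarity heuristic sketch: count how many times the user has
# replied to an address before, and flag first-contact senders. The
# sent-mail history here is a stand-in for a real client's sent folder.
from collections import Counter

def reply_warning(sender, sent_history):
    n = Counter(sent_history)[sender]
    if n == 0:
        return "You have never talked to this person before!"
    return f"You have replied to this person {n} times before"

history = ["bob@example.org"] * 7 + ["carol@example.org"]
print(reply_warning("bob@example.org", history))
print(reply_warning("mallory@evil.test", history))
```

          A client could gate the first-contact case behind an extra confirmation click before any reply or link is allowed.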