1. 63

  2. 26

    I was admittedly quite skeptical until I saw the proposed alternative (just directly saying “so and so is compromised, now what”). That seems quite reasonable, actually, and helps remove a lot of the “fun parts” (i.e., extra legwork) involved in these assessments.

    1. 38

      I know of a situation where a Red Team member broke an engineer’s car window to steal their laptop.

      Jesus Christ. Are pentesters not legally liable for stuff like this? The article describes this as a grey area, but this seems downright criminal.

      In one of those cases, the pentester did this repeatedly (about a half-dozen times), to the point that the staffer thought she was being stalked and called the police. She later quit; the company’s lucky she didn’t sue.

      For fuck’s sake. Again, this behavior isn’t just inappropriate: it’s illegal (or at the very least borderline so).

      These are just useless and uninteresting results. I don’t have to pay for a pentest to know that my organization is vulnerable to phishing attacks, because all organizations are vulnerable to phishing attacks. And if “employees can be phished” is the finding I’m trying to remediate, well, that’s not a risk that can really be effectively addressed.

      This is wonderfully put.

      (As a bonus, this technique also simulates insider threats, and the remediation you’ll do helps protect against them, too.)

      Didn’t even think about that. This article is fascinating.

      1. 16

        yeah you could just telllll the business, “Hey I saw the laptop in the car, someone could break a window and grab the laptop”

        1. 11

          They are fully liable unless the legal framework was set up beforehand (which in no way happened in the window-breaking case), and oftentimes clients don’t really understand what they’re signing up for, nor do they seem to care until the bad things start to happen. I’m not defending that shitty pentest group, because they should be shamed, but sometimes I’ve had clients not understand what they adamantly declare they need from a pentest.

          For example, I had a client sysadmin who wanted an external and internal pentest, but only gave me the info for the external hosts. After stepping him through the ROE and making him understand clearly that he was leaving the scope for internal infrastructure fully open, and what that would mean, we got the ball rolling. After we cracked the perimeter it was a massacre. We stepped him through our process with updates along the way, and he was enraged when we got domain admin. It’s not the first time I’ve seen the attitude of “they won’t be able to get in, so I’ll just make this ROE draft easy on myself and do it as lazily as possible”. Some people just don’t take it seriously.

        2. 12

          In addition to this, one thing I’d love is for penetration test shops to stop singling out specific people wherever possible. I’ve noticed that if the testers point out that Mary’s account was compromised, which led to a big chain of events, Mary will often get a portion of the blame when really it was only one factor. I always give a big final list of compromised accounts and machines, which makes it less likely for a single person to take the fall for what was an organizational failure.

          1. 4

            I found this blog post about red team practices at Facebook to be relevant.


            They seem to focus more on assuming that someone has already compromised many things. It’s generally a waste of time to actually use social engineering or sniffing against the real company; someone could be digging through the garbage for months before they find something useful. Instead, you assume someone has been phishing for months, or digging through the garbage or the old computers the company throws out, or planted a laptop somewhere a few months ago. Or maybe assume someone quit and is using a device, still connected to the internet somewhere, that was left in a hallway before they were escorted out.

            Having assumed these things, your team’s goal is to have the systems and experience in place to fight those attacks if they were to happen on the inside.

            1. 4

              I mostly agree, with the exception of calling into customer service (which the article specifically mentions). For all intents and purposes that’s an open interface, and if it turns out you can get valuable information out of it then there is remediation work to be done: either training the CSRs better or limiting what they have access to.

              1. 5

                I agree that there’s value in auditing ‘are CSRs following our security rules’ (outside of a pen-test).

                If you want to know whether the rules themselves are sufficient, you can give the pen-testers a copy to analyze and skip the part where they treat your staff like shit to see who cracks.

                1. 3

                  With respect to CSRs, probably the most attention should be paid to their management and incentives. If they get rewarded for keeping whoever is on the phone happy, no matter what, and punished for refusing people, even if they’re asking to break the rules, then all demands to follow the rules no matter what aren’t going to have much effect.

                2. 4

                  And, what’s the remediation action after this attack? Usually, it’s “more phishing training”, as if that’s ever been shown to reduce the likelihood that an organization falls to a phishing attack.

                  These are just useless and uninteresting results. I don’t have to pay for a pentest to know that my organization is vulnerable to phishing attacks, because all organizations are vulnerable to phishing attacks. And if “employees can be phished” is the finding I’m trying to remediate, well, that’s not a risk that can really be effectively addressed.

                  So maybe look at methods for actually fixing that? My company tried to do “training” for a while; now they send everyone “simulated phishing” emails every so often so that people get used to handling them appropriately, which is much more effective.

                  If you’ve decided that social engineering is outside your security boundary and you build your posture around “all employees can be phished”, well, sure, that’s a legitimate approach and maybe even the best one. But it’s not the only way to approach things.

                  1. 2

                    Give up on trying to prevent phishing, invest in phishing-proof authentication methods like U2F.
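
                    The reason U2F (and its successor, WebAuthn) is phishing-proof is origin binding: the browser tells the security key which origin is asking, and the key’s signature covers that origin, so a response captured on a lookalike domain never verifies at the real site. Here’s a toy sketch of that idea; an HMAC stands in for the real public-key signature, and `real.example`/`rea1.example` are made-up domains, not the actual FIDO wire protocol:

```python
import hashlib
import hmac
import secrets

# Toy model of U2F/WebAuthn origin binding. Real tokens sign a server
# challenge together with the origin the browser reports; an HMAC with a
# shared key stands in for that signature to keep the sketch short.

DEVICE_KEY = secrets.token_bytes(32)  # secret sealed inside the hardware token

def device_sign(challenge: bytes, origin: str) -> bytes:
    # The token signs the challenge *bound to the origin the browser saw*.
    return hmac.new(DEVICE_KEY, challenge + origin.encode(), hashlib.sha256).digest()

def server_verify(challenge: bytes, signature: bytes) -> bool:
    # The real site only accepts signatures computed over its own origin.
    expected = hmac.new(DEVICE_KEY, challenge + b"https://real.example",
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

challenge = secrets.token_bytes(16)

# Legitimate login: the browser reports the genuine origin.
assert server_verify(challenge, device_sign(challenge, "https://real.example"))

# Phished login: the user is fooled, but the browser reports the lookalike
# origin to the token, so the captured response is useless at the real site.
assert not server_verify(challenge, device_sign(challenge, "https://rea1.example"))
```

                    Note that the user can’t undermine this even if they want to: unlike a password, there’s nothing for them to type into the wrong box.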

                    1. 0

                      “simulated phishing”

                      Fun story: I used to work for a company that decided to get “serious” about security. By “serious” I mean they hired outside contractors for pen-tests, but when an internal resource pointed out, “Hey, this Intranet app is super vulnerable to SQL injection,” they said, “wait for the results from the contractors.”

                      That’s not the fun part. The fun part is that when internal developers said, “hey, you know the HR app? Yeah, if you muck with the cookies it’ll dump any info about anyone in the company, if you can guess their UID,” the company response was, “don’t say that out loud.”

                      The outside contractors sent phishing emails to many IT employees. The phishing emails didn’t ask for any uniquely identifiable information beyond, “confirm that the account we sent this email to is the account we sent this email to.” Many IT staff failed this check, and thus there was much wailing and gnashing of teeth about the fact that not even our MOST IT-SAVVY employees could defend against phishing attacks.

                    2. -4

                      Why not? It’s a valid and likely hole in your defences. The bad guys don’t say, “Social engineering is too hard, we’ll just stick to the network stuff”. If anything, some of them probably say: your network and physical security is too good/too risky, let’s just talk to good old Jeff behind the front desk.
                      Jeff is susceptible to:
                      Anyone in a FedEx shirt with a package who says they have an urgent delivery.
                      The “IT” company ringing up, saying there is a problem and he must tell them his login and password to fix it.
                      He always has his head in his phone as he walks home from work, even as he walks past that dark alley.
                      Women at bars with Russian accents; he even buys their drinks for them before he passes out in his apartment and they use his swipe card that very same night to steal all your computers.

                      1. 14

                        Why not

                        If you had read the article you would know that it gives a lot of great reasons why this is not a good idea.

                        1. 11

                          So, again, that’s why I was initially skeptical on the matter, but the author concludes with the good point that if you just plan on a breach, then you can do better work.

                          Like, finding out how many lead pipes it takes to get to the center of a Tootsie Pop is not as helpful as knowing what your procedures are once a breach of credentials or a successful attack occurs. It’s focusing on the wrong thing.