1. 26

  2. 42

    This is what running a VM in Microsoft Azure means: Microsoft controls the underlying datacenter hardware, the host OS, the hypervisors, and so on.

    There is no defense against sustained physical access.

    1. 10

      I think the distinction raymii is drawing is between online and offline access: yes, they can unrack the servers and look at your disks with a magnifying glass, but online access, where they can log in live to your running instance, is a different threat model. If you rack hardware somewhere, sure, they have your hardware, but they most likely don’t have (an equivalent of) the root password. This story surprised me.

      1. 18

        But we’re talking about virtual machines here, right? So you don’t need to unrack anything; your magnifying glass is just /proc/$(pgrep qemu)/mem (or whatever the Hyper-V equivalent is), to peruse at your leisure, online, from the host.

        (And even in the case of rented physical servers, there are still probably BMCs and such in scope that could achieve analogous things.)
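
        To make that concrete, here is a minimal sketch (mine, not from the thread) of how reading a process’s live memory through /proc works. It demos on the current process so it runs unprivileged; on a real KVM host, the pid would be a QEMU process id (e.g. from pgrep qemu) and you would need ptrace rights:

```python
# Sketch: to the host, guest RAM is just another process's address space.
# We read our own memory here; on a KVM host you would substitute the QEMU
# pid and run with ptrace rights (e.g. as root).
pid = "self"  # stand-in for a QEMU pid on the host

with open(f"/proc/{pid}/maps") as maps:
    first = maps.readline()                  # first mapped region
start = int(first.split("-")[0], 16)         # region start address

with open(f"/proc/{pid}/mem", "rb") as mem:
    mem.seek(start)
    data = mem.read(16)                      # peek at 16 live bytes
print(len(data), "bytes read from", hex(start))
```

        The same mechanism, pointed at a QEMU process, walks the guest’s entire RAM.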

        1. 2

          But that is still more work than just executing commands via an agent that’s already running. You still have to do something to get root access to a specific machine, instead of being able to script against some agent and access all machines.

          Leaving your door unlocked is one thing; setting it wide open with a sign “Enter here” is another.

          1. 2

            On the plus side, though it is “easy”, it also appears to be logged and observable within the VM, which is the part most obviously unlike an actual backdoor.

        2. 13

          There is absolutely nothing that can be done from within a VM to prevent the host flipping a bit and backdooring it arbitrarily, or snapshotting it without shutting it down and doing the same. I’d be very surprised if all the big names didn’t have this functionality available internally; at least Google supports live migration, which is the same tech.

          There are open toolkits for doing arbitrarily nasty poking and introspection on a running VM, e.g. the Volatility framework.

          Hard to point fingers at Microsoft here.
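
          As a toy illustration of what such introspection looks like (a fabricated dump and pattern, not Volatility itself), scanning a raw memory image for key material is only a few lines:

```python
# Crude version of what memory-forensics tools automate: scan a RAM image
# for a byte pattern. The dump file and planted "key" are fabricated.
import mmap, os, tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"\x00" * 4096 + b"ssh-rsa AAAAB3NzaC1yc2E" + b"\x00" * 4096)
os.close(fd)

with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        offset = mm.find(b"ssh-rsa")         # locate the planted key
print("key material at offset", offset)      # → key material at offset 4096
os.unlink(path)
```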

          1. 3

            Moreover, live migration of VMs has been available in widely deployed VMware ESX software since the early 2000s (vMotion), and I suppose even longer than that on Big Iron.

          2. 2

            They can access the memory, which is equivalent to a root password. IMHO, CPU-supported memory encryption like Intel SGX is snake oil at best if you are targeted by the physical host of your VM.

            Hosting in the cloud is a matter of trust and threat analysis.

          3. 5

            I’m really surprised; it seems everybody thinks this is common knowledge and treats it as normal. I don’t like my hosting provider having this level of access to my data and machines. We are smart enough to find a solution to this: hosting infrastructure without giving up on all security…

            1. 26

              With managed virtualized infrastructure, “this level of access” is completely unavoidable. They run the virtualized hardware your “server” is running on; they have complete memory and CPU state access, and they can change anything they want.

              I guess a guest-side agent makes backdooring things marginally simpler to write, but their actual capabilities are totally unchanged.

              This is something that ought to be common knowledge, but unfortunately doesn’t seem to be.

              1. 1

                The risk of your provider taking a snapshot of your disk and RAM is always there with virtualization. But you could encrypt the disk, which would make it harder for them (they have to scan RAM for the key, then decrypt). But just an agent with root privileges… what bothers me the most, I guess, is that it is not made clear. A note in /etc/issue or the motd with “we have full root access in your VM, read http://kb.ms.com/kb77777 for more info” would make it clear right from the get-go.

                1. 10

                  (they have to scan ram for the key, then decrypt)

                  Not even that: just put a backdoor in the BIOS, boot loader, initramfs, or whatever code is used to unlock the encrypted disk, and intercept the key as it is entered.
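
                  For completeness, a guest-side tripwire that hashes boot-chain files can at least make such offline tampering detectable at the next check (the temp file here is a stand-in for something like an initramfs; and as this thread notes, a host that controls the hypervisor can subvert the check itself, so this raises effort rather than giving a guarantee):

```python
# Hedged sketch: detect modification of a boot artifact by comparing hashes.
# A temp file stands in for something like /boot/initrd.img.
import hashlib, os, pathlib, tempfile

def digest(p: pathlib.Path) -> str:
    return hashlib.sha256(p.read_bytes()).hexdigest()

fd, name = tempfile.mkstemp()
os.close(fd)
f = pathlib.Path(name)

f.write_bytes(b"fake initramfs contents")
baseline = digest(f)                        # recorded at install time

f.write_bytes(b"fake initramfs contents, now backdoored")
tampered = digest(f) != baseline            # checked at next boot
print("tampered:", tampered)                # → tampered: True
f.unlink()
```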

                  1. 3

                    Do you know of any isolated or trusted-VM-like solution, where provider access is mitigated?

                    1. 12

                      No. Even the various “gov clouds” are mainly about isolation from other customers and data center location.

                      The cloud providers are executing the CPU instructions contained in the VM image provided by you (or picked from the store). There isn’t any escaping that access level.

                      The only option is to actually run your own physical hardware that you trust in an environment you consider good enough.

                      1. 4

                        In my comment about host and TLA resistance, I had a requirement for setups resistant to domestic TLAs that might order secrets to be turned over or use advanced attacks (which are getting cheaper and more popular). It can be repurposed for an untrusted-host setup.

                        “If it has to be U.S. and it’s serious, use foreign operated anti-tamper setup. The idea is all sensitive computations are run on a computer stored in a tamper detecting container that can detect radiation, temperature changes, power surges, excessive microwaves, etc. Tamper detection = data wipe or thermite. The container will be an EMSEC safe and the sensors/PC’s will always be located in a different spot in it. The system is foreign built and operated with the user having no control of its operation except what software runs in deprivileged VM’s in it. Status is monitored remotely. It helps to modify code so that most sensitive stuff like keys are stored in certain spot in memory that will be erased almost instantly.”

                        The clouds aren’t built anything like this. They have total control like those in physical possession of hardware and software almost always have total control. They can do what they want. You won’t be able to see them do it most of the time without some clever detection mechanisms for security-relevant parts of the stack. That’s before we get to hardware risks.

                        Bottom line: external providers of computing services should always be considered trusted parties with full access to your data and services. By default. Every time. It’s why I encourage self-hosting of secrets. I also encourage pen, paper, and people for the most confidential stuff. Computers aren’t as trustworthy.

                        1. 3

                          What is your threat model?

                          There is something based on SELinux for Xen, https://wiki.xen.org/wiki/Xen_Security_Modules_:_XSM-FLASK , which can by design prevent the privileged “dom0” from reading the memory of unprivileged guest domains. But that assumes you trust your provider to actually implement this when they say they do.

                      2. 7

                        A note in /etc/issue or motd with “we have full root access in your VM, read http://kb.ms.com/kb77777 for more info” would make it clear right from the get-go.

                        I think this is a combination of “common knowledge, so not worth mentioning specially” for users who already know this, and “let sleeping dogs lie” for people who don’t. I mean, why rub people’s noses in a fact that the competitors are equally mum about? It seems like a bad PR move; you’d get clueless people all alarmed and leaving your platform for reasons that are totally bogus, since any competitor has the same kind of access.

                2. 19

                  That is a problem with any VM-style hosting, isn’t it? You can never check whether they have modified the virtualization layer underneath to get access to all your data.

                  1. 10

                    There’s always that joke that the cloud is just “someone else’s computer”, but it’s true, and no one should be surprised. You need to trust your cloud vendor.

                    If your application has isolation requirements from the cloud vendor, you run with your own hardware. If you have isolation requirements from other tenants, you run in a cloud environment that will provide them.

                  2. 12

                    Seems more like a front door to me.

                    Still, there is nothing preventing Microsoft from implementing a backdoor in a way you can’t detect.

                    1. 3

                      Hidden somewhere in a documentation page. The bloke who clicks together an Ubuntu machine and follows the DO guide to install WordPress probably won’t know.

                      1. 1

                        They also won’t know about all the other ways Microsoft is probably tracking their usage, recording their root password, making its own snapshots, bundling up that data to send to the NSA, etc.

                    2. 8

                      I’m not seeing how this is a backdoor, unless it isn’t possible to control access to this feature for users with different portal access roles.

                      1. 7

                        This is why we still use dedicated hardware kept in locked cages inside secure datacenters.

                        1. 5

                          This is why we still use dedicated hardware kept in locked cages inside secure datacenters.

                          What is your threat model? What actors are you looking to protect yourself from?

                          Don’t get me wrong: there’s absolutely nothing wrong with not using a cloud vendor, for any number of reasons. I’m just curious what led you to this decision.

                          1. 4

                            In the case I wrote about, it’s primarily due to the hardware being pushed to its limits (about 5,000-6,000 cores in total); however, the data involved is extremely sensitive in nature and so must be protected from all potential threats, both electronic and physical.

                            1. 2

                              My best guess would be physical access. With access controlled in your own DC and the disks encrypted, plus notifications when doors or racks open (or cameras), you at least know when there is a breach.

                              With VMs, there is no way to know.

                              1. 1

                                Well, this automatically stops all the inadvertent or intentional leaks that come from just sharing the physical machine. There’s both endless attack surface and a ton of focus on it right now by folks developing attacks.

                                It might also reduce complexity, and hence downtime, if their setup is more boring than all the wild things the cloud providers must be doing to squeeze all kinds of companies, services, and features into shared boxes and spaces meeting their many requirements.

                                Dedicated boxes also have better, more predictable performance than shared boxes, which ties into the downtime point a bit. I’ll call it maximum results with fewer surprises caused by others.

                                Just a few potential benefits of bare metal to consider.

                            2. 4

                              Same with EC2… what’s your point, fam?

                              1. 7

                                His point is the DO referral link front and center in the blog post, despite the author admitting that DO also installs a similar agent by default, and that both are easily removed by uninstalling the respective package.
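
                                If you want to check for these agents on your own VM, here is a quick sketch (the binary names come from this thread; the systemd unit paths are defaults I’m assuming, so adjust per distro):

```python
# Look for the provider agents discussed here: waagent (Azure) and
# do-agent (DigitalOcean). Unit paths are assumed defaults, not gospel.
import shutil
from pathlib import Path

agents = {
    "waagent":  "/lib/systemd/system/walinuxagent.service",
    "do-agent": "/etc/systemd/system/do-agent.service",
}
report = {}
for binary, unit in agents.items():
    report[binary] = bool(shutil.which(binary)) or Path(unit).exists()
    print(f"{binary}: {'present' if report[binary] else 'not found'}")
```

                                Removing one is then a matter of uninstalling the package (e.g. apt remove walinuxagent or do-agent), as noted above.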

                                1. 4

                                  DO makes it an option when you create the VM (a checkbox), and that checkbox has a link with more information (or at least that was the case the last time I created a VM). In the case of Azure, it’s not made clear unless you look and dig into the documentation.

                                  1. 4

                                    My point is that in both your submission title and in your blog post title you could replace Microsoft with Digital Ocean and they would both still be true:

                                    “Linux on DigitalOcean? DO has root-access by default (do-agent)”

                                    “Linux on DigitalOcean? Disable this built-in root-access backdoor (do-agent)”

                                    I gave you the benefit of the doubt that these titles were not clickbait or meant to elicit FUD, until I saw a referral link in the second paragraph, before even getting to any real content. The only information preceding your referral link is a warning paragraph in which, again, one could substitute the company you are referring to and it would still be 100% true:

                                    “Are you running Linux on DigitalOcean? Then by default anyone with access to your DO account can run commands as root in your VM, reset SSH keys, user passwords and SSH configuration. This article explains what the backdoor is, what it is meant to do, how it can be disabled and removed and what the implications are.”

                                    I genuinely apologize if I’m wrong and your intentions were well-meaning, but the entire thing reads like thinly-veiled referral link blogspam. Please consider moving the referral link to the end of the post.

                                    Edit: I am most likely jumping to conclusions based on a strong personal aversion to referral links. Thank you for taking the time to write this up and contribute content to lobste.rs. In the future I’ll be better about keeping my feedback constructive and free of snark.

                                    1. 2

                                      This post was about Azure because I recently discovered this feature in Azure, and in a good way: we had a one-off VM with no documentation or access, but it was doing a production thing. Using this “feature” of Azure, I was able to access the VM quickly. No reboot, console, or single-user mode required; it saved me a lot of time.

                                      But as I said, I don’t think anyone outside a group of tech people is aware of this feature. Yes, cloud providers have access to your VM, but this is the easiest way in (compared to shutting down and mounting a disk, taking a snapshot, or the other suggestions made in this topic).

                                      DO has the checkbox; MS does not, it’s just there. That’s a huge difference. I get why: not all people that have a server have the skills to manage it properly. Give them a web portal instead of a console and they’re happy.

                                      As for the referral links: they’re on all my articles. I’m still on the old scheme at DO, so instead of credits I get 25 dollars when a referral reaches that amount in billing. It differs hugely; the last few months it has been either nothing, or 50, or 25. Not a goldmine. The Google ads make more “profit”, or at least a consistent number, also not above 100 if you’re wondering. The cash goes directly to the hosting and domains, plus the things I write about on the site.

                                      I do make it very clear that it is a ref link, both in the link and in the picture. It’s at the top of the article instead of at the end because I don’t want to beg or disturb too much. My content is mostly guides, so people read until the end of the page. If they then leave, be my guest; I hope the content has helped. The ads and link are at the top, so when you’re “in the content”, I don’t bother you. If you ever come back, or take the time to scroll up and think “oh, this has helped”, then maybe you’ll see the link. My bounce rate is over 90 percent.

                                      I hadn’t considered that the placement makes it look “blogspammy”. Thank you for bringing it to my attention; I think I’m going to experiment with positions. My reasoning (explained above) did not take that into account, and I don’t want to look like a spamblog; I hate those myself as well.

                                      1. 1

                                        Take my knee-jerk reaction with a grain of salt. In light of a lot of negativity that’s been happening in this community I took a little time to introspect and comments like mine above just contribute to the problem. Your post had plenty of good detail and was informative to a lot of people. It wasn’t fair for me to accuse you of blogspam based on a few coincidental facts and a lot of speculation. I hope I haven’t deterred you from posting more content here in the future!

                                        1. 3

                                          (unrelated, but I love your NES articles. Was poking around and saw you’re the guy behind that)

                                          1. 4

                                            Thanks! I happened across https://nesdoug.com/ a few years ago and before I knew it I was deep in the rabbit hole.

                                            I’ve been (very slowly and sporadically) working on a FOSS/FOSH Wi-Fi adapter that plugs into (and is powered by) the NES controller port (including firmware, protocol, 6502 assembly library with C headers, sample ROM, sample game server, and Mesen emulator plugin for those not lucky enough to have a working NES). It was a fun challenge (ab)using the CLK/OUT pins to send data from the NES in a way that doesn’t interfere with the other controller (there are a lot of weird quirks with shared pins, inverted logic, and even interference from the APU on the NTSC model). I have a very sloppy prototype that I’m hoping to get in a state worth sharing in the not too distant future, and plans (pipe dreams) to make a version that uses the expansion port, which would allow all kinds of performance optimization with direct access to the CPU data bus and being able to directly communicate with the cartridge via the expansion pins.

                                            The extreme conclusion of this idea is a single cartridge that people can buy that would allow them to easily download and play homebrew NES games right on their original hardware without modifications. The cartridge could have a modern ARM CPU and a framework that abstracts the inner workings of the NES so people can easily write games in high-level languages like Lua, Python, or JS in addition to traditional ROMs.

                                            I love the idea of resurrecting old technology and making it fun and accessible by mixing it with new technology. Thanks again for being understanding!

                                            1. 1

                                              This is so cooool! X.X

                                      2. 1

                                      None of these people actually need an agent in most cases, because encrypting your root disk with a key your provider doesn’t know is unfortunately pretty rare and difficult to do right now. They can just read it, yo.