1. 22
  1.  

  2. 5

    OK I LOVE LOVE LOVE this idea in concept, but for me the big question is security. You’re inviting people to expose their machines to the internet from behind NAT and other things that generally protect end users.

    What steps have the authors taken to ensure that people aren’t unintentionally hanging a big ol’ “HACK ME, I’M CLUELESS!” sign out for everyone to see?

    1. 11

      I’m the author, hello!

      I think it’s important to differentiate between incidental risks, which are “just part of the deal”, and specific risks associated with “whatever the self-hoster chooses to do”.

      It sounds like you are specifically concerned with “whatever the self-hoster chooses to do”. I haven’t spent much time attempting to address this yet. Even just minimizing the incidental risks, the stuff that I have control over as a Greenhouse developer, has been challenging.

      I have thought about it a lot, though. It’s not in the application yet, but I’ve thought about implementing sophisticated “Are you sure you want to do this?” dialogs. I could come up with several heuristic factors (well-known folder locations, file permissions, total number of files, a histogram of file types, etc.) to warn users when they are trying to serve something publicly that maybe they shouldn’t.

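      To make that concrete, here is a rough sketch in Go of what such a heuristic check might look like. It is purely illustrative (not actual Greenhouse code), and the folder paths, file extensions, and thresholds are made-up examples:

      ```go
      // Purely illustrative: collect reasons why a folder might be a bad thing
      // to serve publicly, to feed an "Are you sure?" dialog.
      package main

      import (
          "fmt"
          "io/fs"
          "os"
          "path/filepath"
          "strings"
      )

      func riskReasons(dir string) []string {
          var reasons []string

          // Well-known personal locations probably shouldn't be public.
          if home, err := os.UserHomeDir(); err == nil && filepath.Clean(dir) == filepath.Clean(home) {
              reasons = append(reasons, "this is your home folder")
          }

          // Walk the tree: count files and flag secret-looking extensions.
          files := 0
          filepath.WalkDir(dir, func(path string, d fs.DirEntry, err error) error {
              if err != nil || d.IsDir() {
                  return nil
              }
              files++
              switch strings.ToLower(filepath.Ext(path)) {
              case ".key", ".pem", ".env", ".sqlite", ".kdbx":
                  reasons = append(reasons, "contains a secret-looking file: "+d.Name())
              }
              return nil
          })
          if files > 10000 {
              reasons = append(reasons, fmt.Sprintf("very large folder (%d files)", files))
          }
          return reasons
      }

      func main() {
          if reasons := riskReasons(os.Args[1]); len(reasons) > 0 {
              fmt.Println("Are you sure you want to serve this folder publicly?")
              for _, r := range reasons {
                  fmt.Println(" -", r)
              }
          }
      }
      ```
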
      The same goes for ports. If you serve port 22 publicly on Linux, your SSH server configuration had better be up to snuff! I could identify certain ports or names of programs that open ports which should probably never be served publicly, or should at least incur a stern warning to the self-hoster.

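      Similarly illustrative (again, not a description of a real Greenhouse feature), a port check could start as a simple lookup table of ports that should trigger that stern warning; the specific ports and wording are just examples:

      ```go
      // Purely illustrative: ports a self-hoster should probably never expose
      // publicly, or only expose after a stern warning.
      package main

      import "fmt"

      var riskyPorts = map[int]string{
          22:   "SSH: make sure password logins are disabled and keys are required",
          445:  "SMB: should essentially never be exposed to the internet",
          3306: "MySQL/MariaDB: databases shouldn't be publicly reachable",
          3389: "RDP: a frequent brute-force target",
          5432: "PostgreSQL: databases shouldn't be publicly reachable",
      }

      // warnIfRisky returns a warning and ok=false when the UI should ask the
      // self-hoster to confirm before publishing the port.
      func warnIfRisky(port int) (warning string, ok bool) {
          if msg, found := riskyPorts[port]; found {
              return fmt.Sprintf("Port %d looks risky to expose (%s).", port, msg), false
          }
          return "", true
      }

      func main() {
          if msg, ok := warnIfRisky(22); !ok {
              fmt.Println(msg)
          }
      }
      ```
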
      Ultimately, I think the responsibility for security has to fall at least in part on the self-hoster. That’s just part of the deal with self-hosting. With great power comes great responsibility. Greenhouse is supposed to empower self-hosters. If they don’t have enough power to become dangerous, I would have failed!

      I’d like to point out that it’s possible to publish stuff you didn’t mean to publish with Google Drive or other SaaS products as well. Isn’t that how a whole lot of “hacks” happen? Someone finds an open S3 bucket somewhere with global anonymous read permissions. This isn’t a problem unique to Greenhouse.

      The self-hoster’s responsibility for not just the configuration and data, but also the process(es), is unique to Greenhouse, though. So insecure processes like ancient Apache versions, vulnerable WordPress plugins, etc., are a concern.

      That said, if someone is setting up WordPress on their own server, like a Raspberry Pi for example, I don’t think they are risking much more than they would be if they were setting up WordPress on a cloud instance.

      In terms of the incidental risks that all Greenhouse users have to take, I’ve done almost everything I can to minimize them, and I am committed to continuously improving the security of the system in general:

      • It’s not using a VPN; it’s using a reverse tunnel. This means that only the absolute minimum (what the user explicitly requested) is available to the servers that Greenhouse operates and to the internet at large.

      • The software running on the self-hoster’s computer is in charge of what ports can be connected to / what folders are public. It doesn’t take orders from anywhere. The “client” (the self-hosting software) is as authoritative as possible. All of this helps mitigate attacks that originate on the servers that Greenhouse operates.

      • The self-hosting software runs under its own operating system user account. This helps minimize the blast radius in case it were to become compromised somehow. It also allows the self-hosting software to have exclusive read access on its secret files, like its TLS encryption keys.

      • I am bundling in an up-to-date version of Caddy Server. As far as I know, Caddy Server does not contain any serious vulnerabilities.

      • All of the communication between the processes that make up the self-hosting software uses TLS, and all but one of those connections uses mTLS. With some additional research into how the different operating systems handle secret values that only specific programs should have access to, I can probably get it to 100% mTLS. (A rough sketch of how the dial-out and mTLS pieces fit together follows this list.)

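      To give a feel for how the first and last points fit together, here is a heavily simplified sketch in Go. It is illustrative only, not Greenhouse’s actual code: the relay address, certificate file names, and the single forwarded port are all invented. The client dials out to the relay over mutual TLS and forwards exactly one local port back over that connection, so nothing else on the machine is reachable:

      ```go
      // Heavily simplified sketch of a dial-out ("reverse tunnel") client using
      // mutual TLS. All names and files here are invented for the example. The
      // relay can only reach what travels over connections this client opened.
      package main

      import (
          "crypto/tls"
          "crypto/x509"
          "io"
          "log"
          "net"
          "os"
      )

      func main() {
          // Our client certificate proves this machine's identity to the relay.
          cert, err := tls.LoadX509KeyPair("client.crt", "client.key")
          if err != nil {
              log.Fatal(err)
          }
          // Pin the relay's CA so we only ever talk to the relay we expect.
          caPEM, err := os.ReadFile("relay-ca.crt")
          if err != nil {
              log.Fatal(err)
          }
          pool := x509.NewCertPool()
          pool.AppendCertsFromPEM(caPEM)
          conf := &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool}

          // One outbound connection carries one visitor's stream; a real
          // implementation would multiplex many streams over one connection.
          for {
              remote, err := tls.Dial("tcp", "relay.example.com:443", conf)
              if err != nil {
                  log.Fatal(err)
              }
              // The ONE local service the user chose to expose; nothing else
              // on this machine is reachable through the tunnel.
              local, err := net.Dial("tcp", "127.0.0.1:8080")
              if err != nil {
                  log.Fatal(err)
              }
              go func() { io.Copy(local, remote); local.Close() }()
              io.Copy(remote, local)
              remote.Close()
          }
      }
      ```
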
      1. 3

        Wow thank you for your thoughtful and considered response!

        I totally agree. I do think it’s ultimately the user’s responsibility. Part of why I asked is that I’ve struggled with this myself.

        Something I’ve been trying to write forever is this idea I have called OpenParlour. It’s meant to be software that will allow anyone to run a small forum server for their small group - e.g. friends, family, hobby groups or whatever. I’d always thought I’d use something like ngrok to enable them to open a port to the internet, but I’ve struggled with the idea of responsibility around this.

        Sure, I’ll do everything in my power to ensure that the server is secure, but ultimately that only goes so far. An attacker might leverage flaws in the language or tools I use, as one example. And I keep thinking, “So I’m inviting people to create a security risk for themselves that they might not be ABLE to understand or come to grips with!”

        However, I think what you’re proposing is something a little different, and I think the ethics might be a bit clearer for you because the nature of the service you’re offering (host anything) implies that the user takes responsibility for the risks.

        Good luck and I look forward to tracking your progress!

        1. 4

          “An attacker might leverage flaws in the language or tools I use … So I’m inviting people to create a security risk for themselves that they might not be ABLE to understand or come to grips with!”

          I used to work in full-stack development / DevOps for media and IoT companies… The way I see it, no matter where you look, it’s imperfect software all the way down, whether you are using a service offered by “the professionals” or something open, libre, and powered by “passionate hobbyists”.

          In my experience in the industry, oftentimes the professionals and the executives steering them are not incentivized to give a shit about security anyways. So I would like to believe that in many cases the “little guy” products are actually more secure. Everyone is taking a huge risk by playing the centralized SaaS game anyways. I think that, by comparison, it’s perfectly acceptable for us to ask folks to trust our software! Especially when we do everything in the open and discuss / accept contributions in the open as well!

      2. 3

        “from behind NAT”

        NAT does not provide any protection or security for end users.

        “other things that generally protect end users.”

        “Other things” being a firewall, which restricts (but does not prevent) access to internal services, and ultimately authentication and encryption, i.e. mTLS, which actually prevent unauthorized access. And at that point the service should be secure enough to potentially be accessible on the Internet.

      3. 4

        If you live in a dormitory, or if you get internet access via Wi-Fi from your neighbor or by tethering to your phone, you may not be able to make the necessary changes to your router’s configuration even if you know how.

        Another use case is things like carrier-grade NAT, where you share an IP address with multiple people. I can play with my router settings all I want, but there’s no way I can expose any service to the internet. Workarounds exist, but they’re a bit annoying at times.

        Note that http://greenhouse.io already exists. If this takes off you may find yourself on the receiving end of some unpleasant letters. Just FYI.