1. 31
  1. 21

    Another remote access app may be welcome, but the claims are exuberant, especially when compared to a codebase and specification that’s been around a long time and proved its worth and mettle.

    What Is It?

    It’s like SSH, but more secure, and with cool modern features. It is not an implementation of SSH, it is a new, modern protocol.

    Does Oxy have…

    + Years of testing and battle hardening? No, it's super green. But hey, if you try it you'll help make it less green!

    Questions I’d like to see answered:

    • Who designed it?
    • Who implemented it?
    • Who reviewed the design and implementation?
    • Who evaluated the use and implementation of the crypto?
    • How was it tested to be shown “more secure”?

    The site links a protocol specification. Excellent. But that’s just the beginning.

    Curious to know the same about OpenSSH? See the specifications page. Want to know more about the project and where it came from? See the home and history pages.

    Oxy is a new project and it takes time to build out a project site. Keep working on both the app and the site. But seriously, don’t expect security-minded folks to believe it’s “like SSH, but more secure” based on assertion. Deliver some serious smack-down proof. And then do it long enough to prove the project has the chops and the commitment to do it day in and day out for decades.

    Security is about more than claims. Until we see more evidence, I’ll watch Oxy and test it, but I’m not dropping ssh and nearly 20 years of demonstrated commitment to the principles and procedures for delivering secure software.

    1. 14

      Yeah…. OK.

      curl -O https://oxy-secure.app/oxy
      chmod +x oxy
      ./oxy --help

      I know this practice has, again and again, been discussed to various levels of “this is no different than your package manager over HTTPS.”

      But, this is shocking: “trust the security of your network to a brand new protocol implemented in this convenient, 3 command installable, binary. Be sure to download it as root, just for good measure. What do you have to lose?”

      1. 6

        I upvoted you, and then looked closer realizing that they aren’t asking people to run a bash script from the Internet. Perhaps you misread, as I did, what the instructions were saying?

        It’s basically standard practice for people to download compiled binaries from the Internet and run them (certainly on Windows and macOS). curl is not being run as root. And just below those instructions are compile-your-own binary instructions, for those who would rather compile it themselves:

        cargo install --git https://github.com/oxy-secure/oxy
        ~/.cargo/bin/oxy --help

        Which, incidentally, has the exact same trust assumptions (X.509) as the downloaded binary.

        1. 5

          they aren’t asking people to run a bash script from the Internet.

          No. What they’re doing is kind of worse. Instead of telling you to download and run a bash script from the internet (which you could reasonably inspect first), they’re telling you to run a binary that obscures the fact that it’s doing something malicious.

          And, sure, you can cargo install it after auditing the source, but can I be reasonably sure that the binary is derived from that source? No. I can’t. Assuming reproducible builds, I could reproduce the build and compare a checksum, but that binary could have been created from a slightly different, malicious source tree. When given the choice of downloading a recently compiled binary, and waiting 30 seconds for rust to build it (with the cargo instructions), maybe I say, “eh, that’s OK, I’ll just take the pre-compiled one.” Social engineering at its finest! Tell people they can wait, or have it right now….

          But, let’s assume that it’s “accepted practice” to download and run random binaries off of a mysterious website that doesn’t even list its authors… Maybe we should… I don’t know… stop doing that????

          But, but, but, package servers are just protected by X.509, too! you say. Sure. The transport is protected by that. But, there’s also (usually) some level of trust associated with a package server. In the case of most distributions you’ve got signed packages. In the case of homebrew, you have the ability to choose where you get your formulae from, which has implications in the trust model. I don’t know much more about homebrew, but I assume they at least compare known checksums from the formulae to checksums of downloaded source tarballs?

          https://oxy-secure.app/oxy was put on the internet by someone – looking at the commit history, https://github.com/jennamagius – whose discoverable online presence is: “Hi, I’m Jenna” (via https://jenna.app/ redirected from jennamagius.github.io). If that isn’t suspicious to you… godspeed.

          1. 2

            I’ll double down on that, saying this is a remote access tool. Those are front doors for the good folks or backdoors for the bad folks, depending on how they’re implemented and how much (if any) monitoring is happening. High-value target. One should only use a RAT that’s been thoroughly vetted by people with a track record of breaking bad protocols, crypto, etc. Actually, these are such necessary and risky tools that they’re among the few I think deserve all the assurance we can throw at them. All the way up to formal proof. Plus, the ability for much independent verification.

            Until multiple, independent assessments confirm quality/security, I’d ignore whatever the new RAT tool is and stick with OpenSSH or something with lots of review and use in the field. Those wanting improvements can enhance its code or UI piece by piece, carefully testing and vetting the changes, for now. For reliability, too, since more bugs will have been shaken out. The next worst thing to hackers getting into your system is you not getting into your own system due to immature software breaking. It tends to break at the worst times, too.

            1. 1

              Also, I realize I didn’t really respond to your specific claim “… (certainly on Windows and macOS)”

              I don’t use Windows anymore – not in 18 years at this point. But, my understanding is that they are adopting a “store” model to combat this practice. The same with Apple and the Mac App Store. It’s true that you can still download and run random Apps on OS X, but you’re given plenty of warnings, and the practice is pretty discouraged by Apple.

              If for some reason someone like GitHub decides to not use the Mac App Store to distribute Atom, well, it’s perhaps the case that you trust GitHub to host and provide an untampered-with binary, because you actually trust your other data to GitHub.

              1. 3

                I will reply to both of your replies here.

                So, you raised several concerns. Let’s go through them again.

                Be sure to download it as root, just for good measure.

                I pointed out that the authors (whoever they are) never suggested you do this, and their instructions do not tell people to do that. So, that’s one down; let’s move on to the next concern.

                they’re telling you to run a binary that obscures the fact that it’s doing something malicious.

                This appears to be your other main concern, the basic idea of installing software not-from-source.

                This is a common practice on macOS, Windows, and Linux. I would venture to say that 99.99% of users do this.

                But, in a later comment, you bring up app stores:

                But, my understanding is that they are adopting a “store” model to combat this practice. The same with Apple and the Mac App Store. It’s true that you can still download and run random Apps on OS X, but you’re given plenty of warnings, and the practice is pretty discouraged by Apple.

                I will point out that your original comment, to which I was replying, never mentioned anything about being upset that oxy was not registered in an app store. Yes, you did mention package managers, but both app stores and package managers are known to distribute malware from time to time, and many of them come with differing trust assumptions (some worse than others).

                So a package manager or app store is no guarantee that the binary you’re installing is safe at all, and you’re back to square one with your trust assumptions.

                https://jenna.app/ redirected from jennamagius.github.io). If that isn’t suspicious to you… god speed.

                Now this is a perfectly reasonable concern. Had you raised the trustworthiness of the particular author of the software as your concern in your original comment to which I replied, I would never have replied, because that’s a legitimate concern.

                1. 1

                  I pointed out that the authors (whoever they are) never suggested you do this, and their instructions do not tell people to do that.

                  Of course they didn’t. I was adding a figurative eye roll, which I’m pretty sure went right past you—I am sorry that I failed to make that more clear.

                  Naturally, some number of people installing this software in the recommended way will want to copy this into /usr/bin, or /usr/local/bin, though. How many people blindly ./configure && make && sudo make install?

                  On to the next point!

                  package managers

                  I am not upset by the fact that it’s not in a package manager. I am upset that it’s promoting a shitty practice, which has no auditability, no update mechanism, and no oversight whatsoever.

                  Package managers are not perfect, as you have pointed out. However, they represent an additional check in the process: someone has to think twice about including it, and, in doing so, takes some responsibility, and a hit in reputation/trust, when they do something that results in malware, or something else malicious. At least, that should be the case…

                  trustworthiness of author

                  It stands to reason that a person creating a security tool such as this, and claiming it is so much better than other solutions, understands that the installation practice being described is controversial. This is at least doubly/quadruply true for a RAT tool.

                  I see no reason why skepticism to the 10,000th degree isn’t being applied here…

                  Have we all just given up on security? I mean…

                  1. 3

                    I see no reason why skepticism to the 10,000th degree isn’t being applied here…

                    Have we all just given up on security? I mean…

                    It is not too uncommon for the author of a piece of security software to want to remain anonymous.

                    There is nothing wrong with expressing concern, but if you do it, it should be (a) a relevant, legitimate concern, and (b) balanced appropriately in the event that your suspicions of the project turn out to be misplaced. Someone out there did, after all, spend a lot of time putting effort into creating an alleged improved, rustified RAT, and if their work is legitimate they deserve kudos for that.

                    1. 2

                      I’m one of the people that pushes “look at the work, not the author.” I’ll take software from the NSA if it’s rigorously vetted by 3rd parties I trust, with a matching signature. That philosophy is what old security certifications tried to achieve at the highest levels. However, I do accept looking at the author as a heuristic for making quick decisions if not much else is available. One thing we see a lot in INFOSEC is that people good at secure protocols have a track record of… writing secure code or protocols. They get good by publishing some work, getting it reviewed, often getting their asses handed to them, fixing it, and repeating. It might be shared more privately with instructors or fellow hackers doing the same process. There will be references, prior work, prior writings describing work… something to evaluate… for either their actual identity or the alias they stick with.

                      The other heuristic is that unproven or unevaluatable people publishing new protocols get it wrong in security-breaking ways. This happens so much it should be assumed by default. Insecurity should be assumed by default anyway, but especially with unknown developers. Again, the best route is evaluating the protocol and code itself. That said, people already have a working protocol and limited time on their hands. The heuristic might be used to save time by avoiding unestablished or unvetted authors’ work, since 99+% of the time it will be broken anyway. In this case, avoiding work based on a strange author is about saving time and/or avoiding insecurity.

                      So, there are two ways of looking at the unknown author that would lead one to avoid their work until someone with the right skills and spare time to donate evaluates it carefully.

                      1. 3

                        That’s certainly fair. I don’t really disagree with any of that. I’m not suggesting anyone feel like wasting their time, only that critiques be on-point and people not be berated for doing good work (if that’s what they did).

                        Speaking of on-point critiques, I’m surprised nobody raised the concern that these releases are not GPG signed. That should be standard practice for all software, and certainly security-critical software.

                        1. 1

                          I’m surprised nobody raised the concern that these releases are not GPG signed.

                          Unless this more or less anonymous person has a key signed by many trusted keys/people, how would that increase trust?

                          My points above about package servers signing, or at least providing checksums, point at trust in the actual distributed assets. I may not trust the particular $SIGNER of a package, but I might trust others who trust $SIGNER, and accept that if $THEY trust $SIGNER, it’s probably OK for me to trust $SIGNER, too. That’s the model of the Web of Trust, and the model every package server I know of (whether from freebsd, openbsd, or some random GNU/Linux distribution) uses.

                          1. 1

                            The point of GPG signing releases has nothing to do with web-of-trust.

                            It is about establishing a direct line of trust to the author of the software to protect against third-party tampering. It doesn’t matter if they’re anonymous.

                            1. 1

                              Yeah, agreed. I honestly think it’s quite frustrating how GPG entangles web-of-trust with its other features. It creates a lot of confusion.

                              1. 2

                                @itistoday, earlier in this long thread you suggest:

                                Which, incidentally, has the exact same trust assumptions (X.509) as the downloaded binary.

                                (To be clear: this was in response to download the binary, vs download the source and compile the binary)

                                So, you trust the author’s X.509 certificate enough to assume it’s not tampered with on download, but don’t trust that the author put it there in the first place? And who even has the authority to make a release? We don’t know! So, we still have to be suspicious even if it’s signed.

                                Let’s discuss this scenario:

                                I’m a l33t h4x0r and I pwn3d oxy-secure.app’s servers. I want to put a rogue oxy up there. Since the key that signed the old oxy binary is just a one-off anyway (because it’s unknown to everyone), I’ll sign my malicious oxy binary with a one-off key, too, and update the HTML referencing how to get this new key! My l33t social engineering skills suggest that I should use the same email address and, for the name, use “Original Name - NEW KEY” (or something else that implies I’m still the same person, I just made a mistake).

                                $ gpg --gen-key
                                $ gpg --detach-sign malicious-oxy

                                I replace https://oxy-secure.app/oxy and the signature file with my malicious ones, and even publish my new public key somewhere, and no one is the wiser! (I then twist my handlebar mustache, and let out an evil snicker)

                                NOW, if as a user, I happened to import the previous signing key, I might notice that this is different and it might raise some eyebrows. Just like I might notice that the SSL cert’s fingerprint changed as it started pointing to my server oxy-notsosecure.app/oxy… But, given this author is unknown, I also might not bat an eye at my plausible explanation of: “oh, what an idiot! They forgot to backup their key!”

                                If I’m being fair, yes, a signed binary, even with an unestablished key can help here. It introduces additional levels of potential doubt at the authenticity of the binary. But, even if the original oxy is signed, I’m still taking a giant risk by accepting the fact that I’m downloading a random binary from the internet built by some random anonymous person, and they may (or may not) have malicious intent, or not have the skills to back up the claims they’ve made (in the case where it’s actually not malicious intent).

                                If the key is known to other people I know, as it’s part of the web of trust, it’s a little easier to believe that the risk is less malicious intent and more, “the author might still be making exuberant claims.”
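
                                The scenario above turns on detached signatures, so it’s worth being precise about what a signature does and doesn’t prove. A minimal offline sketch (throwaway key, pretend binary, all names hypothetical):

                                ```shell
                                # Throwaway GPG home and key, for demonstration only.
                                export GNUPGHOME="$(mktemp -d)"
                                printf '%s\n' '%no-protection' 'Key-Type: RSA' 'Key-Length: 2048' \
                                  'Name-Real: oxy-release' 'Name-Email: release@example.com' \
                                  'Expire-Date: 0' '%commit' | gpg --batch --gen-key

                                # Sign a pretend release binary, producing a detached signature.
                                echo 'pretend oxy binary' > oxy
                                gpg --batch --output oxy.sig --detach-sign oxy

                                # Verification succeeds against the untouched binary...
                                gpg --verify oxy.sig oxy

                                # ...and fails the moment the binary is altered.
                                echo tampered >> oxy
                                gpg --verify oxy.sig oxy || echo 'tampering detected'
                                ```

                                Note that passing verification only shows the binary matches whatever key signed it; it says nothing about who controls that key, which is exactly the substitution problem sketched above.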

                      2. 1

                        This is no longer productive, and I am taking your response as:

                        a) my concern is irrelevant, and
                        b) I’ll have egg on my face when this turns out to be the RAT that saves us all.

                        In response, I am just going to invite you to @akpoff’s well written comment, which also expresses concerns. Maybe they are more “relevant”: https://lobste.rs/s/3hrwqf/oxy_security_focused_remote_access_tool#c_0hsv4p

          2. 6

            Are you confident that every single user of your systems is going to out-of-band verify that that is the correct host key?

            If your production infrastructure has not solved this problem already, you should fix your infrastructure. There are multiple ways.

            1. Use OpenSSH with an internal CA
            2. Automate collection of server public ssh fingerprints and deployment of known_hosts files to all systems and clients (we do it via LDAP + other glue)
            3. Utilize a third party tool that can do this for you (e.g., krypt.co)

            Your users should never see the message “the authenticity of (host) cannot be established”.
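
            For option 1, a minimal sketch of what the CA setup involves (ssh-keygen only; hostnames and filenames are placeholders):

            ```shell
            # Work in a scratch directory; everything here is illustrative.
            cd "$(mktemp -d)"

            # 1. Create the CA key (in practice, keep it offline and well guarded).
            ssh-keygen -q -t ed25519 -f host_ca -N '' -C 'internal host CA'

            # 2. Generate a host key and sign it with the CA.
            ssh-keygen -q -t ed25519 -f ssh_host_ed25519_key -N ''
            ssh-keygen -q -s host_ca -I web01 -h -n web01.example.com \
              ssh_host_ed25519_key.pub   # writes ssh_host_ed25519_key-cert.pub

            # 3. Clients trust every host the CA signs via one known_hosts line.
            echo "@cert-authority *.example.com $(cat host_ca.pub)" >> known_hosts
            ```

            The server then points HostCertificate at the -cert.pub file in sshd_config, and users never see a TOFU prompt for hosts in that domain.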

            1. 4

              Makes me wonder how Oxy actually authenticates hosts. The author hates on TOFU but mentions no alternatives AFAICS, not even those available in OpenSSH?

              1. 3

                It only authenticates keys, and it makes key management YOUR problem. See https://github.com/oxy-secure/oxy/blob/master/protocol.txt for more details.

                I.e. you have to copy keys from the server to the client before the client can connect (and possibly the other way, from the client to the server, depending on where you generate them).

                1. 1

                  Key management is already your problem.

                  ssh’s default simply lets you pretend that it isn’t.

                  1. 2

                    Very true. I didn’t mean to imply otherwise.

            2. 5

              Because knock packets use a timestamp to limit knock-reuse attacks, servers and clients must have synchronized clocks. Clock skew greater than 60 seconds is likely to cause the knock process to fail, resulting in a “connection refused” error when establishing the TCP connection.

              Given this, if my server’s clock does fall out of sync, how do I connect to it to fix it if Oxy is my only access method?

              1. 1

                Remember to set up ntp when you set up your server.

                1. 1

                  Software crashes and networks disconnect. Even if you start NTP you can still end up with an inaccessible server.

                  1. 0

                    So if you have no network connectivity for ntp to update, how are you going to log in with any remote access tool?

                    Software crash of ntpd is unrealistic.

                    1. 1

                      NTP has an exponential backoff mechanism for retrying connections, so it’s possible for the clock to fall out of sync and for ntpd not to have ticked over yet when the network comes back up, causing time skew while the machine is otherwise accepting network connections. Depending on the configuration, it could also simply shut down when it cannot get a connection and not restart when the connection is back.

                      Saying that software crashes are unrealistic is in itself an unrealistic view in my opinion. Software WILL crash.

                      Allowing clock skew to prevent connections to the machine definitely adds more moving parts to the remote access process and adds risk to that process failing.

              2. 5

                Having port knocking baked into the toolset itself is a huge win; setting up port knocking for other daemons isn’t exactly hard, but it’s non-trivial enough that lots of people simply don’t bother.
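
                For comparison, a standalone setup along those lines with knockd looks roughly like this (a sketch: the sequence, log path, and iptables command are illustrative):

                ```
                [options]
                    logfile = /var/log/knockd.log

                [openSSH]
                    sequence    = 7000,8000,9000
                    seq_timeout = 5
                    tcpflags    = syn
                    command     = /sbin/iptables -A INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
                ```

                Workable, but it’s an extra daemon to install, configure, and keep running, which is presumably why folding it into the tool itself is attractive.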

                I applaud this effort, and at face value the only thing it loses to OpenSSH on is however many years of hardening, which of course will change with time.

                Cool project to follow.

                1. 8

                  It’s like SSH, but more secure, and with cool modern features.

                  And less portable and will take forever to compile 😕

                  1. 5

                    Both arguments will probably become less and less valid as time passes, though…

                    1. 4

                      How often do you compile vs. use?

                      1. 3

                        As someone involved in the packaging team on FreeBSD: I’m compiling all the time, and we have lots of users that prefer to compile ports instead of use packages for various reasons as well.

                        1. 5

                          I meant, after you compile, how often do you then use the resulting compiled artifact? I submit that the ratio of time spent compiling against time spent using approaches zero for most anyone, regardless of how long it takes to compile the thing being used.

                          1. 1

                            That depends on various factors. This is an OS with rolling-release packages. If I compile my own packages and update regularly, I will be re-compiling Oxy every time a direct dependency of Oxy gets updated in the tree.

                            1. 4

                              I’m familiar with FreeBSD ports :)

                              It sounds like all you’re saying is, “All Rust programs take an unacceptably long time to compile,” which, fine, but you can see how that sounds when it’s laid out plainly.

                              1. 5

                                To be fair to @feld, compile times continue to be a top request from users, and something we’re constantly working to improve.

                                1. 4

                                  It’s appreciated. My #2 complaint as someone involved in packaging echoes the problems with the Go ecosystem: the way dependencies are managed is not great. Crates are only a marginal improvement over the “you need a thousand checkouts from github of these exact hashes” issue we encounter with Go.

                                  We want a stable ecosystem where we can package up the dependencies and lots of software can use the same dependencies with stable SEMVER release engineering. Unfortunately that’s just not the reality right now, so each piece of software we package comes with a huge laundry list of distfiles/tarballs that need to be downloaded just to compile. As a consequence, it also isn’t possible for someone to install all of a program’s dependencies from packages so they can do their own local development.

                                  Note: we can’t just cheat and use git as a build dependency (or whatever other tooling that wallpapers over git). Our entire package building process has to happen in a cleanroom environment without any network access. This is intentionally done for security and reproducibility.

                                  edit: here’s a particularly egregious example in Go. Look at how many dependencies we have to download that cannot be shared with other software. This all has to be audited and tracked by hand as well, which makes even minor updates of the software a daunting task.


                                  1. 3

                                    That use-case should be well supported; it’s what Firefox and the Linux distros do. They handle it in different ways: Firefox uses vendoring, while Debian/Fedora convert cargo packages to .deb/.rpm and use them like any other dependency.

                                    Reproducibility has been a goal from day 1; that’s why lockfiles exist. Build scripts are the tricky bit, but most are good about it. I don’t know of any popular package that’s not well behaved in this regard.

                                    1. 1

                                      I’m fairly certain feld wants the OS packager to manage the dependencies, not just a giant multi-project tarball.

                                      1. 1

                                        Sure; that’s what I said Linux distros do.

                                    2. 2

                                      Application authors should just publish release tarballs with vendored dependencies.

                                      Check out this port: https://bugs.freebsd.org/bugzilla/attachment.cgi?id=194079&action=diff It looks like any normal port, just with BUILD_DEPENDS=cargo:lang/rust. One single distfile. That contains all the Rust stuff.
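
                                      From the author’s side, producing such a tarball is roughly this (a sketch; names are illustrative, and a throwaway project stands in for the real one):

                                      ```shell
                                      # Throwaway project standing in for the real application.
                                      cd "$(mktemp -d)" && cargo init --name demo --vcs none

                                      # Copy every dependency into ./vendor so builds need no network.
                                      cargo vendor vendor

                                      # Point cargo at the vendored copies (cargo vendor prints this snippet).
                                      mkdir -p .cargo
                                      printf '%s\n' '[source.crates-io]' 'replace-with = "vendored-sources"' '' \
                                        '[source.vendored-sources]' 'directory = "vendor"' > .cargo/config.toml

                                      # One tarball with sources plus all dependencies, buildable in a cleanroom.
                                      tar czf ../demo-vendored.tar.gz .
                                      ```

                                      The packager then needs exactly one distfile and no network access at build time.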

                      2. 4

                        Adding an obvious link to the GitHub page would, I think, be a worthwhile addition. It wasn’t until reading the comments here on lobste.rs that I realized this was open source. I was assuming I had to take the author’s word for it when they said it was written in rust etc.

                        For the record, here’s the github page if anyone else fell victim: https://github.com/oxy-secure/oxy

                        1. 2

                          I was assuming I had to take the author’s word for it when they said it was written in rust etc.

                          This sounds like a dangerous mindset. Rust is likely a safer language for writing software that talks to the network, than, say C. However, which language it’s written in has no bearing on the numerous claims it makes about its security and stature as a “secure” RAT…well, other than the memory safety claim…assuming it doesn’t use unsafe anywhere, of course.

                          1. 0

                            Calm down broham. I don’t believe everything I read on the Internet.

                        2. 4

                          “Protocol-level faculties that let you read, write, and hash specific chunks out of the middle of large files without making you transfer the whole large file? does SSH have that? Nope.” - sftp does have that, actually. SFTP is basically just a protocol that forwards file descriptors.

                          two things that always bothered me about openssh:

                          The naming id_rsa and id_rsa.pub means tab completion can cause you to accidentally send your secret key. I would have called it id_rsa.priv.

                          It would be neat if it had more ways to support machine-to-machine workflows. I use ssh to forward unix sockets to do secure cluster networking; force commands are ok, but they are not the easiest to use.

                          1. 1

                            You don’t need the id_rsa.pub file. In fact, I delete mine.

                            If you’re using files, you can use:

                            ssh-keygen -y -f ~/.ssh/id_rsa

                            If you use ssh-agent you can use:

                            ssh-add -L

                            If you’re using a smartcard you can use:

                            pkcs15-tool --read-ssh-key 69 # or whatever your key number is

                            and so on…

                          2. 4

                            hah, the domain name and headline here made me think it’s some fancy paid VNC client for Macs. I was pleasantly surprised when I clicked :)

                            Also, “hidden service” here means port knocking, not Tor onion services.

                            1. 2

                              The UDP port knocking part is funky, in a good way. It is the reason I’m going to play with Oxy a little.

                              Mind you, port knocking daemons already exist that do something similar: run some code whenever someone knocks on a port. That code can open holes in the firewall, to allow access to the hidden service, from the knocking source. It shouldn’t be too hard to restrict things further so not only the knocked ports matter, but the contents of the knocks too. This feature does not need to live in the daemon itself - yet, it is novel for Oxy to include such a feature.

                              1. 2

                                This is really neat! The only issue is the ability to use it in different environments. macOS compatibility would really make this more usable. From the looks of the issues list, Windows as a client is never going to happen. Android would be really cool too, but very unlikely.

                                1. 4

                                  Try the dev branch, my PR for kqueue support was just merged. Should work on macOS.

                                  1. 2

                                    Nifty! That got me past the transportation compilation, but in the end it still chokes on some of the *nix bindings (getgrouplist, setgroups). Super promising though, and thank you for dropping me the note :)