1. 7

    I would rather have seen the HardenedBSD code merged back into FreeBSD. I’m sure there are loads of reasons it wasn’t, but I’ve never managed to find them; their website doesn’t make that clear. I imagine the reasons are mostly non-technical.

    That said, it’s great that HardenedBSD is now set up to live longer, and I hope it has a great future. It occupies a niche that otherwise only OpenBSD fills, and it’s great to see some competition/diversity in this space!

    1. 13

      Originally, that’s what HardenedBSD was meant for: simply a place for Oliver and me to collaborate on our clean-room reimplementation of grsecurity for FreeBSD. All features were to be upstreamed. However, we spent two years attempting to upstream ASLR. That attempt failed and resulted in a lot of burnout with the upstreaming process.

      HardenedBSD still attempts to upstream a few things here and there, but usually simpler things: we contributed a lot to the new bectl jail command, and we’ve hardened a couple of aspects of bhyve, even giving it the ability to work in a jailed environment.

      The picture looks a bit different today. HardenedBSD now aims to give the FreeBSD community more choices. Given grsecurity’s/PaX’s inspiring history of pissing off exploit authors, HardenedBSD will continue to align itself with grsecurity where possible. We hope to perform a clean-room reimplementation of all publicly documented grsecurity features. And that’s only the start. :)

      edit[0]: grammar

      1. 6

        I’m sorry if this is a bad place to ask, but would you mind giving the pitch for using HardenedBSD over OpenBSD?

        1. 19

          I view any OS as simply a tool. HardenedBSD’s goal isn’t to “win users over.” Rather, it’s to perform a clean-room reimplementation of grsecurity. By using HardenedBSD, you get all the amazing features of FreeBSD (ZFS, DTrace, Jails, bhyve, Capsicum, etc.) with state-of-the-art and robust exploit mitigations. We’re the only operating system that applies non-Cross-DSO CFI across the entire base operating system. We’re actively working on Cross-DSO CFI support.

          I think OpenBSD is doing interesting things with regards to security research, but OpenBSD’s fundamental paradigms may not be compatible with grsecurity’s. For example: by default, neither HardenedBSD nor OpenBSD allows creating an RWX memory mapping with mmap(2). However, HardenedBSD takes this one step further: if a mapping has ever been writable, it can never be marked executable (and vice-versa).

          On HardenedBSD:

          #include <sys/mman.h>
          #include <unistd.h>

          void *mapping = mmap(NULL, getpagesize(), PROT_READ | PROT_WRITE | PROT_EXEC,
              MAP_ANON | MAP_PRIVATE, -1, 0); /* The mapping is created, but RW, not RWX. */
          mprotect(mapping, getpagesize(), PROT_READ | PROT_EXEC); /* <- this will explicitly fail */

          munmap(mapping, getpagesize());

          mapping = mmap(NULL, getpagesize(), PROT_READ | PROT_EXEC,
              MAP_ANON | MAP_PRIVATE, -1, 0); /* <- Totally cool */
          mprotect(mapping, getpagesize(), PROT_READ | PROT_WRITE); /* <- this will explicitly fail */
          

          It’s the protection around mprotect(2) that OpenBSD lacks. Theo’s disinclined to implement such a protection, because users will need to toggle a flag on a per-binary basis for those applications that violate the above example (web browsers like Firefox and Chromium being the most notable examples). OpenBSD implemented WX_NEEDED relatively recently, so my thought is that users could use the WX_NEEDED toggle to disable the extra mprotect restriction. But, not many OpenBSD folk like that idea. For more information on exactly how our implementation works, please look at the section in the HardenedBSD Handbook on our PaX NOEXEC implementation.

          I cannot stress strongly enough that the above example wasn’t given to be argumentative. Rather, I wanted to give an example of diverging core beliefs. I have a lot of respect for the OpenBSD community.

          Even though I’m the co-founder of HardenedBSD, I’m not going to say “everyone should use HardenedBSD exclusively!” Instead, use the right tool for the job. HardenedBSD fits 99% of the work I do. I have Win10 and Linux VMs for those few things not possible in HardenedBSD (or any of the BSDs).

          1. 3

            So how will JITs work on HardenedBSD? Is the sequence:

            void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                MAP_ANON | MAP_PRIVATE, -1, 0);
            /* write data (emit machine code into p) */
            mprotect(p, len, PROT_READ | PROT_EXEC);
            

            allowed?

            1. 5

              By default, migrating a memory mapping from writable to executable is disallowed (and vice-versa).

              HardenedBSD provides a utility that users can use to tell the OS “I’d like to disable exploit mitigation just for this particular application.” Take a look at the section I linked to in the comment above.

          2. 9

            Just to expand on the differing-philosophies point: OpenBSD would never bring ZFS, Bluetooth, etc. into the OS, something HardenedBSD does.

            OpenBSD has a focus on minimalism, which is great from a maintainability and security perspective. Sometimes that means you miss out on things that could make your life easier. That said, OpenBSD still has a lot going for it. I run both, depending on need.

            If I remember right, just the ZFS sources by themselves are larger than the entire OpenBSD kernel sources, which gives ZFS a LOT of attack surface. That’s not to say ZFS isn’t awesome, it totally is, but if you don’t need ZFS for a particular compute job, leaving it out gives attackers a much smaller surface.

            1. 5

              If I remember right, just the ZFS sources by themselves are larger than the entire OpenBSD kernel sources, which gives ZFS a LOT of attack surface.

              I would find a fork of HardenedBSD without ZFS (and perhaps DTrace) very interesting. :)

              1. 3

                Why fork? Just don’t load the kernel modules…

                1. 4

                  There have been quite a number of changes to the kernel to accommodate ZFS. It’d be interesting to see whether the kernel could be made simpler with ZFS fully removed.

                  1. 1

                    You may want to take a look at DragonFly BSD then.

              2. 4

                Besides being large, I think what makes me slightly wary of ZFS is that it also has a large interface with the rest of the system, and was originally developed in tandem with Solaris/Illumos design and data structures. So any OS that diverges from Solaris in big or small ways requires some porting or abstraction layer, which can result in bugs even when the original code was correct. Here’s a good writeup of such an issue from ZFS-On-Linux.

        1. 5

          Working some more on my “Security best practices” pages: https://www.zie.one/en/security/ And I’m going to help the local dog park at their work party.

          1. 2

            I used migadu.com for a long time, but am switching to self-hosted kolab now, as I want the synced calendars/reminders/etc across devices.

            I like migadu because they are 1) outside the US, 2) charge based on USAGE, not on the number of accounts/domains (of which I have a bunch), which is nice. Also, they are cheap. But I don’t like that they run a JS-based mail server, which is not so fabulous.

            1. 2

              Seems to fly in the face of the reproducible-builds movement :) One could compile it reproducibly, verify happiness, and then recompile with this for security, I guess.

              1. 1

                It was actually one of my counterpoints to reproducible builds. Reproducible builds in deployment = reproducible attacks. The diversity approach makes systems as different as possible to counter attacks. In the big picture, the attacks reproducible builds address are rare, whereas the attacks diversified compiles address are really common. Better to optimize for the common case. So, diversified builds are better for security than reproducible builds in the common case.

                So, I pushed the approach recommended by Paul Karger, who invented the attack Thompson wrote about later. The early pioneers said we needed (at a minimum) secure SCM, a secure distribution method, safe languages to reduce accidental vulnerabilities, verified toolchains for compiles, and customers building locally from source. Customers should also be able to re-run any analyses, tests, and so on. This was standard practice for systems certified to high-assurance security (Orange Book B3/A1 classes). We have even better tools now for doing those exact things commercially, FOSS, formally verified, informal-but-lean, and so on. So, we can use what works, with reproducible builds still an option, especially for debugging.

                1. 8

                  With load-time randomization you can both have and eat that reproducible-build cake.

                  1. 1

                    Cool idea, thanks for posting!

                    1. 1

                      That’s news to me. Thanks for the tip!

                    2. 1

                      It was actually one of my counterpoints to reproducible builds.

                      Running a build for each deployment is extremely impractical. Also, when binaries are generated and signed centrally, you have guarantees that the same binary is being tested by many organizations. Finally, different binaries will behave slightly differently, leading to more difficult debugging.

                      Hence the efforts on randomizing locations at load time.

                      1. 0

                        The existing software that people are buying and using sometimes has long slowdowns on install, setup, load, and/or update. Building the Pascal-based GEMSOS from source would’ve taken a few seconds on today’s hardware. I think that’s pretty practical compared to the above. It’s the slow, overly-complicated toolchains that make things like Gentoo impractical. Better ones could increase the number of projects that can build from source.

                        Of course, it was just an option: they can have binaries if they want them. The SCM and transport security protect them if developers are non-malicious. The rest of the certification requirements attempted to address sloppy and malicious developers. Most things were covered. Load-time randomization can be another option.

                    3. 1

                      It looks like the builds are seeded, so it may be possible to reconstruct a pristine image given the seed.

                    1. 3

                      While I basically agree here, the problem is, if you are a new developer and you search the internet for how to build a menu for your website, basically all you get back are giant JS frameworks that take up gobs of space, instead of the few lines of CSS and HTML5 (without any JS) you actually need to build a menu. I don’t have a good solution to this, but I see it as a major contributor to why this craziness keeps growing in size.

                      I think it also doesn’t help that when we do get new things like webauthn, we only get a JS interface to use them, somewhat forcing our hand to require JS if you want nice things. That doesn’t mean we have to shove 500MB of JS at the user to use webauthn, but we can’t do it with just HTML and a form anymore.

                      1. 7

                        That’s because nobody should need to search the internet for how to make a menu. It’s a list of links. It’s something you learn in the first hour of a lecture on HTML, chapter 1 of a book on HTML.

                        You probably neither need nor want to use webauthn. Certainly not yet! It was published as a candidate recommendation this year. Give others a chance to do the experimenting. Web standards used to take 10 years to get implemented. Maybe don’t wait quite that long, but I’m sure you’ll do fine with an <input type="password"> for a few years yet.

                        1. 2

                          I was just using both as an example, I apologize for not being clear.

                          Yes, a menu is just a list of links, but most people want drop-down or hamburger menus now, and that requires either some CSS or some JS. Again, go looking and all the examples will be in JS, unless you search specifically for CSS examples.

                          This is true of just about everything you want to do in HTML/Web land, the JS examples are super easy to find, the CSS equivalents are hard to find, and plain HTML examples are super hard to find.

                          Anyways, I basically agree webauthn isn’t really ready for production use, but again, both of these were examples, and webauthn came up just because it’s something I’m currently playing with. You can find lots of new web tech that is essentially JS-only, despite not needing to be from a technical perspective. That is what I’m saying.

                          1. 2

                            I understand it’s just an example, but that’s my point really: it’s yet another example of something people way overcomplicate for no good reason. ‘It’s the first google result’ just isn’t good enough. It’s basic competency to actually know what things do and how to do things in HTML and CSS, and not accomplish everything by just blindly copy-pasting whatever the first google result for your task is.

                            Web authentication? Sure it’s just an example, but what it’s an example of is people reinventing the wheel. What ‘new’ web technology isn’t just a shitty Javascript version of older web technology that’s worked for decades?

                            1. 1

                              LOL, “overcomplicate for no good reason” seems to be the entire point of so many JS projects.

                              I think we agree more than we disagree.

                              New developers have to learn somehow, and existing sites and examples tend to be a very common way people learn. I agree web developers in general could probably use more learning around CSS and HTML, since there is a LOT there, and they aren’t as easy as people tend to think on the surface.

                              Well, webauthn has a good reason for existing. We all generally know that plain USER/PASS isn’t good enough anymore, especially when people use such crappy passwords and developers do such a crappy job of handling and storing authentication information. There are alternative solutions to FIDO U2F/webauthn, but none of them has had much, if any, success when it comes to easy-to-use, strong 2FA. The best we have is TOTP at this point, and it’s not nearly as strong cryptographically as U2F. I don’t know of any web technology that’s worked for decades that competes with it. Google has fallen in love with it, and as far as I know, requires it for every employee.

                              The closest would probably be mutual/client based TLS cert authentication, but it’s semi-broken in every browser and has been for decades, the UI is miserable, and nobody has ever had a very successful deployment work out long-term (that I’m aware of). I know there was a TLS Cert vendor that played with it, and Debian played with it some, both aimed at very technical audiences, and I don’t think anyone enjoyed it. I’d love to be proven wrong however!

                              Mutual TLS auth works better outside of the browser, things like PostgreSQL generally get it right, but it’s still far from widely deployed/easy to use, even after having decades of existence.

                              That said, I’m sure there are tons of examples of crappy wackiness invented in web browser land. I have to be honest, I don’t make a living in web development land, and try to avoid it for the most part, so I could be wrong on some of this.

                        2. 1

                          Maybe check out Dynamic Drive. I used to get CSS-based effects off it for DHTML sites in the early 2000s. I haven’t dug into the site to see if they still have lots of CSS vs. JavaScript, though. A quick glance at menus shows CSS menus are still in there. If there’s plenty of CSS left, you can give it to web developers to check out after teaching them the benefits of CSS over JavaScript.

                          I also noticed the first link on the left is an image optimizer. Using one is recommended in the article.

                          EDIT: The eFluid menu actually replaces the site’s menu during the demo. That’s neat.

                          1. 3

                            An interesting project that shows how modern layouts can be built without JavaScript is W3C.CSS.

                            /cc @milesrout @zie

                            1. 3

                              Thanks for the link. Those are nice demos. I’d rather they not have the editor, though, so I could easily see them full screen. They could have a separate link for the source or live editing, as is common elsewhere.

                        1. 7

                          GitHub has large businesses paying lots for their service. They don’t need to blast you with adverts and subscription reminders. A news website has very few paying customers anymore, so what else are they to do? You can give lectures about keeping the UI clean and using no JS all day, but that doesn’t bring money in.

                          The real issue is how these clickbait NYT articles keep getting to the top of HN when they usually have little to no substance.

                          1. 7

                            One can deliver ads without needing tens of MBs of data. I’m not a fan of ads either, but to say ads are required to be giant RAM- and bandwidth-sucking monsters is blatantly false. That’s not a requirement for ads; that’s just where we have gotten as the ad industry has infected the Internet, not a technical requirement for advertising.

                            But even with MBs of ad-infested insanity plastered everywhere, the rest of the site doesn’t also need to add to the craziness with MBs of junk for what is essentially a page of text.

                            1. 2

                              This is true; we could replicate the ad-infestedness of a website with a tiny fraction of the processing power. But I think it’s more complex than that. To understand how to fix the problem, we need to know how we got into it.

                              Who is making websites slow? Is it the site developers, the ad network developers, or the managers? It’s quite clear that most of the time it’s the ad network scripts that slow websites down, since the web jumps to warp speed with an ad blocker. But why do some websites (primarily news websites) have 1000 different ad network and tracking scripts? If you ask the site developers, they would probably tell you they hate it and wish they could remove most of them, but it’s the managers who request that tracking script #283 be added, and the devs don’t get much of a say in it. So posting an article on a developer-focused website telling them something they already agree with is next to useless.

                              This is the primary reason AMP makes websites fast: not because there is any tech magic that makes it fast, but because it lets developers say to managers, “We can’t do that. It’s impossible on AMP.”

                              There is also another case where big websites are slow and horrible to use on mobile. Twitter and Reddit are like this. I think here the reason is to make you use the mobile app, so telling them to make their websites faster will also do nothing, because they don’t want you using the website.

                          1. 3
                            • commit and push changes on a personal branch, tell someone it’s ready for merge.
                            • any other team member reviews and merges into stable and pushes to the stable repo.
                            • CI runs make deploy, which runs tests, builds, and then deploys.

                            Personal Projects:

                            • make deploy does the right thing.

                            We use a Makefile as our entry point into our projects; make is everywhere, and it’s mature, well-tested software whose warts are well known. make test will just work, regardless of the language or tools actually used to run the tests. E.g., for Rust code we use cargo to run tests, and for Python code we use pytest with hypothesis, but you don’t have to remember those details, you just remember make test.

                            1. 2

                              I’d always had a prejudice against Make, but I wrote a makefile the other day and it changed my mind. I still find the syntax and documentation messy, but it’s good at what it’s intended for. I plan on spreading its use at work.

                              1. 2

                                Good luck! I agree it’s not perfect, it definitely has warts, but it’s very mature software that’s not going away anytime soon. It will die about the time the C language dies, which won’t be in my lifetime.

                                With other build software, it’s anyone’s guess how long it will be maintained.

                            1. 4

                              I made one in the mid-’90s, with /etc/issue saying: root password is PASSWORD

                              OK, PASSWORD wasn’t the actual password (I forget now what it was), but it literally was in /etc/issue as plain ASCII text, so it was trivial to “root” the box! :)

                              It worked out really well for a few years, until script kiddies eventually found it and kept erasing everything, so I shut it down. Yes, the machine was accessible on the public internet with a DNS name. It was hosted at the local ISP I helped run.

                              It was a great little community. The hostname was never publicly posted as open to the public, but it spread through word of mouth, or via curious people who would see the hostname and go “huh, what does that machine do?” :)

                              Had maybe 100 users on it, before it died.

                              1. 3

                                One of the cool ideas I’ve run across (I think from Paul Graham’s On Lisp) is petrification of a program - stabilizing and formalizing the program past the quick and dirty stage. I know that type hints/gradual typing are helping this, but would love to see more ideas (besides @andyc’s Oil) that can transition shell/quick scripts to something with more types, error handling, composability (besides pipes).

                                1. 3

                                  There is the Oh shell: https://github.com/michaelmacinnis/oh

                                  1. 2

                                    Excellent point. I finished watching the BSDCan video (from the Lobsters discussion), but haven’t dug into playing with it yet.

                                1. 2

                                  I can’t decide if Let’s Encrypt is a godsend or a threat.

                                  On one hand, it lets you support HTTPS for free.
                                  On the other, they accumulate enormous power worldwide.

                                  1. 8

                                    Agreed, they are quickly becoming the only game in town worth playing when it comes to TLS certs. Luckily they are a non-profit, so they have more transparency than, say, Google, who took over our email.

                                    It’s awesome that we have easy, free TLS certs, but there shouldn’t be a single provider for such things.

                                    1. 3

                                      Is there anything preventing another (or another ten) free CAs from existing? Let’s Encrypt just showed everyone how, and their protocol isn’t a secret.

                                      1. 6

                                        OpenCA tried for a long time, and I think they have pretty much given up: https://www.openca.org/ They just exist in their own little bubble now.

                                        Basically nobody wants to certify you unless you are willing to pay through the nose and are considered friendly to the existing way of doing things. LE bought their way in, I’m sure, to get their cert cross-signed, which is how they managed it so “quickly”, and it still took YEARS.

                                        1. 1

                                          Have you ever tried to create a CA?

                                          1. 3

                                            I’ve created lots of CAs, trusted by at most 250 people. :)

                                            Of course it’s not easy to make a new generally-trusted CA — nor would I want it to be. It’s a big complicated expensive thing to do properly. But if you’re willing to do the work, and can arrange the funding, is anything stopping you? I don’t know that browser vendors are against the idea of multiple free CAs.

                                            1. 3

                                              Obviously I was not talking about the technical stuff.

                                              One of my previous bosses explored the matter. He already had the technical staff, but he wanted to become an official authority. This was around 2005.

                                              After some time (and a lot of money spent on legal consulting) he gave up.

                                              He said: “it’s easier to open a bank”.

                                              In a sense, it’s reasonable, as the European laws want to protect citizens from unsafe organisations.

                                              But, it’s definitely not a technical problem.

                                        2. 1

                                          Luckily they are a non-profit

                                          Linux Foundation is a 501(c)(6) organization, a business league that is not organized for profit and no part of the net earnings goes to the benefit of any private shareholder or individual.
                                          The fact that all shareholders benefit from its work without a direct economic gain doesn’t mean it has the public good at heart. Even less the public good of the whole world.

                                          It sounds a lot like another attempt to centralize the Internet, always around the same center.

                                          It’s awesome that we have easy, free TLS certs, but there shouldn’t be a single provider for such things.

                                          And such certificates protect people from a lot of relatively cheap attacks. That’s why I’m in doubt.

                                          Issuing TLS certificates should probably be a public service, free for each citizen of a state.

                                          1. 3

                                            Oh jeez. Thanks, I didn’t realize it was not a 501(c)(3). When LE was first coming around, they talked about being a non-profit, and I just assumed. That’s what happens when I assume.

                                            Proof, so we aren’t just taking @Shamar’s word for it:

                                            Linux Foundation Bylaws: https://www.linuxfoundation.org/bylaws/

                                            Section 2.1 states the 501(c)(6) designation with the IRS.

                                            My point stands: we do get more transparency this way than we would if they were a private for-profit company, but I agree it’s definitely not ideal.

                                            So you think local cities, counties, states, and countries should get into the TLS cert business? That would be interesting.

                                            1. 5

                                              It’s true the Linux Foundation isn’t a 501(c)(3), but the Linux Foundation doesn’t control Let’s Encrypt; the Internet Security Research Group does. And the ISRG is a 501(c)(3).

                                              So your initial post is correct and Shamar is mistaken.

                                              1. 1

                                                The Linux Foundation will provide general and administrative support services, as well as services related to fundraising, financial management, contract and vendor management, and human resources.

                                                This is from the page linked by @philpennock.

                                                I wonder what is left to do for the Let’s Encrypt staff! :-)

                                                I’m amused by how easily people forget that organisations are composed of people.

                                                What if the Linux Foundation decides to drop its support?
                                                No funds. No finance. No contracts. No human resources.
                                                Oh, and no hosting, too.

                                                But hey! I’m mistaken! ;-)

                                                1. 2

                                                  Unless you have inside information on the contract, saying LE depends on the Linux Foundation is pure speculation.

                                                  I can speculate too. Should the Linux Foundation withdraw support there are plenty of companies and organisations that have a vested interest in keeping LetsEncrypt afloat. They’ll be fine.

                                                  1. 1

                                                    Agreed.

                                                    Feel free to think that it’s a philanthropic endeavour!
                                                    I will continue to think it’s a political one.

                                                    The point (and, as I said, I cannot answer it yet) is whether the global risk of a single US organisation being able to break most HTTPS traffic worldwide is worth the benefit of free certificates.

                                                    1. 3

                                                      Any trusted CA can MITM, though, not just the one that issued the certificate. So the problem is (and always has been) much, much worse than that.

                                                      1. 1

                                                        Good point! I stand corrected. :-)

                                                        Still, note how it’s easier for the certificate issuer to go unnoticed.

                                            2. 4

                                              What’s the Linux Foundation got to do with it? Let’s Encrypt is run by the ISRG, the Internet Security Research Group, an organization from the IAB/IETF family if memory serves.

                                              They’re a 501(c)(3).

                                              1. 2

                                                LF provides hosting and support services, yes. Much as I pay AWS to run some things for me, which doesn’t lead to Amazon being in charge. https://letsencrypt.org/2015/04/09/isrg-lf-collaboration.html explains the connection.

                                                1. 1

                                                  Look at the home page, top-right.

                                                  1. 2

                                                    The Linux Foundation provides hosting, fundraising and other services. LetsEncrypt collaborates with them but is run by the ISRG:

                                                    Let’s Encrypt is a free, automated, and open certificate authority brought to you by the non-profit Internet Security Research Group (ISRG).

                                          1. 3

                                            Rails’ credentials/secrets file is the devil, so I recently integrated envkey.com with my app, and it was a breeze to do. It might be pricier than the AWS solution, but the capabilities I get are pretty nice.

                                            Being a super small startup, I preferred paying EnvKey some money over spending the dev effort to come up with something that would never be as good as the EnvKey solution.

                                            A few months in, and so far so good!

                                            1. 1

                                              Envkey.com looks interesting, and there’s definitely some merit to using a third party to store and encrypt your credentials over using AWS to encrypt credentials for AWS services.

                                              $20/month isn’t terrible, but it’s a bit pricey and per-seat pricing feels a little out of line with the value of the service they’re providing. But who am I to judge a SaaS that looks like it’s paying the rent?

                                              I worry about one thing: how do you securely deploy your envkey api key?

                                              This is the same problem with HashiCorp Vault or any external secret keeper. There’s a secret which unlocks all your other secrets…that makes it the most important secret. How are you injecting that secret into your application? The whole reason the AWS Parameter store is viable is that access to download and decrypt your secrets isn’t controlled by a key stored on the machine. It’s controlled by the EC2 or container’s instance role.

                                              1. 2

                                                HashiCorp Vault has many ways to authenticate and get a token: you can tie it to EC2, or you can auth against LDAP/GitHub, or use AppRole (where you can tie it to specific machines/applications), etc. But it is definitely a turtles-all-the-way-down approach. The goal of Vault is that you only have to worry about deploying the token, and Vault will then handle ALL of your secret/sensitive information for you, with the transit, DB, and other backends. So at least the problem becomes “manageable”, since it’s only the one token you have to get out there.
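                                                To make that concrete, here’s a minimal, untested sketch of the “app only holds the one token” model: fetching a secret over Vault’s HTTP API with libcurl. The address, mount path, and environment variable are made-up placeholders; only the X-Vault-Token header and the KV read endpoint shape come from Vault’s documented API.

                                                #include <stdio.h>
                                                #include <stdlib.h>
                                                #include <curl/curl.h>

                                                int main(void) {
                                                    /* hypothetical: the single deployed secret, injected via the environment */
                                                    const char *token = getenv("VAULT_TOKEN");
                                                    if (token == NULL) return 1;

                                                    char hdr[512];
                                                    snprintf(hdr, sizeof hdr, "X-Vault-Token: %s", token);

                                                    curl_global_init(CURL_GLOBAL_DEFAULT);
                                                    CURL *h = curl_easy_init();
                                                    struct curl_slist *hdrs = curl_slist_append(NULL, hdr);

                                                    /* KV v2 read; host and path are placeholders */
                                                    curl_easy_setopt(h, CURLOPT_URL,
                                                        "https://vault.example.internal:8200/v1/secret/data/myapp");
                                                    curl_easy_setopt(h, CURLOPT_HTTPHEADER, hdrs);

                                                    CURLcode rc = curl_easy_perform(h); /* JSON response lands on stdout */
                                                    if (rc != CURLE_OK)
                                                        fprintf(stderr, "vault read failed: %s\n", curl_easy_strerror(rc));

                                                    curl_slist_free_all(hdrs);
                                                    curl_easy_cleanup(h);
                                                    curl_global_cleanup();
                                                    return rc == CURLE_OK ? 0 : 1;
                                                }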

                                            1. 2

                                              I nearly posted this as an ‘ask’: Slack is not good for $WORK’s use case because it does not have an on-premise option. What on-premise alternatives are people using/would you recommend?

                                              1. 4

                                                 I’ve used Mattermost before, which AFAIK has an on-prem version, but just as a user, not setup or admin, so I can’t speak to that end.

                                                1. 6

                                                  I’ve heard rumblings about Zulip being a decent option too. I haven’t used it myself though.

                                                  1. 2

                                                    Same, actually. It does look very interesting; I’d be highly interested to hear whether anyone has any experience with it.

                                                    1. 1

                                                      Zulip looks pretty solid, thanks for mentioning it. We may give it a try…

                                                    2. 2

                                                      We’ve used Mattermost for a few years now; it’s pretty easy to set up and maintain: you basically just replace the Go binary every 30 days with the new version. We recently moved to the version integrated with GitLab, and now GitLab handles it for us. It’s even easier now, since GitLab is just a system package you upgrade.

                                                      1. 2

                                                        A lot of people have said Mattermost, might be a good drop-in replacement. According to the orange site they’re considering dropping a “welcome from Hipchat” introductory offer, which is probably a smart move.

                                                        1. 2

                                                          IIRC Mattermost is open core. I’ve heard good things about Zulip. Personally, I like Matrix, which federates and bridges.

                                                        2. 3

                                                          Matrix is fairly nice to use. I had some issues hosting it though.

                                                        1. 9

                                                          Many of the author’s experiences speaking with senior government officials match my own.

                                                          However, there’s one element that I think is very easily lost in this conversation, and which I want to highlight: there is no group I spend more time trying to convince of the importance of security than other software engineers.

                                                          Software engineers are the only group of people I’ve ever had push back when I say we desperately need to move to memory-safe programming languages. All manner of non-engineers, when I’ve explained the damage wrought by C/C++, and how nearly every mass vulnerability they know about has a shared root cause, generally understand why this is an important problem and want to discuss ideas about how to resolve it.

                                                          Engineers complain to me that rewriting things is hard, and besides if you’re disciplined in writing C and use sanitizers and fuzzers you’ll be ok. Rust isn’t ergonomic enough, and we’ve got a really good hiring pipeline for C++ engineers.

                                                          If we want to build software safety into everything we do, we need to get engineers on board, because they’re the obstacle.

                                                          1. 11

                                                            People don’t even use sanitizers and fuzzers, so I’m not sure why you would expect them to rewrite in Rust. Turning those on is literally 1000x less effort than a rewrite.

                                                            As far as I can tell, CloudFlare’s CloudBleed bug would have been found if they had compiled with ASAN and fed about 100 HTML pages into it. You don’t even have to install anything; it’s built right into your compiler! (both gcc and Clang)
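                                                            For anyone who hasn’t tried it: a toy example (hypothetical file, built with cc -g -fsanitize=address overflow.c) of the kind of bug ASAN traps at the first bad byte:

                                                            #include <stdlib.h>
                                                            #include <string.h>

                                                            int main(void) {
                                                                char *buf = malloc(8);
                                                                memcpy(buf, "AAAAAAAAA", 9); /* 9 bytes into an 8-byte allocation */
                                                                free(buf);
                                                                return 0;
                                                            }

                                                            A normal build may run this silently; the ASAN build aborts at the memcpy with a heap-buffer-overflow report pointing at that exact line.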

                                                            I also don’t agree that “nearly every mass vulnerability has a shared root cause”. For example, you could have written ShellShock in Rust, Python, or any other language. It’s basically a “self shell-code injection” and has very little to do with memory safety (despite a number of people being confused by this).

                                                            The core problem is the sheer complexity and number of lines of unaudited code, and the fact that core software like bash has exactly one maintainer. There are actually too many people trying to learn Rust and too few people maintaining software that everybody actually uses.

                                                            In some sense, Rust can make things worse, because it leads to more source code. We already have memory-safe languages: Python, Ruby, JavaScript, Java, C#, Erlang, Clojure, OCaml, etc.

                                                            Software engineers should definitely spend more time on security, and need to be educated more. But the jump to Rust is a non-sequitur. Rust is great for kernels where the above languages don’t work, and where C and C++ are too unsafe. But kernels are only a part of the software landscape, and they don’t contain the majority of security bugs.

                                                            I would guess that most data breaches these days have nothing to do with memory safety, and have more to do with bugs similar to the ones in the OWASP top 10 (e.g. XSS, etc.)

                                                            https://www.owasp.org/images/7/72/OWASP_Top_10-2017_%28en%29.pdf.pdf


                                                            Edit: as another example, Mirai has nothing to do with memory safety:

                                                            https://en.wikipedia.org/wiki/Mirai_(malware)

                                                            All it does is try default passwords, which gives you some idea of where the “bar” is. Rewriting software in Rust has nothing to do with that, and will actually hurt, because it takes effort and mindshare away from solutions with a better cost/benefit ratio. And don’t get me wrong, I think Rust has its uses. I just see people overstating them quite frequently, with the “why don’t more people get Rust?” type of attitude.

                                                            1. 2

                                                              There were languages like Opa that tried to address what happened on the web app side. They got ignored, just like people ignore safety in C. Apathy is the greatest enemy of security. It’s another reason we’re pushing the memory-safe, higher-level languages, though, with libraries for the stuff likely to be security-critical. The apathetic programmers do less damage on average that way. Things that would have been code injections become denial of service. That’s an improvement.

                                                            2. 2

                                                              Not only software engineers: almost the entire IT industry has buried its head in the sand and is trying desperately hard to hide from the problem, because “security is too hard”. It’s like pulling teeth to get people to do even minimal upgrades. I recently had a software vendor refusing to support anything other than TLS 1.0. After many exchanges back and forth, including an article from Microsoft (and basically every other sane person) saying they were dropping all support for older TLS protocols because of their insecurity, they finally said, OK, we will look into it. I’m sure we all have stories like this.

                                                              If you can’t even be bothered to take the minimum of steps to upgrade your security stack after more than a decade (TLS 1.0 was released in 1999, and TLS 1.2 is almost exactly a decade old now) because it’s “too hard”, trying to get people to move off memory-unsafe languages like C/C++ is a non-starter.

                                                              But I agree with you, and the author.

                                                              1. 2

                                                                I would like to use TLS 1.3 for an existing product. It’s in C and Lua. The current system is network-driven, using select() (or poll() or epoll(), depending upon the platform). The trouble I’m having is finding a library that is easy, or even a bit complicated but sane, to use. The evented nature means I am notified when data comes in, and I want to feed this to the TLS library instead of having the TLS library manage the sockets for me. But the documentation is dense, and the tutorials only cover blocking calls, and that’s when they’re readable! Couple this with the whole “don’t you even #$@#$# think of implementing crypto” that is screamed from the rooftops, and no wonder software engineers steer away from this crap.

                                                                I want a crypto library that just handles the crypto stuff. Don’t do the network, I already have a framework for that. I just need a way to feed data into it, and get data out of it, and tell me if the certificate is good or not. That’s all I’m looking for.

                                                                1. 2

                                                                  OpenBSD’s libtls.

                                                                  1. 2

                                                                    TLS 1.3 is not quite ready for production use, unless you are an early adopter like Cloudflare. Easy-to-use APIs that are well-reviewed are not there yet.

                                                                    Crypto libraries: OpenBSD’s libtls like @kristapsdz mentioned, or libsodium/NaCl, or OpenSSL. If it’s just for your internal connections and you don’t actually need TLS, talking libsodium or NaCl for an encrypted stream of bytes is probably your best bet, using XSalsa20+Poly1305. See: https://latacora.singles/2018/04/03/cryptographic-right-answers.html
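                                                                    A minimal sketch of that libsodium route (crypto_secretbox is the XSalsa20+Poly1305 construction; the keygen call here is just a stand-in, since in reality both ends need to share the key somehow):

                                                                    #include <sodium.h>

                                                                    int main(void) {
                                                                        if (sodium_init() < 0) return 1;

                                                                        unsigned char key[crypto_secretbox_KEYBYTES];
                                                                        unsigned char nonce[crypto_secretbox_NONCEBYTES];
                                                                        const unsigned char msg[] = "internal traffic";
                                                                        unsigned char ct[crypto_secretbox_MACBYTES + sizeof msg];
                                                                        unsigned char pt[sizeof msg];

                                                                        crypto_secretbox_keygen(key);         /* stand-in for a pre-shared key */
                                                                        randombytes_buf(nonce, sizeof nonce); /* must be unique per message */

                                                                        /* encrypt + authenticate in one call */
                                                                        crypto_secretbox_easy(ct, msg, sizeof msg, nonce, key);

                                                                        /* returns -1 if the ciphertext was tampered with */
                                                                        if (crypto_secretbox_open_easy(pt, ct, sizeof ct, nonce, key) != 0)
                                                                            return 1;
                                                                        return 0;
                                                                    }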

                                                                    TLS is a complicated protocol (TLS 1.3 reduces a LOT of complexity, but it’s still very complicated).

                                                                    If you are deploying to Apple, Microsoft, or OpenBSD platforms, you should just tie into the OS-provided services that provide TLS, and let them handle all of that for you (including the socket). Apple and MS platforms have high-level APIs that will do all the security crap for you. OpenBSD has libtls.

                                                                    On other platforms (Linux, etc.), you should probably just use OpenSSL. Yes, it’s a fairly gross API, but it’s pretty well-maintained nowadays (5 years ago, it would not have qualified as well-maintained). The other option is libsodium/NaCl.

                                                                    1. 1

                                                                      Okay, fine. Are there any crypto libraries that are easy to use for whatever is current today? My problem is: a company that is providing us information today via DNS has been invaded by a bunch of hipster developers [1] who drank the REST Kool-Aid™, so I need a way to make an HTTPS call in an event-driven architecture and not blow our Super Scary SLAs with the Monopolistic Phone Company (which would cause the all-important money to flow the other way), so your advice to let OS-provided TLS services control the socket is a non-starter.

                                                                      And for the record, the stuff I write is deployed to Solaris. For reasons that exceed my pay grade.

                                                                      So I read the Cryptographic Right Answers you linked to and … okay. That didn’t help me in the slightest.

                                                                      The program I’m working on is in C, and not written by me (so it’s in “maintenance mode”). It works, and rewriting it from scratch is probably also a non-starter.

                                                                      Are you getting a sense of the uphill battle this is?

                                                                      [1] Forgive my snarky demeanor. I am not happy about this.

                                                                      Edit: further clarification on what I have to work with.

                                                                      1. 1

                                                                        I get it, it sucks sometimes. I’m guessing you are not currently doing any TLS at all? So you can’t just upgrade the libraries you are currently using for TLS, whatever they are.

                                                                        In my vendor example, the vendor already implemented TLS (1.0) and then promptly stopped; they have never bothered to upgrade to newer versions of TLS. I don’t know the details of their implementation, obviously, since it’s closed-source; but unless they went crazy and wrote their own crypto code, upgrading their crypto libraries is probably all that’s required. I’m not saying it’s necessarily easy to do that, but it’s something everyone should do at least once a decade, just to keep the code from rotting a terrible death anyway. TLS 1.2 becomes a decade-old standard next month.

                                                                        I don’t work on Solaris platforms (and haven’t in at least a decade, so you are probably better off checking with other Solaris people). Oracle might have a TLS library these days, I have no clue. I tend to avoid Oracle land whenever possible. I’m sorry you have to play in their sandbox.

                                                                        I agree the crypto right-answers page isn’t useful for you, since you just want TLS; its target is developers who need more than TLS. I used it here mostly as justification for why I recommended XSalsa20+Poly1305 for symmetric encryption. Again, you know you need TLS, so it’s a non-useful document for you at this point.

                                                                        Event-driven IO is possible with OpenSSL, but it’s not super easy; see: https://www.openssl.org/docs/faq.html#PROG11. Then again, nothing around event-driven IO is super easy. Haproxy and Nginx both manage to do event-driven TLS on top of OpenSSL, and both are open source, so you have working code you can go examine. Plus it might give you access to developers who have done event-driven IO with TLS. I haven’t ever written that implementation, so I can’t help with those specifics.
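                                                                        The usual trick, as I understand it, is to detach OpenSSL from the socket entirely with memory BIOs, so your framework keeps ownership of the fd. A rough, untested sketch; send_on_socket() stands in for your framework’s write path and is not a real API:

                                                                        #include <openssl/ssl.h>

                                                                        /* hypothetical hook into your framework's write path */
                                                                        extern void send_on_socket(const void *buf, int len);

                                                                        SSL *tls_new_client(SSL_CTX *ctx) {
                                                                            SSL *ssl = SSL_new(ctx);
                                                                            /* memory BIOs instead of a socket: we shovel bytes in and out */
                                                                            SSL_set_bio(ssl, BIO_new(BIO_s_mem()), BIO_new(BIO_s_mem()));
                                                                            SSL_set_connect_state(ssl);
                                                                            return ssl;
                                                                        }

                                                                        void tls_feed(SSL *ssl, const char *netbuf, int n) {
                                                                            char app[4096], out[4096];
                                                                            int r;

                                                                            /* hand OpenSSL the bytes the event loop just read off the wire */
                                                                            BIO_write(SSL_get_rbio(ssl), netbuf, n);

                                                                            r = SSL_read(ssl, app, sizeof app); /* decrypted application data */
                                                                            if (r <= 0 && SSL_get_error(ssl, r) == SSL_ERROR_WANT_READ)
                                                                                ; /* not enough TLS data yet; just keep polling */

                                                                            /* flush whatever TLS records OpenSSL wants sent (handshake, etc.) */
                                                                            while ((r = BIO_read(SSL_get_wbio(ssl), out, sizeof out)) > 0)
                                                                                send_on_socket(out, r);
                                                                        }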

                                                                        OpenSSL is working on making their APIs easier to use. It’s a long, slow haul, but it’s definitely a known problem, and they are working on it.

                                                                        As for letting the OS do the work for you, you are correct: there are definitely use cases where it won’t work, and it seems you fit the bill. For most applications, letting the OS do it for you is generally the best answer, especially around crypto, which can be hard to get right; of course, it only applies to the platforms that offer such things (Apple, MS, etc.), which is why I started there ;)

                                                                        Anyways, good luck! Sorry I can’t just point to a nice easy example, for you. Maybe someone else around here can.

                                                                        1. 1

                                                                          I’m not even using TCP! This is all driven with UDP. TCP complicates things but is manageable. Adding a crap API between TCP and my application? Yeah, I can see why no one is lining up to secure their code.

                                                                          1. 1

                                                                            I think there is a communication issue here.

                                                                            The vendor you are connecting with over HTTPS supports UDP packets on a REST API interface? Really? Crazier things have happened, I guess.

                                                                            I think what you are saying is you are doing DNS over UDP for now, but are being forced into HTTPS over TCP?

                                                                            DNS over UDP is very far away from an HTTPS REST API.

                                                                            Anyways, for being an HTTPS client against an HTTPS REST API over TCP, you have two decent options:

                                                                            Event driven/async: use libevent, example code: https://github.com/libevent/libevent/blob/master/sample/https-client.c

                                                                            But most people will be boring and use something like libcurl (https://curl.haxx.se/docs/features.html) with blocking I/O. If they have enough network load, they will set up a pool of workers.
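                                                                            The boring version really is about this short (untested sketch; the URL is made up, and libcurl verifies the server certificate by default):

                                                                            #include <stdio.h>
                                                                            #include <curl/curl.h>

                                                                            int main(void) {
                                                                                curl_global_init(CURL_GLOBAL_DEFAULT);
                                                                                CURL *h = curl_easy_init();

                                                                                /* placeholder for the vendor's number-lookup endpoint */
                                                                                curl_easy_setopt(h, CURLOPT_URL,
                                                                                    "https://api.example.com/naptr/5615555678");

                                                                                CURLcode rc = curl_easy_perform(h); /* blocking; body goes to stdout */
                                                                                if (rc != CURLE_OK)
                                                                                    fprintf(stderr, "lookup failed: %s\n", curl_easy_strerror(rc));

                                                                                curl_easy_cleanup(h);
                                                                                curl_global_cleanup();
                                                                                return rc == CURLE_OK ? 0 : 1;
                                                                            }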

                                                                            1. 2

                                                                              Right now, we’re looking up NAPTR records over DNS (RFC-3401 to RFC-3404). The summary is that one can query name information for a given phone number (so 561-555-5678 is ACME Corp.). The vendor wants to switch to a REST API and return JSON. Normally I would roll my eyes at this, but the context I’m working in is more real-time: Alice is calling Bob, and we need to look up the information as the call is being placed! We have a hard deadline with the Monopolistic Phone Company to provide this information [1].

                                                                              We don’t use libevent, but I’ll look at the code anyway and try to make heads or tails of it.

                                                                              [1] Why are we querying a vendor for this? Well, it used to be in-house, but now “we lease this back from the company we sold it to - that way it comes under the monthly current budget and not the capital account” (at least, that’s my rationale for it).

                                                                              1. 2

                                                                                Tell me how it goes. FWIW, you might want to take a quick look at mbed TLS. Sure, it wants to wrap a socket fd in its own context and use read/write on it, but you can still poll that fd yourself and just call the relevant mbedtls function when data comes in. It also supports non-blocking operation.

                                                                                https://tls.mbed.org/api/net__sockets_8h.html#a2ee4acdc24ef78c9acf5068a423b8c30 https://tls.mbed.org/api/net__sockets_8h.html#a03af351ec420bbeb5e91357abcfb3663

                                                                                https://tls.mbed.org/api/structmbedtls__net__context.html

                                                                                https://tls.mbed.org/kb/how-to/mbedtls-tutorial (non-blocking I/O isn’t covered in the tutorial, but it doesn’t change things much)

                                                                                I’ve no experience with UDP (yet; soon I will), but if you’re doing that, well, mbedtls should handle DTLS too: https://tls.mbed.org/kb/how-to/dtls-tutorial (there’s even a note relevant to event-based I/O)

                                                                                We use mbedtls at work in a heavily event based system with libev. Sorry, no war stories yet, I only got the job a few weeks ago.
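                                                                                To sketch the shape of it (untested, from memory against the mbedtls 2.x API; real code would load a CA chain instead of disabling verification, and would return to the event loop instead of spinning on the handshake):

                                                                                #include <mbedtls/net_sockets.h>
                                                                                #include <mbedtls/ssl.h>
                                                                                #include <mbedtls/entropy.h>
                                                                                #include <mbedtls/ctr_drbg.h>

                                                                                int main(void) {
                                                                                    mbedtls_net_context net;
                                                                                    mbedtls_ssl_context ssl;
                                                                                    mbedtls_ssl_config conf;
                                                                                    mbedtls_entropy_context entropy;
                                                                                    mbedtls_ctr_drbg_context drbg;

                                                                                    mbedtls_net_init(&net);
                                                                                    mbedtls_ssl_init(&ssl);
                                                                                    mbedtls_ssl_config_init(&conf);
                                                                                    mbedtls_entropy_init(&entropy);
                                                                                    mbedtls_ctr_drbg_init(&drbg);

                                                                                    mbedtls_ctr_drbg_seed(&drbg, mbedtls_entropy_func, &entropy, NULL, 0);
                                                                                    mbedtls_net_connect(&net, "example.com", "443", MBEDTLS_NET_PROTO_TCP);
                                                                                    mbedtls_net_set_nonblock(&net); /* the fd stays ours to poll */

                                                                                    mbedtls_ssl_config_defaults(&conf, MBEDTLS_SSL_IS_CLIENT,
                                                                                        MBEDTLS_SSL_TRANSPORT_STREAM, MBEDTLS_SSL_PRESET_DEFAULT);
                                                                                    mbedtls_ssl_conf_rng(&conf, mbedtls_ctr_drbg_random, &drbg);
                                                                                    mbedtls_ssl_conf_authmode(&conf, MBEDTLS_SSL_VERIFY_NONE); /* demo only! */

                                                                                    mbedtls_ssl_setup(&ssl, &conf);
                                                                                    mbedtls_ssl_set_hostname(&ssl, "example.com");
                                                                                    mbedtls_ssl_set_bio(&ssl, &net, mbedtls_net_send, mbedtls_net_recv, NULL);

                                                                                    /* in an event loop you'd poll net.fd and call this again whenever it
                                                                                     * returns WANT_READ/WANT_WRITE; same pattern for ssl_read/ssl_write */
                                                                                    int rc;
                                                                                    while ((rc = mbedtls_ssl_handshake(&ssl)) != 0)
                                                                                        if (rc != MBEDTLS_ERR_SSL_WANT_READ && rc != MBEDTLS_ERR_SSL_WANT_WRITE)
                                                                                            break;
                                                                                    return rc != 0;
                                                                                }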

                                                                                1. 1

                                                                                  Right, let’s add MORE latency for a real-time-ish system. Always a great idea! :)

                                                                1. 2

                                                                  Don’t all VCSs have tools to modify history? I think svnadmin does: http://oliverguenther.de/2016/01/rewriting-subversion-history/ (assuming there aren’t any blockchain-based VCSs. I daren’t look)

                                                                  If the distinction being drawn is ‘admin’ vs ‘user’ tooling, I guess - like workflow - git punts that to the surrounding culture and environment (as it does “which version is the ‘master’” - which is the same feature/bug of any DVCS).

                                                                  I admit I like being able to say “v234” but really, what that means is “v234 of the (single) upstream repo which can change any time the upstream repo manager runs svnadmin”.

                                                                  There’s nothing to stop github putting a sequential “v1, v2, v3, …” on commits to master or otherwise blessing some workflow.

                                                                  I think the differences aren’t so much about features, capabilities, and tooling as about culture.

                                                                  1. 2

                                                                    git is a merkle-tree-based system, which is what I assume you meant by “blockchain-based” in this context.

                                                                    1. 1

                                                                      Yes it is, but no - that’s not what I meant. I mean that I expect every VCS to be able to rewrite history since the data files are under control of the admin. git can do it, svn can do it. You can edit RCS files by hand if you want to (unsure if there is tooling to do it).

                                                                      I.e., Linus can rewrite his git history. It will be out of sync with other people, but that is then a social issue, not a technical one (I admit this is a fine point).

                                                                      The only time you can’t rewrite history is in the “public immutable” world of blockchain - since the data files aren’t under your control. I don’t know if someone has built a vcs like that and my comment was really just a side swipe at blockchain hype.

                                                                      1. 1

                                                                        you can if you get 51%

                                                                        1. 1

                                                                          https://github.com/clehner/git-ssb not exactly blockchain, but immutable history just the same.

                                                                    1. 6

                                                                      Are you confident that every single user of your systems is going to out-of-band verify that that is the correct host key?

                                                                      If your production infrastructure has not solved this problem already, you should fix your infrastructure. There are multiple ways.

                                                                      1. Use OpenSSH with an internal CA (see the sketch below)
                                                                      2. Automate collection of server public ssh fingerprints and deployment of known_hosts files to all systems and clients (we do it via LDAP + other glue)
                                                                      3. Utilize a third party tool that can do this for you (e.g., krypt.co)

                                                                      Your users should never see the message “the authenticity of (host) cannot be established”.
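
                                                                      As a sketch of option 1 (this is standard ssh-keygen and sshd_config usage, but the file names and the example.com domain are made up):

                                                                        # One-time: create the host CA keypair.
                                                                        ssh-keygen -t ed25519 -f host_ca

                                                                        # Per server: sign its host key, producing ssh_host_ed25519_key-cert.pub.
                                                                        ssh-keygen -s host_ca -I web01 -h -n web01.example.com \
                                                                            /etc/ssh/ssh_host_ed25519_key.pub

                                                                        # In each server's sshd_config:
                                                                        #   HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub

                                                                        # One known_hosts line on every client trusts any host the CA signed:
                                                                        #   @cert-authority *.example.com ssh-ed25519 AAAA...contents-of-host_ca.pub

                                                                      Clients that trust the CA verify the host certificate instead of prompting, so the TOFU message never appears.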

                                                                      1. 4

                                                                        Makes me wonder how Oxy actually authenticates hosts. The author hates on TOFU but mentions no alternatives AFAICS, not even those available in OpenSSH?

                                                                        1. 3

                                                                          It only authenticates keys, and it makes key management YOUR problem. See https://github.com/oxy-secure/oxy/blob/master/protocol.txt for more details.

                                                                          I.e., you have to copy keys over from the server to the client before the client can connect (and possibly the other way, from the client to the server, depending on where you generate them).

                                                                          1. 1

                                                                            Key management is already your problem.

                                                                            ssh’s default simply lets you pretend that it isn’t.

                                                                            1. 2

                                                                              Very true. I didn’t mean to imply otherwise.

                                                                      1. 3

                                                                        Is there a comprehensive and/or up-to-date set of recommendations for simple, static HTTP servers anywhere?

                                                                        After years of trying to lock down Apache, PHP, CMSs, etc. and keep up to date on vulnerabilities and patches, I opted to switch to a static site and a simple HTTP server to reduce my attack surface and the possibility of misconfiguration.

                                                                        thttpd seems to be the classic option, but I’m a little wary of it due to past security issues and an apparent lack of maintenance (which would be fine if it were “done”, but the security issues make that less credible). I’m currently using darkhttpd after seeing it recommended on http://suckless.org/rocks

                                                                        Edit: I upvoted the third-party hosting suggestions (S3, CloudFlare, etc.) since that’s clearly the most practical; for personal stuff I still prefer self-hosted FOSS though :)

                                                                        1. 4

                                                                          If all you need is static HTTP, you don’t have to host it yourself. I host my blog in Amazon S3 (because I wanted to add SSL and GitHub didn’t support that last year) and for the last 13 months it’s cost me about $0.91/month, and about two-thirds of that is Route 53 :-)

                                                                          AWS gives you free SSL certificates, which was one of the main drivers for me to go with that approach.

                                                                          1. 3

                                                                            I use S3 / CloudFront for static HTTP content. It’s idiot proof (important for idiots like me!), highly reliable, and I spend less every year on it than I spend on a cup of coffee.

                                                                            The only real security risk I worried about was that someone could DDoS the site and run up my bill, but I deployed a CloudWatch alarm tied to a Lambda to monitor this. It’s never fired. I think at my worst month I used 3% of my outbound budget :)

                                                                            1. 1

                                                                              I’ve always wondered why AWS doesn’t provide a spending limit feature… it can’t be due to a technical reason, right? I know their service is supposed to be more complex, but even the cheapest VPS provider gives you this option, often enabled by default. I can only conclude they decided they don’t want that kind of customer.

                                                                              1. 1

                                                                                I also worried about the risk of “DDoS causing unexpected cost” when I was looking for a place to host my private DNS zones. To me it appeared that the free Cloudflare plan (https://www.cloudflare.com/plans/) was the best fit (basically free, unmetered service).

                                                                                Would using that same free plan be a safer choice than Cloudfront from a cost perspective?

                                                                              2. 3

                                                                                You’d be hard pressed to go wrong with httpd from the OpenBSD project. It’s quite stable and has been in OpenBSD base for a while now. Its lack of features definitely keeps it in the simple category (sample config below). :)

                                                                                There is also the NGINX stable branch. It’s not as simple as OpenBSD’s option, but it’s stable, maintained, and well hardened by being very popular.
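
                                                                                To give a sense of how simple the OpenBSD option is, a complete httpd.conf for a static site can be about this small (domain and document root invented for illustration; the root is relative to httpd’s /var/www chroot):

                                                                                  server "www.example.com" {
                                                                                      listen on * port 80
                                                                                      root "/htdocs/www.example.com"
                                                                                  }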

                                                                                1. 3

                                                                                  In the hurricane architecture, they used Nginx (dynamic caching) -> Varnish (static caching) -> HAProxy (crypto) -> optional Cloudflare for acceleration/DDoS protection. Looked like a nice default for something that needed a balance of flexibility, security, and performance. Depending on one’s needs, Nginx might get swapped for a simpler server, but it gets lots of security review.

                                                                                  I’ll also note for OP this list of web servers.

                                                                                2. 1

                                                                                  Check out this.

                                                                                  1. 1

                                                                                    Yeah, I also like this similar list, but neither provides value judgements about, e.g., whether it’s sane to leave such things exposed to the Internet unattended for many years (except for OS security updates).

                                                                                1. 8

                                                                                  This is getting more and more common since GDPR. A way to “bypass” these kinds of tactics is to enable GDPR/cookie-consent blocking in an ad blocker (at least this is possible with uBlock Origin). It automatically hides these annoying banners/popups without forcing you to opt in.

                                                                                  1. 3

                                                                                    It’s even more fun when you consider how many of these websites then set the cookies that you’d actually have to opt in to…

                                                                                    1. 1

                                                                                      How do you do this with uBlock Origin? I didn’t see a setting about GDPR or cookie/consent blocking.

                                                                                      1. 12

                                                                                        If you go in uBlock Origin preferences → Filter lists, under “Annoyances” there’s “Fanboy’s Cookiemonster List” which hides “we use cookies” banners (and apparently will also hide GDPR banners).

                                                                                        1. 1

                                                                                          <3 THANKS!

                                                                                    1. 1

                                                                                      Is there a graph showing how well it holds up?

                                                                                      1. 1

                                                                                        No, but I can tell you right now it doesn’t.

                                                                                        1. 1

                                                                                          Not the specifics, but the overarching ideas pretty much hold up, I’d say.

                                                                                          • Open Systems: Sure, Oracle hasn’t died yet, but even MS is getting on the open bandwagon to some degree.
                                                                                          • Software Distribution Channels: well, OK, the Internet ate the CD-ROM, but retail software in a store is 99% dead; he called that.
                                                                                          • Kernel/base source code explosion: Drivers def. take up way too much room in the kernel :)
                                                                                          • Multiprocessor: def. true
                                                                                          • Networking: well, OK, three directions: Internet/WAN, wireless (LAN), and high-speed LAN (fiber and friends)
                                                                                          • Java: pretty much true, minus the systems programming part.
                                                                                          • Nomadic devices: smartphones totally made this true.
                                                                                          1. 1

                                                                                            I was mainly referring to the title claim of “2^(Year-1984) Million Instructions per Second” because OP was asking for a graph.

                                                                                      1. 3

                                                                                        I like the truly p2p aspect here, but it’s a big red flag that SSB seems to refer to a specific node.js implementation and not to a wider protocol with multiple implementations. I did a bit of digging and couldn’t find anything, but maybe I missed something?

                                                                                        1. 4

                                                                                          The protocol is defined: https://ssbc.github.io/scuttlebutt-protocol-guide/

                                                                                          rust client: https://crates.io/crates/ssb-client

                                                                                      Other versions (Go, C, etc.) are being worked on as well.

                                                                                          1. 3

                                                                                            A pity the signing / marshalling algorithm is such a PITA to implement (the signature must be the last key/value pair in the JSON document, and it signs the bytes of the document up to that point).

                                                                                            1. 2

                                                                                              and order has to be maintained. Indeed. Not sure why they designed it that way.

                                                                                              1. 1

                                                                                                At least being able to produce a known canonical order is important for signing. And the signature cannot be part of that which it signs.

                                                                                                1. 1

                                                                                                  Oh yeah - the canonical form is nonexistent, you just sign whatever bytes you’ve written so far.

                                                                                              If you were signing a message body (e.g. a JSON string value) it would be different - but as it stands, relays have to implement whitespace-compatible JSON marshalling that matches the sender’s.
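
                                                                                              A toy illustration of what that means in practice (this is not the real SSB code; sign_base64() is a made-up stand-in for an ed25519 signer):

                                                                                                #include <stdio.h>

                                                                                                /* Hypothetical signer: real code would ed25519-sign msg and base64 it. */
                                                                                                static const char *sign_base64(const char *msg, int len)
                                                                                                {
                                                                                                    (void)msg; (void)len;
                                                                                                    return "FAKE_SIGNATURE==";
                                                                                                }

                                                                                                int main(void)
                                                                                                {
                                                                                                    char doc[512];
                                                                                                    /* Serialize everything except the signature, with fixed key order
                                                                                                     * and whitespace; these exact bytes are what gets signed. */
                                                                                                    int n = snprintf(doc, sizeof doc,
                                                                                                        "{\n  \"author\": \"@abc.ed25519\",\n  \"content\": {\"type\": \"post\"}");
                                                                                                    const char *sig = sign_base64(doc, n);
                                                                                                    /* Append the signature as the last key/value pair. A relay that
                                                                                                     * re-serializes with different whitespace changes the signed bytes,
                                                                                                     * and the signature no longer verifies. */
                                                                                                    snprintf(doc + n, sizeof doc - n, ",\n  \"signature\": \"%s\"\n}", sig);
                                                                                                    puts(doc);
                                                                                                    return 0;
                                                                                                }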

                                                                                                  1. 1

                                                                                                    duh! sorry, you are right! asleep at the wheel apparently when I wrote that :)

                                                                                              2. 2

                                                                                                Having alternate clients is a good start, but is it still true that there’s only one server implementation?

                                                                                                1. 1

                                                                                                  I believe someone is working on a Go implementation, but I don’t know where the code may be, and I’m not on my SSB machine to try and find it. But there is definitely only one that’s usable at the moment, as far as I’m aware…

                                                                                                  And I agree, it’s a good start. It’s not smartphone/mobile-ready yet either, but work is happening on that front as well.

                                                                                            1. 3

                                                                                              This link may be an easier one to understand for people not familiar with SSB: https://git.scuttlebot.io/%25RPKzL382v2fAia5HuDNHD5kkFdlP7bGvXQApSXqOBwc%3D.sha256

                                                                                              It talks about how to move your code from GitHub into a decentralized SSB repo. Even if you don’t want to actually do the conversion, it explains how it all works.