1. 3
    • commit and push changes on a personal branch, then tell someone it’s ready for merge.
    • any other team member reviews, merges into stable, and pushes to the stable repo.
    • CI runs make deploy, which runs the tests, builds, and then deploys.

    Personal Projects:

    • make deploy does the right thing.

    We use a Makefile as the entry point into our projects: make is everywhere, and it’s mature, well-tested software whose warts are well known. make test will just work, regardless of the language or tools actually used to run the tests. E.g. for Rust code we use cargo to run tests, and for Python code we use pytest with hypothesis, but you don’t have to remember those details; you just remember make test.
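    The convention above can be sketched in a few lines (the recipes are illustrative; only the target names are the contract):

    ```make
    # Uniform entry points; the recipes hide the per-language tooling.
    .PHONY: test build deploy

    test:
    	cargo test        # a Python project would instead run: pytest

    build: test
    	cargo build --release

    deploy: build
    	./scripts/deploy.sh   # hypothetical deploy script
    ```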

    1. 2

      Always had a prejudice against Make, but I wrote a makefile the other day and it changed my mind. I still find the syntax and documentation messy, but it’s good for what it’s intended for. I plan on spreading its use at work.

      1. 2

        Good luck! I agree it’s not perfect, it definitely has warts, but it’s very mature software that’s not going away anytime soon. It will die about the time the C language dies, which won’t be in my lifetime.

        Other build software, it’s anyone’s guess how long it will be maintained.

    1. 4

      I made one in the mid-’90s, with the /etc/issue saying: root password is PASSWORD

      OK, PASSWORD wasn’t the actual password (I forget now what it was), but it literally was in /etc/issue as plain ASCII text, so it was trivial to “root” the box! :)

      It worked out really well for a few years, until script kiddies eventually found it and kept erasing everything, so I shut it down. Yes, the machine was accessible on the public internet with a DNS name. It was hosted at the local ISP I helped run.

      It was a great little community. The hostname was never publicly posted as open to the public, but it spread through word of mouth, or via curious people who would see the hostname and go: huh, what does that machine do? :)

      Had maybe 100 users on it, before it died.

      1. 3

        One of the cool ideas I’ve run across (I think from Paul Graham’s On Lisp) is petrification of a program - stabilizing and formalizing the program past the quick and dirty stage. I know that type hints/gradual typing are helping this, but would love to see more ideas (besides @andyc’s Oil) that can transition shell/quick scripts to something with more types, error handling, composability (besides pipes).

        1. 3

          There is the Oh shell: https://github.com/michaelmacinnis/oh

          1. 2

            Excellent point. I finished watching the BSDCan video (from the lobsters discussion), but haven’t dug into playing with it yet.

        1. 2

          I can’t decide if Let’s Encrypt is a godsend or a threat.

          On one hand, it lets you support HTTPS for free.
          On the other, they are accumulating enormous power worldwide.

          1. 8

            Agreed, they are quickly becoming the only game in town when it comes to TLS certs. Luckily they are a non-profit, so they have more transparency than, say, Google, who took over our email.

            It’s awesome that we have easy, free TLS certs, but there shouldn’t be a single provider for such things.

            1. 3

              Is there anything preventing another (or another ten) free CAs from existing? Let’s Encrypt just showed everyone how, and their protocol isn’t a secret.

              1. 6

                OpenCA tried for a long time and I think has now pretty much given up: https://www.openca.org/ They just exist in their own little bubble now.

                Basically nobody wants to certify you unless you are willing to pay through the nose and are considered friendly to the established way of doing things. LE bought their way in, I’m sure, to get their cert cross-signed, which is how they managed it so “quickly”, and it still took YEARS.

                1. 1

                  Have you ever tried to create a CA?

                  1. 3

                    I’ve created lots of CAs, trusted by at most 250 people. :)

                    Of course it’s not easy to make a new generally-trusted CA — nor would I want it to be. It’s a big complicated expensive thing to do properly. But if you’re willing to do the work, and can arrange the funding, is anything stopping you? I don’t know that browser vendors are against the idea of multiple free CAs.

                    1. 3

                      Obviously I was not talking about the technical stuff.

                      One of my previous bosses explored the matter. He already had the technical staff, but he wanted to become an official authority. This was around 2005.

                      After a while (and a lot of money spent on legal consulting) he gave up.

                      He said: “it’s easier to open a bank”.

                      In a sense, it’s reasonable, as European law wants to protect citizens from unsafe organisations.

                      But, it’s definitely not a technical problem.

                2. 1

                  Luckily they are a non-profit

                  The Linux Foundation is a 501(c)(6) organization: a business league that is not organized for profit, where no part of the net earnings goes to the benefit of any private shareholder or individual.
                  The fact that all of its members benefit from its work without direct economic gain doesn’t mean it has the public good at heart, even less the public good of the whole world.

                  It sounds a lot like another attempt to centralize the Internet, always around the same center.

                  It’s awesome that we have easy, free TLS certs, but there shouldn’t be a single provider for such things.

                  And such certificates protect people from a lot of relatively cheap attacks. That’s why I’m in doubt.

                  Probably, issuing TLS certificates should be a public service free for each citizen of a state.

                  1. 3

                    Oh jeez. Thanks, I didn’t realize it was not a 501(c)(3). When LE was first coming around they talked about being a non-profit and I just assumed. That’s what happens when I assume.

                    Proof, so we aren’t just taking @Shamar’s word for it:

                    Linux Foundation Bylaws: https://www.linuxfoundation.org/bylaws/

                    Section 2.1 states the 501(c)(6) designation with the IRS.

                    My point stands, that we do get more transparency this way than we would if they were a private for-profit company, but I agree it’s definitely not ideal.

                    So you think local cities, counties, states and countries should get in the TLS cert business? That would be interesting.

                    1. 5

                      It’s true the Linux Foundation isn’t a 501(c)(3) but the Linux Foundation doesn’t control Let’s Encrypt, the Internet Security Research Group does. And the ISRG is a 501(c)(3).

                      So your initial post is correct and Shamar is mistaken.

                      1. 1

                        The Linux Foundation will provide general and administrative support services, as well as services related to fundraising, financial management, contract and vendor management, and human resources.

                        This is from the page linked by @philpennock.

                        I wonder what is left to do for the Let’s Encrypt staff! :-)

                        I’m amused by how easily people forget that organisations are composed of people.

                        What if Linux Foundation decides to drop its support?
                        No funds. No finance. No contracts. No human resources.
                        Oh and no hosting, too.

                        But hey! I’m mistaken! ;-)

                        1. 2

                          Unless you have inside information on the contract, saying LE depends on the Linux Foundation is pure speculation.

                          I can speculate too. Should the Linux Foundation withdraw support there are plenty of companies and organisations that have a vested interest in keeping LetsEncrypt afloat. They’ll be fine.

                          1. 1

                            Agreed.

                            Feel free to think that it’s a philanthropic endeavour!
                            I will continue to think it’s a political one.

                            The point (and as I said, I cannot answer it yet) is whether the global risk of a single US organisation being able to break most HTTPS traffic worldwide is worth the benefit of free certificates.

                            1. 3

                              Any trusted CA can MITM, though, not just the one that issued the certificate. So the problem is (and always has been) much, much worse than that.

                              1. 1

                                Good point! I stand corrected. :-)

                                Still note how it’s easier for the certificate issuer to go unnoticed.

                    2. 4

                      What’s Linux Foundation got to do with it? Let’s Encrypt is run by ISRG, Internet Security Research Group, an organization from the IAB/IETF family if memory serves.

                      They’re a 501(c)(3).

                      1. 2

                        LF provide hosting and support services, yes. Much as I pay AWS to run some things for me, which doesn’t lead to Amazon being in charge. https://letsencrypt.org/2015/04/09/isrg-lf-collaboration.html explains the connection.

                        1. 1

                          Look at the home page, top-right.

                          1. 2

                            The Linux Foundation provides hosting, fundraising and other services. LetsEncrypt collaborates with them but is run by the ISRG:

                            Let’s Encrypt is a free, automated, and open certificate authority brought to you by the non-profit Internet Security Research Group (ISRG).

                  1. 3

                    Rails’ credentials/secrets file is the devil. So I recently integrated envkey.com with my app, and it was a breeze to do. It might be pricier than the AWS solution, but the capabilities I get are pretty nice.

                    Being a super small startup, I preferred paying EnvKey some money to offload the dev effort to come up with something which would never be as good as the EnvKey solution.

                    A few months in, and so far so good!

                    1. 1

                      Envkey.com looks interesting, and there’s definitely some merit to using a third party to store and encrypt your credentials over using AWS to encrypt credentials for AWS services.

                      $20/month isn’t terrible, but it’s a bit pricey and per-seat pricing feels a little out of line with the value of the service they’re providing. But who am I to judge a SaaS that looks like it’s paying the rent?

                      I worry about one thing: how do you securely deploy your envkey api key?

                      This is the same problem with HashiCorp Vault or any external secret keeper. There’s a secret which unlocks all your other secrets…that makes it the most important secret. How are you injecting that secret into your application? The whole reason the AWS Parameter store is viable is that access to download and decrypt your secrets isn’t controlled by a key stored on the machine. It’s controlled by the EC2 or container’s instance role.
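                      For reference, the instance-role pattern is a single call; a sketch with the AWS CLI (the parameter name is made up), where the instance role — not any key stored on the box — authorizes both the read and the KMS decrypt:

                      ```shell
                      # Runs on an EC2 instance or container; credentials come from the
                      # instance role automatically, so no secret ever lands on disk.
                      aws ssm get-parameter \
                          --name /myapp/prod/db_password \
                          --with-decryption \
                          --query Parameter.Value \
                          --output text
                      ```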

                      1. 2

                        Hashicorp Vault has many ways to authenticate and get a token: you can tie it to EC2, or you can auth against LDAP/GitHub, or use AppRole (where you can tie it to specific machines/applications), etc. But it is definitely a turtles-all-the-way-down approach. The goal of Vault is that you only have to worry about deploying the token, and Vault will then handle ALL of your secret/sensitive information for you, with transit, DB and the other backends. So at least the problem becomes “manageable”, since it’s only the one token you have to get out there.

                    1. 2

                      I nearly posted this as an ‘ask’: Slack is not good for $WORK’s use case because it does not have an on-premise option. What on-premise alternatives are people using/would you recommend?

                      1. 4

                        I’ve used Mattermost before, which AFAIK has an on-prem version - just as a user, not setup or admin so I can’t speak to that end.

                        1. 6

                          I’ve heard rumblings about Zulip being a decent option too. I haven’t used it myself though.

                          1. 2

                            Same, actually. It does look very interesting; I’d be highly interested to hear whether anyone has experience with it.

                            1. 1

                              Zulip looks pretty solid, thanks for mentioning it. We may give it a try…

                            2. 2

                              We’ve used Mattermost for a few years now. It’s pretty easy to set up and maintain: you basically just replace the Go binary every 30 days with the new version. We just recently moved to the integrated version with GitLab, and now GitLab handles it for us. Even easier now, since GitLab is just a system package you upgrade.

                              1. 2

                                A lot of people have said Mattermost, might be a good drop-in replacement. According to the orange site they’re considering dropping a “welcome from Hipchat” introductory offer, which is probably a smart move.

                                1. 2

                                  IIRC Mattermost is open core. I’ve heard good things about Zulip. Personally, I like Matrix, which federates and bridges.

                                2. 3

                                  Matrix is fairly nice to use. I had some issues hosting it though.

                                1. 9

                                  Many of the author’s experiences speaking with senior government officials match my own.

                                  However, there’s one element that I think is very easily lost in this conversation, and which I want to highlight: there is no group I spend more time trying to convince of the importance of security than other software engineers.

                                  Software engineers are the only group of people I’ve ever had push back when I say we desperately need to move to memory safe programming languages. All manner of non-engineers, when I’ve explained the damages wrought by C/C++, and how nearly every mass-vulnerability they know about has a shared root cause, generally understand why this is an important problem, and want to discuss ideas about how do we resolve this.

                                  Engineers complain to me that rewriting things is hard, and besides if you’re disciplined in writing C and use sanitizers and fuzzers you’ll be ok. Rust isn’t ergonomic enough, and we’ve got a really good hiring pipeline for C++ engineers.

                                  If we want to build software safety into everything we do, we need to get engineers on board, because they’re the obstacle.

                                  1. 11

                                    People don’t even use sanitizers and fuzzers, so I’m not sure why you would expect them to rewrite in Rust. It’s literally 1000x less effort.

                                    As far as I can tell, CloudFlare’s CloudBleed bug would have been found if they compiled with ASAN and fed about 100 HTML pages into it. You don’t even have to install anything; it’s built right into your compiler! (both gcc and Clang)
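                                    To underline how low the barrier is, here’s the whole workflow sketched end to end (the bug is contrived; clang is shown, gcc works the same way):

                                    ```shell
                                    # A one-byte heap overflow that ASan catches immediately.
                                    cat > bug.c <<'EOF'
                                    #include <stdlib.h>
                                    #include <string.h>
                                    int main(void) {
                                        char *p = malloc(8);
                                        memset(p, 'A', 9);   /* writes one byte past the allocation */
                                        free(p);
                                        return 0;
                                    }
                                    EOF
                                    clang -g -fsanitize=address bug.c -o bug
                                    ./bug   # ASan aborts with a heap-buffer-overflow report and a stack trace
                                    ```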

                                    I also don’t agree that “nearly every mass vulnerability has a shared root cause”. For example, you could have written ShellShock in Rust, Python, or any other language. It’s basically a “self shell-code injection” and has very little to do with memory safety (despite a number of people being confused by this.)

                                    The core problem is the sheer complexity and number of lines of unaudited code, and the fact that core software like bash has exactly one maintainer. There are actually too many people trying to learn Rust and too few people maintaining software that everybody actually uses.

                                    In some sense, Rust can make things worse, because it leads to more source code. We already have memory-safe languages: Python, Ruby, JavaScript, Java, C#, Erlang, Clojure, OCaml, etc.

                                    Software engineers should definitely spend more time on security, and need to be educated more. But the jump to Rust is a non-sequitur. Rust is great for kernels where the above languages don’t work, and where C and C++ are too unsafe. But kernels are only a part of the software landscape, and they don’t contain the majority of security bugs.

                                    I would guess that most data breaches these days have nothing to do with memory safety, and have more to do with bugs similar to the ones in the OWASP top 10 (e.g. XSS, etc.)

                                    https://www.owasp.org/images/7/72/OWASP_Top_10-2017_%28en%29.pdf.pdf


                                    Edit: as another example, Mirai has nothing to do with memory safety:

                                    https://en.wikipedia.org/wiki/Mirai_(malware)

                                    All it does is try default passwords, which gives you some idea of where the “bar” is. Rewriting software in Rust has nothing to do with that, and will actually hurt, because it takes effort and mindshare away from solutions with a better cost/benefit ratio. And don’t get me wrong, I think Rust has its uses. I just see people overstating them quite frequently, with the “why don’t more people get Rust?” type of attitude.

                                    1. 2

                                      There were languages like Opa that tried to address what happened on web app side. They got ignored just like people ignore safety in C. Apathy is the greatest enemy of security. It’s another reason we’re pushing the memory-safe, higher-level languages, though, with libraries for stuff likely to be security-critical. The apathetic programmers do less damage on average that way. Things that were code injections become denial of service. That’s an improvement.

                                    2. 2

                                      Not only software engineers: almost the entire IT industry has buried its head in the sand and is trying desperately hard to hide from the problem, because “security is too hard”. We are pulling teeth to get people to do even the minimal upgrades. I recently had a software vendor refusing to support anything other than TLS 1.0. After many exchanges back and forth, including an article from Microsoft (and basically every other sane person) saying they were dropping all support of older TLS protocols because of their insecurity, they finally said: OK, we will look into it. I’m sure we all have stories like this.

                                      If you can’t even bother to take the minimum of steps to upgrade your security stack after more than a decade (TLS 1.0 was released in 1999, and TLS 1.2 is almost exactly a decade old now) because it’s “too hard”, trying to get people to move off memory-unsafe languages like C/C++ is a non-starter.

                                      But I agree with you, and the author.

                                      1. 2

                                        I would like to use TLS 1.3 for an existing product. It’s in C and Lua. The current system is network driven using select() (or poll() or epoll(), depending upon the platform). The trouble I’m having is finding a library that is easy, or even a bit complicated but sane, to use. The evented nature means I am notified when data comes in, and I want to feed this to the TLS library instead of having the TLS library manage the sockets for me. But the documentation is dense, and the tutorials only cover blocking calls, and that’s when they’re readable! Couple this with the whole “don’t you even #$@#$# think of implementing crypto” that is screamed from the rooftops, and no wonder software engineers steer away from this crap.

                                        I want a crypto library that just handles the crypto stuff. Don’t do the network, I already have a framework for that. I just need a way to feed data into it, and get data out of it, and tell me if the certificate is good or not. That’s all I’m looking for.
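                                        For what it’s worth, OpenSSL can be driven in exactly this feed-bytes-in/get-bytes-out style via memory BIOs, although the documentation buries it. A rough sketch, with error handling elided and names like `send_cb` standing in for your existing event loop:

                                        ```c
                                        /* Sketch: drive TLS from an existing event loop with memory BIOs.
                                         * OpenSSL never touches the socket; we shuttle the bytes ourselves. */
                                        #include <openssl/ssl.h>

                                        void tls_setup(SSL **ssl, BIO **rbio, BIO **wbio) {
                                            SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
                                            *rbio = BIO_new(BIO_s_mem());    /* network -> SSL (ciphertext in)  */
                                            *wbio = BIO_new(BIO_s_mem());    /* SSL -> network (ciphertext out) */
                                            *ssl = SSL_new(ctx);
                                            SSL_set_bio(*ssl, *rbio, *wbio); /* SSL takes ownership of both     */
                                            SSL_set_connect_state(*ssl);
                                        }

                                        /* Call this when your event loop hands you bytes from the socket. */
                                        void on_net_data(SSL *ssl, BIO *rbio, BIO *wbio,
                                                         const char *net_in, int net_len,
                                                         void (*send_cb)(const char *, int)) {
                                            char plain[4096], cipher[4096];
                                            int n;

                                            BIO_write(rbio, net_in, net_len);        /* feed ciphertext to TLS  */

                                            n = SSL_read(ssl, plain, sizeof plain);  /* try to extract plaintext */
                                            if (n <= 0 && SSL_get_error(ssl, n) == SSL_ERROR_WANT_READ)
                                                ;  /* handshake or record incomplete: wait for more data */

                                            /* Anything TLS wants sent (handshake, records) shows up in wbio. */
                                            while ((n = BIO_read(wbio, cipher, sizeof cipher)) > 0)
                                                send_cb(cipher, n);
                                        }
                                        ```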

                                        1. 2

                                          OpenBSD’s libtls.

                                          1. 2

                                            TLS 1.3 is not quite ready for production use unless you are an early adopter like Cloudflare. Easy-to-use, well-reviewed APIs are not there yet.

                                            Crypto libraries: OpenBSD’s libtls, as @kristapsdz mentioned, or libsodium/NaCl, or OpenSSL. If it’s just for your internal connections and you don’t actually need TLS, just talking to libsodium or NaCl for an encrypted stream of bytes is probably your best bet, using XSalsa20+Poly1305. See: https://latacora.singles/2018/04/03/cryptographic-right-answers.html

                                            TLS is a complicated protocol (TLS 1.3 reduces a LOT of the complexity, but it’s still very complicated).

                                            If you are deploying to Apple, Microsoft or OpenBSD platforms, you should just tie into the OS-provided TLS services. Let them handle all of that for you (including the socket). Apple and MS platforms have high-level APIs that will do all the security crap for you. OpenBSD has libtls.

                                            On other platforms (Linux, etc.), you should probably just use OpenSSL. Yes, it’s a fairly gross API, but it’s pretty well maintained nowadays (5 years ago, it would not have qualified as well maintained). The other option is libsodium/NaCl.
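                                            The libsodium route really is only a few calls; a minimal sketch of XSalsa20+Poly1305 secretbox encryption (getting the shared key to both sides is your problem, and the hard part):

                                            ```c
                                            /* Encrypt/decrypt a buffer with XSalsa20+Poly1305 via libsodium.
                                             * Build: cc enc.c -lsodium */
                                            #include <sodium.h>
                                            #include <stdio.h>

                                            int main(void) {
                                                if (sodium_init() < 0) return 1;

                                                unsigned char key[crypto_secretbox_KEYBYTES];
                                                unsigned char nonce[crypto_secretbox_NONCEBYTES];
                                                crypto_secretbox_keygen(key);          /* both sides share this key   */
                                                randombytes_buf(nonce, sizeof nonce);  /* nonce is public, never reused */

                                                const unsigned char msg[] = "hello over an untrusted wire";
                                                unsigned char boxed[crypto_secretbox_MACBYTES + sizeof msg];
                                                crypto_secretbox_easy(boxed, msg, sizeof msg, nonce, key);

                                                /* The receiver authenticates and decrypts in one call: */
                                                unsigned char plain[sizeof msg];
                                                if (crypto_secretbox_open_easy(plain, boxed, sizeof boxed,
                                                                               nonce, key) != 0)
                                                    return 1;  /* forged or corrupted */
                                                printf("%s\n", plain);
                                                return 0;
                                            }
                                            ```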

                                            1. 1

                                              Okay, fine. Are there any crypto libraries that are easy to use for whatever is current today? My problem is: a company that is providing us information today via DNS has been invaded by a bunch of hipster developers [1] who drank the REST Kool-Aid™, so I need a way to make an HTTPS call in an event-driven architecture without blowing our Super Scary SLAs with the Monopolistic Phone Company (which would cause the all-important money to flow the other way), so your advice to let OS-provided TLS services control the socket is a non-starter.

                                              And for the record, the stuff I write is deployed to Solaris. For reasons that exceed my pay grade.

                                              So I read the Cryptographic Right Answers you linked to and … okay. That didn’t help me in the slightest.

                                              The program I’m working on is in C, and not written by me (so it’s in “maintenance mode”). It works, and rewriting it from scratch is probably also a non-starter.

                                              Are you getting a sense of the uphill battle this is?

                                              [1] Forgive my snarky demeanor. I am not happy about this.

                                              Edit: further clarification on what I have to work with.

                                              1. 1

                                                I get it, it sucks sometimes. I’m guessing you are not currently doing any TLS at all? So you can’t just upgrade the libraries you are currently using for TLS, whatever they are.

                                                In my vendor example, the vendor already implemented TLS (1.0) and then promptly stopped. They have never bothered to upgrade to newer versions of TLS. I don’t know the details of their implementation, obviously, since it’s closed source; but unless they went crazy and wrote their own crypto code, upgrading their crypto libraries is probably all that’s required. I’m not saying it’s necessarily easy to do that, but this is something everyone should do at least once a decade, just to keep the code from dying a terrible death of rot anyway. TLS 1.2 becomes a decade-old standard next month.

                                                I don’t work on Solaris platforms (and haven’t in at least a decade, so you are probably better off checking with other Solaris people). Oracle might have a TLS library these days, I have no clue. I tend to avoid Oracle land whenever possible. I’m sorry you have to play in their sandbox.

                                                I agree the Crypto right-answers page isn’t useful for you, since you just want TLS, It’s target is for developers who need more than TLS. I used it here mostly as proof of why I recommended XSalsa20+Poly1305 for symmetric encryption. Again, you know you need TLS, so it’s a non-useful document for you at this point.

                                                Event-driven IO is possible with OpenSSL, but it’s not super easy; see: https://www.openssl.org/docs/faq.html#PROG11. Then again, nothing around event-driven IO is super easy. Haproxy and Nginx are both open source and both manage to do event-driven TLS, so you have working code you can go examine. Plus it might give you access to developers who have done event-driven IO with TLS. I haven’t ever written that implementation, so I can’t help with those specifics.

                                                OpenSSL is working on making its APIs easier to use. It’s a long, slow haul, but it’s definitely a known problem, and they are working on it.

                                                As for letting the OS do the work for you: you are correct, there are definitely use cases where it won’t work, and it seems you fit the bill. For most applications, letting the OS do it for you is generally the best answer, especially around crypto, which can be hard to get right, and of course it only applies to the platforms that offer such things (Apple, MS, etc.). Which is why I started there ;)

                                                Anyways, good luck! Sorry I can’t just point to a nice easy example, for you. Maybe someone else around here can.

                                                1. 1

                                                  I’m not even using TCP! This is all driven with UDP. TCP complicates things but is manageable. Adding a crap API between TCP and my application? Yeah, I can see why no one is lining up to secure their code.

                                                  1. 1

                                                    I think there is a communication issue here.

                                                    The vendor you are connecting with over HTTPS supports UDP packets on a REST API interface? Really? Crazier things have happened, I guess.

                                                    I think what you are saying is you are doing DNS over UDP for now, but are being forced into HTTPS over TCP?

                                                    DNS over UDP is very far away from a HTTPS rest API.

                                                    Anyways, for being an HTTPS client, against a HTTPS REST API over TCP, you have 2 decent options:

                                                    Event driven/async: use libevent, example code: https://github.com/libevent/libevent/blob/master/sample/https-client.c

                                                    But most people will be boring, and use something like libcurl (https://curl.haxx.se/docs/features.html) and do blocking I/O. If they have enough network load, they will setup a pool of workers.
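                                                    The boring libcurl route is only a handful of calls; a minimal blocking-GET sketch (the URL is a placeholder):

                                                    ```c
                                                    /* Minimal blocking HTTPS GET with libcurl.
                                                     * Build: cc client.c -lcurl */
                                                    #include <curl/curl.h>

                                                    int main(void) {
                                                        CURL *h = curl_easy_init();
                                                        if (!h) return 1;
                                                        curl_easy_setopt(h, CURLOPT_URL,
                                                                         "https://api.example.com/v1/lookup");
                                                        /* libcurl verifies the peer certificate by default */
                                                        CURLcode rc = curl_easy_perform(h);  /* body goes to stdout */
                                                        curl_easy_cleanup(h);
                                                        return rc == CURLE_OK ? 0 : 1;
                                                    }
                                                    ```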

                                                    1. 2

                                                      Right now, we’re looking up NAPTR records over DNS (RFC 3401 to RFC 3404). The summary is that one can query name information for a given phone number (so 561-555-5678 is ACME Corp.). The vendor wants to switch to a REST API and return JSON. Normally I would roll my eyes at this, but the context I’m working in is more realtime: as in, Alice is calling Bob and we need to look up the information as the call is being placed! We have a hard deadline with the Monopolistic Phone Company to provide this information [1].

                                                      We don’t use libevent, but I’ll look at the code anyway and try to make heads or tails of it.

                                                      [1] Why are we querying a vendor for this? Well, it used to be in house, but now “we lease this back from the company we sold it to - that way it comes under the monthly current budget and not the capital account.” (At least, that’s my rationale for it.)

                                                      1. 2

                                                        Tell me how it goes. Fwiw, you might want to take a quick look at mbed TLS. Sure it wants to wrap a socket fd in its own context and use read/write on it, but you can still poll that fd and then just call the relevant mbedtls function when you have data coming in. It does also support non-blocking operation.

                                                        https://tls.mbed.org/api/net__sockets_8h.html#a2ee4acdc24ef78c9acf5068a423b8c30 https://tls.mbed.org/api/net__sockets_8h.html#a03af351ec420bbeb5e91357abcfb3663

                                                        https://tls.mbed.org/api/structmbedtls__net__context.html

                                                        https://tls.mbed.org/kb/how-to/mbedtls-tutorial (non-blocking io not covered in the tutorial but it doesn’t change things much)

                                                        I’ve no experience with UDP (yet – soon I should), but if you’re doing that, well, mbedtls should handle DTLS too: https://tls.mbed.org/kb/how-to/dtls-tutorial (There’s even a note relevant to event based i/o)

                                                        We use mbedtls at work in a heavily event based system with libev. Sorry, no war stories yet, I only got the job a few weeks ago.

                                                        1. 1

                                                          Right, let’s add MORE latency for a real-time-ish system. Always a great idea! :)

                                        1. 2

                                          Don’t all VCSs have tools to modify history? I think svnadmin does: http://oliverguenther.de/2016/01/rewriting-subversion-history/ (assuming there aren’t any blockchain-based VCSs. I daren’t look)

                                          If the distinction being drawn is ‘admin’ vs ‘user’ tooling, I guess - like workflow - git punts that to the surrounding culture and environment (as it does “which version is the ‘master’” - which is the same feature/bug of any DVCS).

                                          I admit I like being able to say “v234” but really, what that means is “v234 of the (single) upstream repo which can change any time the upstream repo manager runs svnadmin”.

                                          There’s nothing to stop github putting a sequential “v1, v2, v3, …” on commits to master or otherwise blessing some workflow.

                                          I think the differences aren’t so much about features + capability and tooling as culture.

                                          1. 2

                                            git is a merkle-tree-based system, which is what I assume you meant by “blockchain-based” in this context

                                            1. 1

                                              Yes it is, but no - that’s not what I meant. I mean that I expect every VCS to be able to rewrite history since the data files are under control of the admin. git can do it, svn can do it. You can edit RCS files by hand if you want to (unsure if there is tooling to do it).

                                              i.e. linus can rewrite his git history. It will be out of sync with other people, but that is then a social issue, not a technical one (I admit this is a fine point).

                                              The only time you can’t rewrite history is in the “public immutable” world of blockchain - since the data files aren’t under your control. I don’t know if someone has built a vcs like that and my comment was really just a side swipe at blockchain hype.
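                                              To make the point concrete, here’s a throwaway demo (repo name and commit messages are made up) of rewriting git history with nothing but stock tooling:

```shell
# Any VCS admin can rewrite history; git even ships the tools for it.
git init -q history-demo && cd history-demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "original commit"
# Rewrite the last commit -- the old version of history is simply gone
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --amend --allow-empty -m "rewritten commit"
git log --format=%s
```

                                              Tools like git filter-branch (or svnadmin dump/load on the svn side) scale the same idea up to whole repositories.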

                                              1. 1

                                                you can if you get 51%

                                                1. 1

                                                  https://github.com/clehner/git-ssb not exactly blockchain, but immutable history just the same.

                                            1. 6

                                              Are you confident that every single user of your systems is going to out-of-band verify that that is the correct host key?

                                              If your production infrastructure has not solved this problem already, you should fix your infrastructure. There are multiple ways.

                                              1. Use OpenSSH with an internal CA
                                              2. Automate collection of server public ssh fingerprints and deployment of known_hosts files to all systems and clients (we do it via LDAP + other glue)
                                              3. Utilize a third party tool that can do this for you (e.g., krypt.co)

                                              Your users should never see the message “the authenticity of (host) cannot be established”
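                                              For option 1, a rough sketch of the OpenSSH internal-CA flow (all names, principals, and paths here are examples):

```shell
# 1) Create the internal host CA keypair (do this once; keep host_ca offline)
ssh-keygen -q -t ed25519 -N "" -f host_ca -C "internal host CA"

# 2) Sign each server's host public key with the CA
#    (using a freshly generated stand-in host key for this demo)
ssh-keygen -q -t ed25519 -N "" -f ssh_host_ed25519_key
ssh-keygen -q -s host_ca -I server1.example.com -h \
    -n server1.example.com,10.0.0.5 ssh_host_ed25519_key.pub

# 3) On every client, trust anything signed by the CA -- no TOFU prompt
echo "@cert-authority *.example.com $(cat host_ca.pub)" >> known_hosts
```

                                              On a real server you’d also point HostCertificate at the generated -cert.pub file in sshd_config.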

                                              1. 4

                                                Makes me wonder how Oxy actually authenticates hosts. The author hates on TOFU but mentions no alternatives AFAICS, not even those available in OpenSSH?

                                                1. 3

                                                  It only authenticates keys, and it makes key management YOUR problem. See https://github.com/oxy-secure/oxy/blob/master/protocol.txt for more details.

                                                  I.e., you have to copy keys from the server to the client before the client can connect (and possibly the other way, from the client to the server, depending on where you generate them).

                                                  1. 1

                                                    Key management is already your problem.

                                                    ssh’s default simply lets you pretend that it isn’t.

                                                    1. 2

                                                      Very true. I didn’t mean to imply otherwise.

                                              1. 3

                                                Is there a comprehensive and/or up-to-date set of recommendations for simple, static HTTP servers anywhere?

                                                After years of trying to lock down Apache, PHP, CMSs, etc. and keep up to date on vulnerabilities and patches, I opted to switch to a static site and a simple HTTP server to reduce my attack surface and the possibility of misconfiguration.

                                                thttpd seems to be the classic option, but I’m a little wary of it due to past security issues and an apparent lack of maintenance (being “done” would be fine, but the security issues make that less credible). I’m currently using darkhttpd after seeing it recommended on http://suckless.org/rocks

                                                Edit: I upvoted the third-party hosting suggestions (S3, CloudFlare, etc.) since that’s clearly the most practical; for personal stuff I still prefer self-hosted FOSS though :)

                                                1. 4

                                                  If all you need is static HTTP you don’t have to host it yourself. I host my blog in Amazon S3 (because I wanted to add SSL and GitHub didn’t support that last year) and over the last 13 months it’s cost me about $0.91/month, about two thirds of which is Route 53 :-)

                                                  AWS gives you free SSL certificates, which was one of the main drivers for me to go with that approach.

                                                  1. 3

                                                    I use S3 / CloudFront for static HTTP content. It’s idiot proof (important for idiots like me!), highly reliable, and I spend less every year on it than I spend on a cup of coffee.

                                                    The only real security risk I worried about was that someone could DDoS the site and run up my bill, but I deployed a CloudWatch alarm tied to a Lambda to monitor this. It’s never fired. I think at my worst month I used 3% of my outbound budget :)

                                                    1. 1

                                                      I’ve always wondered why AWS doesn’t provide a spending limit feature… it can’t be due to a technical reason, right? I know their service is supposed to be more complex, but even the cheapest VPS provider gives you this option, often enabled by default. I can only conclude they decided they don’t want that kind of customer.

                                                      1. 1

                                                        I also worried about the risk of “DDoS causing unexpected cost” when I was looking for a place to host my private DNS zones. To me it appeared that the free Cloudflare plan (https://www.cloudflare.com/plans/) was the best fit (basically free unmetered service).

                                                        Would using that same free plan be a safer choice than Cloudfront from a cost perspective?

                                                      2. 3

                                                        You’d be hard pressed to go wrong with httpd from the OpenBSD project. It’s quite stable and has been in OpenBSD base for a while now. Its lack of features definitely keeps it in the simple category. :)

                                                        There is also the NGINX stable branch. It’s not as simple as OpenBSD’s option, but it is stable, maintained, and well hardened by being very popular.

                                                        1. 3

                                                          In the hurricane architecture, they used Nginx (dynamic caching) -> Varnish (static caching) -> HAProxy (crypto) -> optional Cloudflare for acceleration/DDoS. It looked like a nice default for something that needed a balance of flexibility, security, and performance. Depending on one’s needs, Nginx might get swapped for a simpler server, but it gets lots of security review.

                                                          I’ll also note this list of web servers for the OP.

                                                        2. 1

                                                          Check out this.

                                                          1. 1

                                                            Yeah, I also like this similar list, but neither provide value judgements about e.g. whether it’s sane to leave such things exposed to the Internet unattended for many years (except for OS security updates).

                                                        1. 8

                                                          This is getting more and more common since GDPR. A way to “bypass” these kinds of tactics is to enable GDPR/cookie-consent blocking with an ad blocker (at least this is possible with uBlock Origin). It automatically hides these annoying banners/popups without forcing you to opt in.

                                                          1. 3

                                                            It’s even more fun when you consider how many of these websites then set the cookies that you’d actually have to opt in to…

                                                            1. 1

                                                              How do you do this with uBlock Origin? I didn’t see a setting about GDPR or cookie/consent blocking.

                                                              1. 12

                                                                If you go in uBlock Origin preferences → Filter lists, under “Annoyances” there’s “Fanboy’s Cookiemonster List” which hides “we use cookies” banners (and apparently will also hide GDPR banners).

                                                                1. 1

                                                                  <3 THANKS!

                                                            1. 1

                                                              Is there a graph showing how well it holds up?

                                                              1. 1

                                                                No, but I can tell you right now it doesn’t.

                                                                1. 1

                                                                  Not the specifics, but the over-arching ideas pretty much hold up I’d say.

                                                                   • Open Systems: Sure, Oracle hasn’t died yet, but even MS is getting on the Open bandwagon to some degree.
                                                                  • Software Distribution Channels: well OK the Internet ate the CDROM up, but retail software in a store is 99% dead, he called that.
                                                                  • Kernel/base source code explosion: Drivers def. take up way too much room in the kernel :)
                                                                  • Multiprocessor: def. true
                                                                  • Networking: well OK 3 directions, Internet/WAN, Wireless(LAN) and high-speed LAN(fiber and friends)
                                                                  • Java: pretty much true, minus the systems programming part.
                                                                  • Nomadic devices: smartphones totally made this true.
                                                                  1. 1

                                                                    I was mainly referring to the title claim of “2^(Year-1984) Million Instructions per Second” because OP was asking for a graph.

                                                              1. 3

                                                                I like the truly p2p aspect here, but it’s a big red flag that SSB seems to refer to a specific node.js implementation and not to a wider protocol with multiple implementations. I did a bit of digging and couldn’t find anything, but maybe I missed something?

                                                                1. 4

                                                                  The protocol is defined: https://ssbc.github.io/scuttlebutt-protocol-guide/

                                                                  rust client: https://crates.io/crates/ssb-client

                                                                  Other implementations (Go, C, etc.) are being worked on as well.

                                                                  1. 3

                                                                    A pity the signing / marshalling algorithm is such a PITA to implement (the signature must be the last key/value pair in the JSON document, and it signs the bytes of the document up to that point).
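                                                                      For anyone curious, the shape of the scheme is roughly this (a sketch only -- using HMAC as a stand-in for SSB’s actual ed25519 signatures, and made-up field names):

```python
import hashlib
import hmac
import json

SECRET = b"demo-key"  # stand-in; real SSB signs with an ed25519 private key

def sign_message(msg: dict) -> str:
    # Serialize the message *without* a signature field first.
    unsigned = json.dumps(msg, indent=2)
    sig = hmac.new(SECRET, unsigned.encode(), hashlib.sha256).hexdigest()
    # The signature covers the bytes serialized so far, and must be
    # appended as the *last* key/value pair of the document.
    msg["signature"] = sig
    return json.dumps(msg, indent=2)

def verify_message(raw: str) -> bool:
    msg = json.loads(raw)
    sig = msg.pop("signature")
    # Verification must re-serialize the remaining fields byte-for-byte
    # as the sender did -- key order and whitespace both matter.
    unsigned = json.dumps(msg, indent=2)
    expected = hmac.new(SECRET, unsigned.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

                                                                      The painful part is that verify_message only works if json.dumps reproduces the sender’s key order and whitespace byte-for-byte.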

                                                                    1. 2

                                                                      and order has to be maintained. Indeed. Not sure why they designed it that way.

                                                                      1. 1

                                                                        At least being able to produce a known canonical order is important for signing. And the signature cannot be part of that which it signs.

                                                                        1. 1

                                                                          Oh yeah - the canonical form is nonexistent, you just sign whatever bytes you’ve written so far.

                                                                          If you were signing a message body (e.g. a JSON string value) it would be different - but as it stands, relays have to implement whitespace-compatible JSON marshalling with the sender.

                                                                          1. 1

                                                                            duh! sorry, you are right! asleep at the wheel apparently when I wrote that :)

                                                                      2. 2

                                                                        Having alternate clients is a good start, but is it still true that there’s only one server implementation?

                                                                        1. 1

                                                                          I believe someone is working on a go implementation, but I don’t know where the code may be, and I’m not on my SSB machine to try and find it. But there is definitely only one that’s usable at the moment, that I’m aware of…

                                                                          And I agree, it’s a good start. It’s not smartphone/mobile-ready yet either, but work is happening on that front as well.

                                                                    1. 3

                                                                      This link may be an easier one to understand for people not familiar with SSB https://git.scuttlebot.io/%25RPKzL382v2fAia5HuDNHD5kkFdlP7bGvXQApSXqOBwc%3D.sha256

                                                                      It talks about how to move your code from Github into a decentralized SSB. Even if you don’t want to actually do the conversion, it explains how it all works.

                                                                      1. 7

                                                                        There is satellite-based SMS; one such product: https://www.findmespot.com/en/index.php?cid=666. Important events you can almost certainly shrink into an SMS, something like: location 45 ebola +1

                                                                        Another option is to use a gossip protocol, something like https://www.scuttlebutt.nz/. It’s not really there for mobile yet, but there are some Android implementations. Basically it lets any Scuttlebutt user bring the data back, not just the one who entered it into their device, while securely knowing which user input the data. So the local staff will input the data into their device and sync with the people that travel across the region(s). As travelers wander into better connectivity, they can then sync to your pub server, getting the data to you, while the local staff are still local doing their thing and never had to leave. I’d recommend trying to get transport people that travel regularly through the region to be your traveling Scuttlebutt nodes (bus drivers, water carriers, etc.).

                                                                        The protocol is described here: https://ssbc.github.io/scuttlebutt-protocol-guide/

                                                                        The protocol is general enough that you can run the git VCS across it.

                                                                        I run a public scuttlebutt pub here: https://www.zie.one/ if you want to try it out.

                                                                        1. 2

                                                                          In a lot of cases we need a bit more information than what a single SMS can provide.

                                                                          We’re already working on a sort of gossip protocol using the C zyre library on top of 0mq, as well as Bluetooth. It’s still in development but about 90% done. It’s a good idea, but it’s predicated on coming into contact with someone running our apps/systems in a timely manner, which may not happen in a lot of instances. By the time they come into contact with another person, they would probably be in a place that has internet or a connection of some sort.

                                                                          1. 3

                                                                            I don’t know your specific use-case, but at the very least you could easily get “send someone out to collect more information” out of an SMS. Also, if you encode data, you can pack a fair bit into an SMS, given the constraints. But Iridium and other satellite communications providers are not limited to SMS; the Spot I showed is just the consumer-level version of this, and if you are large enough you can reach out to the providers themselves and possibly work out some deal with them.
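                                                                            On packing data into an SMS, a quick sketch of the idea (the field layout and codes are entirely made up):

```python
import base64
import struct

# Pack a report (lat/lon as fixed-point, a disease code, a case count)
# into 12 bytes, then base64 it into 16 SMS-safe characters.
def encode_report(lat: float, lon: float, disease_id: int, cases: int) -> str:
    raw = struct.pack(">iiHH", round(lat * 1e5), round(lon * 1e5),
                      disease_id, cases)
    return base64.b64encode(raw).decode()

def decode_report(sms: str):
    lat_i, lon_i, disease_id, cases = struct.unpack(
        ">iiHH", base64.b64decode(sms))
    return lat_i / 1e5, lon_i / 1e5, disease_id, cases
```

                                                                            At 16 characters per report, that’s roughly ten such reports in a single 160-character SMS, with about 1 m of coordinate precision.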

                                                                            Cool on the gossip protocol. The upside to doing something like SSB(scuttlebutt) is you get use-cases outside of just your domain, so the chances of people, such as transportation drivers, wanting to use it much higher. The more people you can get running a gossip protocol the faster you can sync your data across large swaths of unconnected/sporadically connected populations.

                                                                            Anyways, good luck!

                                                                        1. 1

                                                                          We use Jenkins, but all it does for us is accept webhooks from our central VCS repo on each commit, run make (or another build system) against the new revisions, and properly yell when things go wrong.

                                                                          Pushing all this stuff into your CI doesn’t seem to make a lot of sense, as then you get locked in, for little to no gain as I can see.

                                                                          We treat Jenkins as a distributed task queue, with nice VCS features. Really if Nomad’s task queue or celery or what have you added support for webhooks and a nice way to yell and scream when something went wrong, it would 100% replace Jenkins for us in no time.

                                                                          1. 1

                                                                            At $PREV_JOB we used Jenkinsfiles extensively until some people complained that they couldn’t replicate the builds on their machines. We were building more and more complex test scenarios, and people wanted to run parts of them on their laptops before pushing. Some complained that they had to read Groovy code to wrap their heads around how tests were launched, but the worst was for people working offline or remote… So some people started writing bash scripts to launch stuff (make was another candidate, but people found it too weird).

                                                                            1. 1

                                                                              Make isn’t that weird, but it’s not well understood, sadly. But I do agree with your $PREV_JOB: all of your tests should be runnable pretty much anywhere. That’s certainly our philosophy.

                                                                              I’ve been playing with tmuxp using tmux, and running dependencies for testing that way, so that it can all be interactive very easily. Not sure it works out very well for very complex testing scenarios, but it seems to work out for low to medium complexity so far.

                                                                          1. 18

                                                                            Definitely way too complicated. Nomad (https://nomadproject.io/) is what we chose, because it is so operationally simple. You can wrap your head around it easily.

                                                                            1. 7

                                                                              I haven’t used either in production yet, but isn’t Nomad’s use case much more restricted than Kubernetes’? It covers only the scheduling part and leaves it to the user to define, for example, ingress through a load balancer and so on?

                                                                              1. 10

                                                                                Yes, load balancing is your problem. Nomad is ONLY a task scheduler across a cluster of machines, which is why it’s not rocket science.

                                                                                You say: I need X CPU and X memory, I need these files out on disk (or this Docker image), and run this command.

                                                                                It will enforce that your task gets exactly X memory, X CPU, and X disk, so you can’t over-provision at the task level.

                                                                                It handles batch (i.e. cron) and Spark workloads, system jobs (run on every node), and services (any long-running task). For instance, with Nomad batch jobs you can almost entirely replace Celery and other distributed task queues, in a platform- and language-agnostic way!

                                                                                I’m not sure I’d say the use-case is much more restricted, since you can do load balancing and all the other things k8s does, but you use specialized tools for these things:

                                                                                • For HTTPS traffic you can use Fabio, Traefik, HAProxy, Nginx, etc.
                                                                                • For TCP traffic you can use Fabio, Relayd, etc.

                                                                                These are outside of Nomad’s scope, except that you can run those jobs inside of Nomad just fine.

                                                                                edit: and it’s all declarative, a total win.
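                                                                                To give a flavor of the declarative bit, a minimal Nomad job spec might look something like this (all values are only illustrative):

```hcl
job "web" {
  datacenters = ["dc1"]
  type        = "service"   # also: "batch" or "system"

  group "frontend" {
    count = 2

    task "nginx" {
      driver = "docker"

      config {
        image = "nginx:alpine"
      }

      # Nomad enforces exactly these resources for the task
      resources {
        cpu    = 500 # MHz
        memory = 256 # MB
      }
    }
  }
}
```

                                                                                You describe the desired state and Nomad figures out where in the cluster to run it.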

                                                                                1. 1

                                                                                  Why not haproxy for tcp too?

                                                                                  1. 1

                                                                                    I don’t actually use HAProxy, so I can’t really comment on if it does TCP as well, if it does, AWESOME. I was definitely not trying to be limiting, hence the etc. at the end of both of those.

                                                                                    We use Nginx and Relayd.

                                                                                    1. 2

                                                                                      It does TCP. See the reliability and security sections of the web site to see why you might want it.

                                                                                      1. 2

                                                                                        Thanks!

                                                                              2. 4

                                                                                Oooh, the fact that it’s by HashiCorp is a good sign. I’ll have to read up on this. Thanks!

                                                                              1. 2

                                                                                Ah, yes it is. If it is on GitHub, for example, anyone can use it, modify it, contribute to it. They can even add some of the stuff the author talks about: READMEs, documentation, and comments. They can submit bugs and suggestions; they can work on features or fixes. “Just” putting it up is often good enough.

                                                                                1. 3

                                                                                  Agreed, I regularly send patches against README and docs for things that I use, as I’m learning how to use them. It’s just good manners to take your newly acquired knowledge and help others with it. Doubly so if you find something in the documentation that doesn’t exist in the executable.

                                                                                1. 20

                                                                                  Bitwarden is my tool of choice for this. I haven’t been a fan of other more CLI-centric password managers as they usually don’t have browser integration. The usability of using an in-browser UI to generate a random password and the prompts to save it when I submit forms are very important IMO. Nothing has come close to that while also being open source.

                                                                                  1. 3

                                                                                    One thing that irks me about Bitwarden is having to provide an email address and getting an installation id & key if I’d like to self host it for myself. Please correct me if I’m wrong but from what I understand, even for using it without the “premium” features one still needs to perform this step.

                                                                                    If so, I think I’ll stick with my pass + rofi-pass + Password Store for Android combo for now.

                                                                                    1. 5

                                                                                      This is true; there are ways around it, with a little work, since it is OSS. However, there are a few 3rd-party tools, 2 of which are server implementations: bitwarden-go (https://github.com/VictorNine/bitwarden-go) and bitwarden-ruby (https://github.com/jcs/bitwarden-ruby).

                                                                                      There is also a CLI tool (https://fossil.birl.ca/bitwarden-cli/doc/trunk/docs/build/html/index.html)

                                                                                    2. 2

                                                                                      Are you self-hosting it or using the hosted version? I’m somehow always sceptical of having hosted password storage, even if it’s encrypted and everything.

                                                                                      1. 1

                                                                                        If it’s not encrypted, they see your secrets. If it is encrypted, they’re in control of your secrets. In a self-hosted setup, you are in control of your secrets. If encrypted, you might lose them. If synced to a third party (preferably multiple), you still might lose the key. If on scattered paper copies, each in a safe place, you probably won’t. For some failures, write-once (i.e. CD-R) or append-only storage can help, where a clean copy can be reproduced from the pieces.

                                                                                        That’s pretty much my style of doing this. It’s not as easy as 1Password or something, though. There’s the real tradeoff.

                                                                                        1. 2

                                                                                          It is encrypted; here is a link on how the crypto works, in English: https://fossil.birl.ca/bitwarden-cli/doc/trunk/docs/build/html/crypto.html

                                                                                          I agree Bitwarden is not quite as user-friendly (or as secure, if using local vaults) as 1Password, but for an OSS app, it’s definitely at the top of the list on user-friendliness of password managers.

                                                                                          I run a server locally on my LAN, and my phone/etc sync to it. I definitely don’t want my secrets out in the cloud somewhere, no matter how encrypted they might be.

                                                                                    1. 7

                                                                                      I jumped on the K8s train moderately early, and have since jumped right back off owing to the rapidly accelerating unnecessary extra complexity.

                                                                                      I’m sympathetic to the idea that enterprise requires a sometimes bewildering array of configuration options, and that the usual array of systems-screwer-uppers (e.g., Red Hat, IBM) were naturally going to show up to the party to try to increase consultant billing time, but man did that thing get messy and confused in a hurry. It almost makes you sympathize with the go development philosophy.

                                                                                      1. 3

                                                                                        It feels like the K8s train replaced the OpenStack train.

                                                                                        1. 2

                                                                                           Now consider that there are organizations that deploy OpenStack on a hypervisor, then Kubernetes on that OpenStack :)

                                                                                        2. 2

                                                                                           LOL, I couldn’t agree more. “systems-screwer-uppers” - I hadn’t heard that before. Beautiful turn of phrase!