1. 4

    This seems to be a lot of, erm, work. Is there a spin of OpenBSD or another BSD that just kinda… works out of the box?

    1. 2

      OpenBSD-based distros show up periodically but usually disappear. I occasionally Google them to see what’s out there, hoping we eventually get an Ubuntu or Mint (or even a fraction of that, focused on the critical things) based on it. The ones I remember finding were Anonym.OS, OliveBSD (link’s dead), and MirOS. The last one still has a website up.

      1. 1

        TrueOS, née PC-BSD, based upon FreeBSD?

        1. 1

          This is close to what I was looking for. I see Project Trident is a spin-off of that…

          1. 2

            GhostBSD too.

            TrueOS itself was a ready-to-use desktop, but now they’re moving towards just being a fork with some differences (LibreSSL, OpenRC, etc.)

        2. 1

          I’ve been very happy with NixOS for that. I’ve been using it at work for 6+ months, and when I received my new XPS 13 it was so straightforward to get something close to what I’m used to that it would be very hard for me to go back to anything else…

        1. 7

          MTA-STS reintroduces a security problem which we deliberately cut out of DANE when the spec was locked down to forcibly exclude Usages 0 and 1 of the TLSA records (PKIX-TA and PKIX-EE). In effect, those usages said “if you want to send to me with security, you must trust this explicit Certificate Authority, not only for me, but also for all other sites on the Internet, including those not using DANE”. No site has any legitimate business decreeing who you must trust by default for communicating with others, so those Usage modes were excluded.

          The basic problem is “the mail must flow” and when mail doesn’t flow, The Boss comes breathing down the Postmaster’s neck, demanding that mail flow now, even if the other side is the cause of the issues. They’re violating spec by putting IP addresses directly in MX records? Who cares that it’s a spec violation, get the mail flowing. They’re saying that you must trust Symantec as a CA, for all sites? Who cares, do it.

          With DANE Usage modes, we got away from that: the secure DNS tells you either which certificate (DANE-EE, Usage 3) or which CA (DANE-TA, Usage 2) you need to trust for sending mail to just that one domain, with no influence over any other sites you send email to.
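
          To make the record shapes concrete, here is a small sketch (in Python, with made-up key bytes and a hypothetical record name) of building the RDATA for a DANE-EE TLSA record; only the usage/selector/matching-type encoding follows RFC 6698:

```python
import hashlib

def tlsa_rdata_dane_ee(spki_der: bytes) -> str:
    """RDATA for a TLSA record with usage 3 (DANE-EE), selector 1
    (SubjectPublicKeyInfo), matching type 1 (SHA-256), per RFC 6698."""
    return f"3 1 1 {hashlib.sha256(spki_der).hexdigest()}"

# Hypothetical bytes stand in for a real DER-encoded public key; the
# record would be published at e.g. _25._tcp.mail.example.org. IN TLSA
record = tlsa_rdata_dane_ee(b"example-spki-bytes")
```

          A sender that validates this record via DNSSEC then knows exactly which key to expect for that one domain, with no statement about any other site.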

          But now the CA chosen for the HTTPS certificate for the mta-sts host becomes one which must be trusted for that site, and all others, unless developers come up with some mechanism to introduce conditional trust stores. So that one big “national company most people despise and which is known for their technical ineptitude, but we have to be able to send email to them” now can get you back to trusting Malfeasance CAs for everyone.

          The whole security model of the CA system is based upon the fiction that end-users can read CA Certification Practice Statements, evaluate the trade-offs, consider indemnity and so forth, and then decide to trust that CA for themselves without influencing anyone else. (If you think I’m exaggerating here, go back and look at the old claims for the CA system.) This model is fundamentally inappropriate for backend message routers in federated systems.

          In practice, Google etc. don’t want to implement DNSSEC, and so while many smaller operators are deploying DANE and DNSSEC (Viktor Dukhovni puts out monthly stats showing the growth), if you want to offer some security for messages sent to your users from the likes of Google, you have to publish mta-sts records and set up a webserver. We can suck it up and do that.
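
          The MTA-STS side is just a `_mta-sts` TXT record plus a tiny policy file served at `https://mta-sts.<domain>/.well-known/mta-sts.txt` (RFC 8461). A minimal sketch of parsing such a policy (the sample hostnames are invented) might look like:

```python
def parse_mta_sts_policy(text: str) -> dict:
    """Parse the key/value MTA-STS policy file (RFC 8461).
    Repeated 'mx' keys accumulate into a list."""
    policy = {"mx": []}
    for line in text.splitlines():
        if not line.strip():
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "mx":
            policy["mx"].append(value)
        else:
            policy[key] = value
    return policy

sample = """version: STSv1
mode: enforce
mx: mail.example.com
mx: *.example.net
max_age: 604800
"""
policy = parse_mta_sts_policy(sample)
```

          The security of that policy fetch rests entirely on the webserver’s HTTPS certificate, which is where the CA-trust problem described above comes back in.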

          AFAIK there are no plans by any Exim Developer to implement client support for MTA-STS. We’d be doing our users a disservice to actively promote this as anything more than something we have to do to provide crutches for big companies, to keep them from having to figure out DNSSEC.

          1. 1

            This seems to be somewhere between true and not actually a problem? The same situation exists for browsers, where any random site might demand you use some other CA, yet my own efforts to do so have met with limited success. In practice, Chrome and Firefox work out a trusted list, and if you want to be on the web you use one of those CAs.

            Given that Google sponsored this proposal, and receiving email from Gmail is a priority for most sites, who is going to set up their incoming mail server to use a cert that Google won’t trust?

            1. 2

              So our security boils down to “people will only use a cert from a CA which $OneBigCompany trusts and so as long as $OneBigCompany rotates bad CAs out of the trust for their mail platform, everyone else will have to follow suit” ?

              That has a kind of grotesque elegance to it. Completely against how Internet standards normally enforce security, but it will probably work.

              For a while. Right up until Google try to stop trusting a Large CA, while Microsoft and Yahoo continue trusting it, and Google get hit with an anti-trust lawsuit and we discover that we can no longer rotate out CAs.

              1. 1

                Aren’t most MTAs already using your system’s CA store (which is typically Mozilla’s ca_root_nss)?

                1. 1

                  MTAs by default typically don’t verify certs, since there’s no trustworthy, meaningful identifier to assert, and by spec they have to fall back to plaintext, so a verification failure would just terminate TLS and retry in plaintext. That’s why disabling old protocols and ciphers which are still in use is not a forcing function to improve security but counter-productive. Not only are self-signed certs widespread, but anonymous TLS is seen in the wild too.

                  The old approach was to manually configure better behavior by mutual consent or spotting a published policy.

                  It’s only with DANE, and now MTA-STS, that there’s a federated system for changing the default behavior. MTA-STS provides TOFU protection, as long as you trust the CA ecosystem to not mis-issue. DANE provides protection including on first connection, and pins down (either by certificate or public key) either the CA or the certificate for the site. But DANE requires DNSSEC, which is steadily growing but still not predominant.
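
                  The way a published policy flips the default can be sketched as a tiny decision function. This is purely illustrative, not any real MTA’s logic, and it glosses over details like MTA-STS “testing” mode and policy caching:

```python
def delivery_decision(dane_tlsa: bool, mta_sts_enforce: bool,
                      tls_verified: bool) -> str:
    """Illustrative sketch: how a published DANE or MTA-STS policy
    changes SMTP's default 'fall back to plaintext' behavior."""
    if dane_tlsa or mta_sts_enforce:
        # A policy exists: failed TLS verification means queue the
        # message and retry later, never downgrade to plaintext.
        return "deliver" if tls_verified else "defer"
    # No policy: opportunistic TLS; on failure, retry in plaintext.
    return "deliver" if tls_verified else "deliver-plaintext"
```

                  The interesting case is the second one: without a policy, a man-in-the-middle who breaks the TLS handshake simply forces a plaintext retry, which is exactly the downgrade both mechanisms exist to prevent.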

                2. 1

                  I mean, I agree it could have problems, but in practice Google/Chrome have done more to nuke rogue CAs than anybody.

                  I too wish the web trust model were other than it is, but I’d at least like the web and email trust models to be coherent.

            1. 19

              Many programmers push Worse is Better saying it’s profitable and/or inevitable. Terry Davis worked on what he believed was The Right Thing no matter what it cost. He’s dead. The Temple[OS] he gave his life for will live on. Maybe his work or dedication will inspire other people. Hopefully. :)

              1. 15

                From what I have read, he was impossible to work with because he would be absolutely set on things without giving any explanation other than “this is the right way”. A lot of the decisions for TempleOS were not better, but just arbitrary decisions “from god”.

                1. 18

                  I’m an opponent of him in many ways. I overall agree with your assessment using the word “arbitrary decisions from god.” In the past, I would’ve even called him out since I was blunt in a brutally-honest way.

                  His death saddened many Lobsters seemingly more than “it’s sad someone else died.” Maybe there’s something about him that hits home for them. I decided I’d comment on just the good aspects of what he was doing from the perspective of an ex-devout-Christian and current agnostic that respects people at least acting on their principles even when it costs them. That’s all.

                  1. 16

                    The thing that is inspiring about Terry, for me, is that he would uncompromisingly follow his muse. As crazy as he was, he was able to keep his personal integrity and remain unfaltering in the face of criticism. As a developer I like to create things. I miss the days when I had a computer and would just create things for the sake of it, exploring and putting my own flavor on things without worrying about anything else. I miss that, and Terry reminded me of those days.

                    1. 22

                      Hm, it’s probably not intended, but this comment rubs me the wrong way. I must also admit that I have written and deleted multiple attempts at getting my point across. Please also note that I don’t usually handle these subjects in English.

                      The amazing Stella Young has coined the term “inspiration porn”, where people without certain barriers or disabilities use people who have them as inspiration, quoting how “despite their struggles” they achieved something. In reality, it’s much more that these people have no choice. Pretty often, the people doing so had far more ability and opportunity to actually get what they want; they just don’t act on it.

                      We know that Terry’s “personal integrity” was highly influenced by his illness. Yes, he had a “don’t care” mentality, but we don’t know whether he could have afforded to care, even if he had wanted to. Having dealt with people with schizophrenia and other dissociative conditions during both my civil service and in private life, I can definitely say that there’s often another side: the time after an episode, when people try to do damage control and piece the status quo back together. Even if Terry had a rather public life, we may not have seen all of it.

                      I’m not a fan of remote diagnosis, but the fact that TempleOS and his live streams were some kind of refuge, and not only a sign of dedication and devotion, needs to be considered. Also, in some form, they might have been an expression of his illness. Plus probably his new faith. Terry found his niche in which he could exist and, harshly put, survive. In that niche, he was still a person that was insufferable to many. Given his interviews, I think he was very aware of that. And we shouldn’t forget that some of his popularity was for the wrong reasons, e.g. people have used him as a convenient chance to put racist statements out as “see what the deranged TempleOS-guy said” (just search for “Terry Davis Quotes”). He’s rarely quoted for his tech stuff.

                      Terry had a lot of time to invest in his personal projects, but it’s not like he chose that. He was unfit for work and on meds. He wasn’t the kind of person people would take out for a ride or to the pub. We don’t know whether he wished to do any of that, but maybe? He had an untreatable illness that is often given up on once people are considered “stable”. I’m not sure it really didn’t bother him, but he had probably found a way to cope. These conditions are all about finding ways to cope. Given the way he left, he may have lost that way.

                      So, in the end: a lot of could-bes, maybes, and a lot of things intersecting. Lots of things that could be speculated about, but probably shouldn’t be.

                      I appreciate Terry for speaking about his mental state openly and unapologetically, and for reminding people that it exists. But I didn’t enjoy much of his personality. That’s okay. I’m sad about his passing.

                      But seeing a lot of people explaining the one with the other, for example by giving him a pass on many of his behaviours because of his schizophrenia, is also a tough thing to see. Not because I think he shouldn’t get a pass for them, but because they also subtly remove all agency from the person.

                      1. 3

                        Thank you for this comment, it hit a lot of the points I wanted to write but couldn’t get them out. Having gotten my first professional break from a friend, getting to work alongside him and watch schizophrenia slowly take away his – everything – was absolutely heartbreaking.

                        The amazing Stella Young has coined the term “inspiration porn”

                        I had never seen the phrase (or her), but it really encompasses perfectly what upsets me about it.

                        He’s rarely quoted for his tech stuff.

                        http://www.codersnotes.com/notes/a-constructive-look-at-templeos/ is a great attempt to look at the technical work.

                        He had an untreatable illness that is often given up on once people are considered “stable”.

                        Or long before that … the toll it takes to be around someone actively suffering prolonged severe mental illness is astonishing, exhausting, nearly impossible to imagine. Often direct relatives check out of the process.

                        giving him a pass on many of his behaviours because of his schizophrenia is also a tough thing to see. Not because I think he shouldn’t get a pass for them, but because they also subtly remove all agency from the person.

                        I think the disease is what removes agency, not comments about behaviors. Fundamentally that is part of what it does: it steals agency, which is horrifying and brutal. To this day I simply do not know the correct approach to dealing with people suffering severe mental illness; the tool they use to decide whether they should take medicine is itself damaged.

                        1. 3

                          I don’t know enough about mental illness, so I am probably misguided. I can only relate it to myself, where doing anything without coffee is already really hard. It’s obvious that you know more about the subject than I do.

                          Anyways, I like what he did with TempleOS and HolyC. Who would think of using a C-like language as a REPL? It’s way too dangerous! I liked that he made unconventional choices, which allows exploring design spaces that are not necessarily well traveled.

                        2. 6

                          It’s never too late. Pick a tiny project, carve out a small slice of time, work on it a bit, pause it, work again, and so on. You might slowly get back into the habit you enjoyed.

                          1. 1

                            <3 Thanks. I am working on it. Breaking out of bad habits is hard.

                    2. 1

                      Taken from the perspective that perfect is unattainable, Worse is Better really is the only viable way. TempleOS lives on but in how many heads will it do so?

                      1. 3

                        TempleOS lives on but in how many heads will it do so?

                        The value of an idea is not at all related to how fashionable it is.

                        1. 2

                          To support that, just look at all the times people in literature or science did some work, people thought it had no value, and then it became a big thing much later.

                          1. 2

                            The funny thing about cases like that is that the collective memory of being wrong about a big thing gets wiped away immediately once the thing is recognized.

                            We’ve always been at war with Eurasia and all that.

                            1. 1

                              Nice way of putting it. I battle it all the time in discussions about the C language and UNIX OS.

                              1. 5

                                It’s incredibly aggravating. The collective memory is seen as immune to mistakes even as it makes them firsthand.

                                Dangerous territory for people who are interested more in good ideas than the constantly-changing ‘right’ ideas.

                            2. 2

                              Like boolean algebra?

                            3. 2

                              I don’t agree, because if an idea has no exposure it rarely makes its way to implementation (or to me), and stuff gets lost if it’s not fashionable enough.

                            4. 3

                              That’s not true. It’s the way that works most often, in the most situations. The Right Thing does work in niche markets. Erlang and OCaml are examples of the right thing that have a lot of use right now. There are also many examples in the embedded sector where the Worse is Better stuff is usually an extra layer or module that’s optional. There are even hardware products that weren’t in performance-sensitive markets that are selling. That’s not to mention appliance vendors like Oreck or catalogs focusing on “best of” like Hammacher Schlemmer (which is awesome, esp. outdoor stuff!).

                              So, The Right Thing can work if there’s a market or mind share for it. It usually won’t work. My proposal, which someone else wrote up as well, is to do a hybrid where you build something as close to The Right Thing as possible with the viral characteristics of Worse is Better. Alternatively, you can build your product in a modular way with careful APIs so you can incrementally improve its quality and/or security over time if it sells. Most of the money still goes to features and marketing, but some goes into those attributes. The stuff that gets rewritten is the stabilized or otherwise slow-to-change stuff.

                              1. 3

                                Not that I put much stock in “The Right Thing” versus “Worse Is Better” anymore. See: http://dreamsongs.com/Files/worse-is-worse.pdf (written by same author under a pseudonym).

                                That said, I always considered Erlang clearly on the pragmatic, get-it-done, worse-is-better, New Jersey side of the coin, at least according to the tenets described at https://en.wikipedia.org/wiki/Worse_is_better. You code for the “happy path” – you put off / ignore most error handling. You need a little bit of mutable goodness? Don’t worry, just use ETS, we won’t tell anyone. Need even more bit twiddling? Here is a friendly NIF.

                                • Simplicity: language simplicity at its core, complex stuff outsourced to OTP! Abstraction sounds complicated? Let’s just trust programmers to do the right thing and add another outside-the-language tool (dialyzer)!

                                Erlang is a “Mutually Consenting Adult Language” (read: dynamically typed with full term introspection - or more violently - unityped crap with everything in one big union type). – JLOIUS

                                • Correctness: time was known to be broken but simple, up until ERTS 7.0 when time warp was added. Single assignment is good, but don’t worry, ETS is there when you need it for some mutable goodness.
                                • Consistency: the string module uses 1-based indexing, the binary module uses 0-based. Anything that takes needle, haystack might take haystack, needle… check the docs.
                                • Completeness: sure, it would be nice to have decent string handling, but that would be a lot of work; how about a list of bytes, and you deal with it (complete it).
                                1. 1

                                  Sounds like we both agree but are looking from different perspectives; in the end, common sense needs to be used.

                            1. 1

                              Three good overviews and great (and reassuring!) indicators of plausible futures.

                              The check handling smells a little of special-casing, much as map is special-cased today to avoid the need for generics; in this case, it’s avoiding Lisp-style macros for being able to insert code in the current scope. The problems caused by cpp macros have led to an allergy, and it’s certainly possible to abuse them to create monstrosities.

                              So at some level, it’s a very “Go” approach to not support something “insanely powerful” and instead special-case the perceived most-critical use-cases: map[K]V in Go 1, check in Go 2.

                              1. 2

                                Dryly amusing: my .sig for years (a decade or so ago) used to contain a short example of just how zsh does handle NULs in strings.

                                Loosely speaking though, the moment that you’re using the beyond-POSIX features of bash or zsh for anything other than REPL control, that’s a sign that you’re entering technical debt territory and should be rewriting, now that the shell prototype has confirmed what needs to happen and what the general failure modes are.

                                1. 2

                                  I can’t decide if Let’s Encrypt is a godsend or a threat.

                                  On one hand, it lets you support HTTPS for free.
                                  On the other, they are amassing enormous power worldwide.

                                  1. 8

                                    Agreed, they are quickly becoming the only game in town when it comes to TLS certs. Luckily they are a non-profit, so they have more transparency than, say, Google, who took over our email.

                                    It’s awesome that we have easy, free TLS certs, but there shouldn’t be a single provider for such things.

                                    1. 3

                                      Is there anything preventing another (or another ten) free CAs from existing? Let’s Encrypt just showed everyone how, and their protocol isn’t a secret.

                                      1. 6

                                        OpenCA tried for a long time, and I think has pretty much given up by now: https://www.openca.org/ just exists in its own little bubble.

                                        Basically nobody wants to certify you unless you are willing to pay through the nose and are considered friendly to the established way of doing things. LE bought their way in, I’m sure, to get their cert cross-signed, which is how they managed it so “quickly”, and it still took YEARS.

                                        1. 1

                                          Have you ever tried to create a CA?

                                          1. 3

                                            I’ve created lots of CAs, trusted by at most 250 people. :)

                                            Of course it’s not easy to make a new generally-trusted CA — nor would I want it to be. It’s a big complicated expensive thing to do properly. But if you’re willing to do the work, and can arrange the funding, is anything stopping you? I don’t know that browser vendors are against the idea of multiple free CAs.

                                            1. 3

                                              Obviously I was not talking about the technical stuff.

                                              A previous boss of mine explored the matter. He already had the technical staff, but he wanted to become an official authority. It was more or less 2005.

                                              After a while (and a lot of money spent on legal consulting) he gave up.

                                              He said: “it’s easier to open a bank”.

                                              In a sense, it’s reasonable, as European law wants to protect citizens from unsafe organisations.

                                              But, it’s definitely not a technical problem.

                                        2. 1

                                          Luckily they are a non-profit

                                           The Linux Foundation is a 501(c)(6) organization, a business league that is not organized for profit, where no part of the net earnings goes to the benefit of any private shareholder or individual.
                                           The fact that all shareholders benefit from its work without direct economic gain doesn’t mean it has the public good at heart, even less the public good of the whole world.

                                           It sounds a lot like another attempt to centralize the Internet, always around the same center.

                                          It’s awesome that we have easy, free TLS certs, but there shouldn’t be a single provider for such things.

                                          And such certificates protect people from a lot of relatively cheap attacks. That’s why I’m in doubt.

                                          Probably, issuing TLS certificates should be a public service free for each citizen of a state.

                                          1. 3

                                             Oh jeez. Thanks, I didn’t realize it was not a 501(c)(3). When LE was first coming around they talked about being a non-profit, and I just assumed. That’s what happens when I assume.

                                            Proof, so we aren’t just taking @Shamar’s word for it:

                                            Linux Foundation Bylaws: https://www.linuxfoundation.org/bylaws/

                                            Section 2.1 states the 501(c)(6) designation with the IRS.

                                            My point stands, that we do get more transparency this way than we would if they were a private for-profit company, but I agree it’s definitely not ideal.

                                             So you think local cities, counties, states, and countries should get into the TLS cert business? That would be interesting.

                                            1. 5

                                              It’s true the Linux Foundation isn’t a 501(c)(3) but the Linux Foundation doesn’t control Let’s Encrypt, the Internet Security Research Group does. And the ISRG is a 501(c)(3).

                                              So your initial post is correct and Shamar is mistaken.

                                              1. 1

                                                The Linux Foundation will provide general and administrative support services, as well as services related to fundraising, financial management, contract and vendor management, and human resources.

                                                This is from the page linked by @philpennock.

                                                I wonder what is left to do for the Let’s Encrypt staff! :-)

                                                 I’m amused by how easily people forget that organisations are composed of people.

                                                What if Linux Foundation decides to drop its support?
                                                No funds. No finance. No contracts. No human resources.
                                                Oh and no hosting, too.

                                                But hey! I’m mistaken! ;-)

                                                1. 2

                                                  Unless you have inside information on the contract, saying LE depends on the Linux Foundation is pure speculation.

                                                  I can speculate too. Should the Linux Foundation withdraw support, there are plenty of companies and organisations with a vested interest in keeping Let’s Encrypt afloat. They’ll be fine.

                                                  1. 1


                                                    Feel free to think that it’s a philanthropic endeavour!
                                                    I will continue to think it’s a political one.

                                                    The point (and, as I said, I cannot answer it yet) is whether the global risk of a single US organisation being able to break most HTTPS traffic worldwide is worth the benefit of free certificates.

                                                    1. 3

                                                      Any trusted CA can MITM, though, not just the one that issued the certificate. So the problem is (and always has been) much, much worse than that.

                                                      1. 1

                                                        Good point! I stand corrected. :-)

                                                        Still, note how it’s easier for the certificate issuer to go unnoticed.

                                            2. 4

                                              What’s Linux Foundation got to do with it? Let’s Encrypt is run by ISRG, Internet Security Research Group, an organization from the IAB/IETF family if memory serves.

                                              They’re a 501(c)(3).

                                              1. 2

                                                LF provide hosting and support services, yes. Much as I pay AWS to run some things for me, which doesn’t lead to Amazon being in charge. https://letsencrypt.org/2015/04/09/isrg-lf-collaboration.html explains the connection.

                                                1. 1

                                                  Look at the home page, top-right.

                                                  1. 2

                                                    The Linux Foundation provides hosting, fundraising and other services. LetsEncrypt collaborates with them but is run by the ISRG:

                                                    Let’s Encrypt is a free, automated, and open certificate authority brought to you by the non-profit Internet Security Research Group (ISRG).

                                          1. 12

                                            So those who lost their .mcdonalds email addresses are now Old McDonalds?

                                            1. 1

                                              Back in my day it was just called McDonald’s.

                                            1. 6

                                              An IMAP server handles N users and can have M shared folders readable by all of them. Rather than presenting every user with the complete list, each user has a maintained subscription list, and mail clients (used to? mine still does) default to only showing the subscribed folders, with a toggle to see all folders and sub/unsub as desired.

                                              I, for one, actively use this with mail folders. Some lists whose local copies I might occasionally want to delve into, I unsub from: the mail still flows in, I don’t get notified about it, but I can search it locally when I want.

                                              You haven’t even touched on the two vaguely-compatible revisions of the ACL flags and their meanings and how the permissions need to map from one to the other. ;) Nor that for the sake of letting some ancient servers continue to be inefficient, clients are barred by spec from certain behaviors across mailboxes. The IMAP police will tell you how wrong and evil you are for wanting, say, a count of total/new/unread mail across 30 mailboxes. Sometimes you just have to ignore the spec and say “this tool is for competently written IMAP servers” and go ahead and issue a bunch of STATUS commands.
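
                                              Tallying up the results of those per-mailbox STATUS commands is simple enough to sketch (illustrative Python; the response lines and mailbox names are invented, and a real client would read them off the IMAP connection):

```python
import re

def parse_status(line: str):
    """Parse an untagged IMAP STATUS response line such as:
       * STATUS "lists/exim" (MESSAGES 231 UNSEEN 5)"""
    m = re.match(r'\* STATUS "?([^"(]+?)"? \(([^)]*)\)', line)
    if not m:
        raise ValueError(f"not a STATUS response: {line!r}")
    items = m.group(2).split()
    # Pair up attribute names with their integer counts.
    counts = dict(zip(items[::2], (int(n) for n in items[1::2])))
    return m.group(1), counts

# Invented responses, as if one STATUS had been issued per mailbox:
responses = [
    '* STATUS "INBOX" (MESSAGES 120 UNSEEN 3)',
    '* STATUS "lists/exim" (MESSAGES 231 UNSEEN 5)',
]
total_unseen = sum(parse_status(r)[1]["UNSEEN"] for r in responses)
```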

                                              Disambiguating async notifications from the parts of a response to a given command almost requires an organically grown codebase handling the entire history of IMAP, rather than a clean model.

                                              There’s a reason that the IETF Working Group to turn IMAP+SMTP into something which mobile clients could use for sane handling of attachments ended up being called “lemonade”. When life hands you …

                                              1. 14

                                                [full disclosure: I’m the Phil referenced; I’m not an SKS maintainer, but did write various wiki pages and do have patches in the codebase]

                                                The attacks causing disks to fill are problems with specific keys breaking reconciliation and triggering transaction failures in BDB, leading to many GB of disk usage by those unable to get the broken key.

                                                On-disk size has gone from around 6GB to 40+GB in the space of a couple of weeks, and that’s what’s knocked a bunch of SKS systems offline, repeatedly. All the decades of cruft is an order of magnitude less disk space than that caused by a couple of keys designed to break SKS.

                                                Also, Kristian is one of the SKS developers, but is not the original developer. He, like everyone else involved, is a volunteer with a day-job unrelated to SKS.

                                                I’ve been on the SKS devel mailing-list for probably 8 years (guess) and I’ve never seen hostility to the idea that SKS should change or to any reasonable proposal for doing so. I’ve seen various levels of resignation and annoyance at (1) people who propose changes without thinking through how to deal with the fundamental SKS reconciliation algorithm; (2) people who make demands that others do work for them, but never contribute patches themselves. The Almighty Designers who sketch out a non-viable proposal and can’t understand why others aren’t prepared to leap to do the work to make their vision a reality.

                                                In stark contrast, in March Andrew Gallagher posted (thread “SKS apocalypse mitigation”) and took on board the points about algorithm and design issues and himself put in the effort to design something which might work. Haven’t seen code yet, but he’s demonstrated how easy it is to get a productive discussion if you’re willing to take account of engineering design constraints; so many before have instead pouted and stomped their feet and said “well that should be fixed”.

                                                Hockeypuck has been around for a few years; it’s gained a little traction, but is not a silver bullet: it peers by using the SKS reconciliation algorithm and what’s needed is a design approach to change how reconciliation happens, not just a different codebase. SKS itself is GPLv2, Hockeypuck is AGPLv3, both are available for folks to work on and propose changes.

                                                1. 2

I added the SKS apocalypse mitigation thread to the article so people can read what’s going on.

                                                  1. 1

Thank you for the reply; I have added an edit about why the servers have gone offline. Could you send me the link to Andrew Gallagher’s thread? I would be interested in reading it. Edit: I found the link.

                                                    1. 3

                                                      I like how we get the update that you found the link, but not the link itself 😂

                                                    2. 1

Thanks. As a user, even though I enjoyed reading the post and am aware of the issues, I always like to hear/read the other side of the story/argument.

                                                    1. 7

                                                      Yeah, I know someone who runs a keyserver and they are getting absolutely sick of responding to the GDPR troll emails.

Love the idea of using ActivityPub (the same technology involved in Mastodon) for keyservers. That’s really smart!

                                                      1. 16

                                                        Offtopic: Excuse me.

I think it depends on some conditions, so not everybody is going to see this every time. But when I click on Medium links I tend to get this huge dialog box come up over the entire page, saying something about registering. It’s really annoying. I wish we could host articles somewhere that doesn’t do this.

                                                        My opinion is that links should be links to some content. Not links to some kind of annoyware that I have to click past to get to the real article.

                                                        1. 11

                                                          Use the cached link for Medium articles. It doesn’t have the popup. Just the content.

                                                          1. 1

Could you give an example? That sounds like a pleasant improvement, but I don’t know exactly what you mean by a cached link.

                                                            1. 3

There is a ‘cached’ link under each article title on lobste.rs.

                                                          2. 7

I started running uMatrix and added rules to block all 1st-party JS by default. It does take a while to whitelist things, yes, but it’s amazing when you start to see how many sites use JavaScript for stupid shit. Imgur requires JavaScript to view images! So do all Squarespace sites (it’s for those fancy hover-over zoom boxes).

                                                            As a nice side effect, I rarely ever get paywall modals. If the article doesn’t show, I typically plug it into archive.is rather than enable javascript when I shouldn’t have to.

                                                            1. 2

                                                              I do this as well, but with Medium it’s a choice between blocking the pop-up and getting to see the article images.

                                                              1. 6

I think if you check the ‘spoof &lt;noscript&gt; tags’ option in uMatrix then you’ll be able to see the images.

                                                                1. 1

                                                                  Nice trick, thanks!

                                                            2. 6

                                                              How timely! Someone at the office just shared this with me today: http://makemediumreadable.com

                                                              1. 4

                                                                From what I can see, the popup is just a begging bowl, there’s actually no paywall or regwall involved.

                                                                I just click the little X in the top right corner of the popup.

                                                                But I do think that anyone who likes to blog more than a couple of times a year should just get a domain, a VPS and some blog software. It helps decentralization.

                                                                1. 1

                                                                  And I find that I can’t scroll down.

                                                                  1. 3

                                                                    I use the kill sticky bookmarklet to dismiss overlays such as the one on medium.com. And yes, then I have to refresh the page to get the scroll to work again.

                                                                    On other paywall sites when I can’t scroll, (perhaps because I removed some paywall overlay to get at the content below,) I’m able to restore scrolling by finding the overflow-x CSS property and altering or removing it. …Though, that didn’t work for me just now on medium.com.

                                                                    1. 1

                                                                      Actually, it’s the overflow: hidden; CSS that I remove to get pages to scroll after removing some sticky div!

                                                                2. 3

                                                                  What is the keyserver’s privacy policy?

                                                                  1. 5

                                                                    I run an SKS keyserver, have some patches in the codebase, wrote the operations documents in the wiki, etc.

                                                                    Each keyserver is run by volunteers, peering with each other to exchange keys. The design was based around “protection against government attempts to censor keys”, dating from the first crypto wars. They’re immutable append-only logs, and the design approach is probably about dead. Each keyserver operator has their own policies.

                                                                    I am a US citizen, living in the USA, with a keyserver hosted in the USA. My server’s privacy statement is at https://sks.spodhuis.org/#privacy but that does not cover anyone else running keyservers. [update: I’ve taken my keyserver down, copy/paste of former privacy policy at: https://gist.github.com/philpennock/0635864d34a323aa366b0c30c7360972 ]

                                                                    You don’t know who is running keyservers. It’s “highly likely” that at least one nation has some acronym agency running one, at some kind of arms-length distance: it’s an easy and cheap way to get metadata about who wants to communicate privately with whom, where you get the logs because folks choose to send traffic to you as a service operator. I went into a little more depth on this over at http://www.openwall.com/lists/oss-security/2017/12/10/1

                                                                    1. 5

                                                                      Thanks for this info.

                                                                      Fundamentally, GDPR is about giving the right to individuals to censor content related to themselves.

A system set out to thwart any censorship will fall afoul of GDPR, based on this interpretation.

However, people who use a keyserver are presumably A-OK with associating their info with an append-only immutable system. Sadly, GDPR doesn’t really take this use case into account (I think; I am not a lawyer).

                                                                      I think what’s important to note about GDPR is that there’s an authority in each EU country that’s responsible for handling complaints. Someone might try to troll keyserver sites by attempting to remove their info, but they will have to make their case to this authority. Hopefully this authority will read the rules of the keyserver and decide that the complainant has no real case based on the stated goals of the keyserver site… or they’ll take this as a golden opportunity to kneecap (part of) secure communications.

                                                                      I still think GDPR in general is a good idea - it treats personal info as toxic waste that has to be handled carefully, not as a valuable commodity to be sold to the highest bidder. Unfortunately it will cause damage in edge cases, like this.

                                                                      1. 3

gerikson, you make really good points there about the GDPR.

Consenting people are not entirely the focus of this, though; it’s about current and potential abuse of the servers, and about people who have not consented to their information being posted and who have no way to get it removed.

The Supervisory Authorities won’t ignore that; this is why the keyservers need to change, to prevent further abuse and their own extinction.

They also won’t make an exception for this case, just like the recent ICANN case, where the requirement to store your information publicly with your domain was rejected outright. The keyservers are not necessary to the functioning of the keys you upload, and a big part of the GDPR is processing data only as long as necessary.

Someone recently made a point about the term non-repudiation. In digital security, non-repudiation means:

A service that provides proof of the integrity and origin of data.
An authentication that can be asserted to be genuine with high assurance.

Keyservers don’t do this! You can have the same email address as anyone else, and even the maintainers and creator of the SKS keyservers state this, recommending that you check through other means (such as telephone or in person) to see whether keys are what they appear to be.

I also don’t think this is an edge case; I think it’s a wake-up call to rethink the design of the software and catch up with the rest of the world, quickly.

Lastly, I don’t approve of trolling; if you’re doing it just for the sake of doing it, DON’T. If you genuinely feel the need to submit a “right to erasure” request because you did not consent to having your data published, please do it.

                                                                      2. 2

Thank you for the link: http://www.openwall.com/lists/oss-security/2017/12/10/1. It’s a fantastic read and makes some really good points.

It’s easy for anyone to get hold of recent dumps from the SKS servers; just yesterday I hunted through a recent dump of 5 million+ keys looking for interesting data. I will be writing an article about it soon.

                                                                    2. 3

I totally agree; it has been bothering me as well, and I am considering starting up my own self-hosted blog. I also don’t like Medium’s method of charging for access to people’s stories without giving them anything.

                                                                      1. 3

                                                                        I’m thinking of setting up a blog platform, like Medium, but totally free of bullshit for both the readers and the writers. Though the authors pay a small fee to host their blog (it’s a personal website/blog engine, as opposed to Medium which is much more public and community-like).

                                                                        If that could be something that interests you, let me know and I’ll let you know :)

                                                                        1. 2

                                                                          lmao you don’t even get paid when someone has to pay for your article?

                                                                          1. 1

Correction: turns out you can get paid if you sign up for their partner program, but I think it requires approval n shit.

                                                                          2. 2

                                                                            hey @pushcx, is there a feature where we can prune a comment branch and graft it on to another branch? asking for a friend. Certainly not a high priority feature.

                                                                            1. 3

                                                                              No, but it’s on my list of potential features to consider when Lobsters gets several times the comments it does now. For now the ‘off-topic’ votes do OK at prompting people to start new top-level threads, but I feel like I’m seeing a slow increase in threads where promoting a branch to a top-level comment would be useful enough to justify the disruption.

                                                                        1. 8

                                                                          The less successful person didn’t write much code, and he had excellent reasons why: I’m too busy! The person who made the request can’t wait! I have 100 other things to do today! Nobody’s allocating time for me to write code!

                                                                          Google has this fixed. SREs can only spend a maximum of 50% of time doing manual admin work.

                                                                          1. 6

Certainly wasn’t true when I was an SRE at Google. The person on my team who documented everything he did manually (such that the first automated system for doing that work was called “Electric [hisname]”) was penalized for rolling up his sleeves to do the grotty work and logging things as Tom advocates here, while others, who were all about parroting rules on how much could be done in shell and the like, were studiously nowhere to be found in the aftermath of an Emergency Power Off.

                                                                            I know which people I’d rather have on my team again. Tom’s article is excellent.

                                                                            1. 2

Ah, to be clear: I have never worked at Google, I just read that from an official source. (I think it was Google’s book on SRE.)

                                                                          1. 5

Slide 42, figure 6’s representation of a break statement (Greenfoot, 2006), is really intriguing as an example of using color and graphics to make it much easier to spot potential problems cleanly. Has anyone seen a plugin for Vim which can do something like this, either for C or for any langserver backend? While C macros might make it awkward, there are some codebases (and a few security-critical bugs) which would benefit from being able to have this sort of view while looking at them.

                                                                            1. 3
                                                                              1. Deploy DNSSEC signing for your zones. It won’t protect everyone, but it rewards the behavior you want to reward: systems behind verifying resolvers will be protected from these shenanigans. If PKIX issuers aren’t using validating resolvers before issuing DNS-proven certs, then that’s a security failure on their part.
                                                                              2. If you’re in a position to do so, ask your network provider where they are on using ROA verification for routes, as part of RPKI security. My colo box is hosted with people who’ve been filtering on this basis for a few years now. This is no longer new functionality and we’re reaching the point where we’re in danger of the lawyers getting involved to argue gross negligence if operators aren’t filtering out provably bad route advertisements.
                                                                              1. 1

                                                                                Please stop. DNSSEC is not a solution. Let it die already.


                                                                                “Reminder: you could publish the DNSSEC root RSA secret keys on Pastebin and nothing on the Internet that matters would break.”

                                                                                edit: oh I forgot about this gem

                                                                                “Overlooking some DNSSEC outages because they’re so frequent: By default, Unbound ignores for up to 24 hours any DNSSEC failure resulting from expired RRSIGs.”

                                                                                1. 1

                                                                                  Let what die? DNSSEC? It’s at over 50% of all .NL domains and generally on an upwards trend. The number of mail-systems being protected with DANE (TLSA records in DNSSEC-signed domains) is ever-increasing, since the only alternative for MX delivery is MTA-STS (spec still in draft, has gone through incompatible changes, and bakes in the same failure modes which led us to reject TLSA Usages 0 and 1 for DANE/SMTP).

                                                                                  Every Internet technology ever has led to outages in the early days of deployment, until people figured out how to make tools more robust … and even then has led to reductions in the frequency of outages, not to eliminating them. The questions are “what’s the failure mode?” and “will things improve?”. We see enough outages on a per-domain basis caused by inept management of DNS itself, without DNSSEC, that I don’t see DNSSEC as moving the needle on outage frequency here.

                                                                                  I do see more folks outsourcing their DNS management (eg, AWS Route 53, CloudFlare) and as we’ve seen from CloudFlare’s DNSSEC support, this pays off in getting professionally managed DNS+DNSSEC by people who understand it.

                                                                                  The Internet is full of sites which enumerate mistakes and try to say that the existence of mistakes by individuals means the technology should die. Finding one website which does this for DNSSEC does not mean that DNSSEC is dying.

                                                                                  1. 2

                                                                                    Oh, and I agree that DNSSEC is ugly and problematic, but for verifying authenticity of name resolution, it’s the only solution we’ve got today. So today, it’s what we deploy. Let’s not abandon something which works, just because it’s not perfect.

                                                                                    1. 1

                                                                                      I am very annoyed because I wrote a 3 page rebuttal to every point and accidentally force closed my browser when switching apps.

                                                                                      tl;dr it’s a dead RFC from 1997. Its usage is measurably on the decline. We peaked at ~1% of the important domains (net com and org).

                                                                                      They tried to use DANE for IRC and nobody wanted it. They removed DANE code from Irssi.

                                                                                      DANE for SMTP is a poor argument with the existence of LetsEncrypt. This argument is so tired I don’t know why it persists.

                                                                                      If you can convince Green, Ptacek, Bernstein, or Marlinspike that DNSSEC is worth having I will rescind my statements. But it’s not going to happen. It’s awful, adds vulnerabilities to DNS resolvers, and has too many failure modes which are completely opaque to end users/applications.

                                                                                      DNSSEC is basically Wayne’s ex-girlfriend Stacy. “It’s over. Get the net!”


                                                                                      If you want security here’s what you do: you use dnscrypt or equivalent to a large provider like OpenDNS. They have the means to actively monitor for cache poisoning and other attacks worldwide in real-time. Voila, you know your DNS isn’t being tampered with.

                                                                                      1. 1

                                                                                        For browsers/HTTPS, we have a semi-working model now without DNSSEC. I can’t speak authoritatively to the trade-offs which apply there.

                                                                                        SMTP I can speak authoritatively on: I added the initial DNSSEC support to Exim (although Jeremy Harris later picked it up and did the bulk of the work to take it to full DANE support) and talked extensively with Viktor Dukhovni of Postfix on the DANE spec, refining the text which became the RFCs.

                                                                                        For Submission/Submissions service, or smarthost identity configuration, Let’s Encrypt is a sufficient answer.

                                                                                        For MX delivery, LE buys you nothing. For TLS security, you need an identity which you can verify. That identity can not be derived by insecure means. With email to MX, that means the only verifiable identity is the domain. The mail-domain is rarely in the certificate SAN list. To fix this, you need a way to map from the domain to a host identity, securely. Further, it needs to be done in such a way that one external domain important to your organization (“when the CEO starts shouting about mail not going through”) can’t force domains into your trust-store for use with all other domains. This is why DANE for SMTP prohibits TLSA Usage fields 0 and 1. This is one of the severe flaws in MTA-STS.

                                                                                        My current recommendation for MTA security for MX hosts is to get a Let’s Encrypt cert, setup DANE referencing that, and also set up the MTA-STS publishing side to let senders such as Gmail work, IF you’re willing to keep tracking the MTA-STS drafts for further breaking changes. This is what I set up for exim.org and for some of my own domains.
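As a sketch of the DANE half of that setup: a common choice is a "3 1 1" TLSA record, meaning certificate usage 3 (DANE-EE), selector 1 (SubjectPublicKeyInfo), matching type 1 (SHA-256). The placeholder bytes below stand in for the DER-encoded SubjectPublicKeyInfo you would extract from the Let's Encrypt certificate; they are not a real key:

```python
import hashlib

# Compute the RDATA for a "3 1 1" TLSA record: SHA-256 digest of the
# DER-encoded SubjectPublicKeyInfo, with the usage/selector/matching-type
# fields prepended.
def tlsa_3_1_1(spki_der: bytes) -> str:
    return "3 1 1 " + hashlib.sha256(spki_der).hexdigest()

# The record would be published at _25._tcp.<mx-hostname> in the signed zone.
print(tlsa_3_1_1(b"\x30\x82placeholder-not-a-real-spki"))
```

Pinning the SPKI rather than the whole certificate means the record survives Let's Encrypt renewals as long as the key pair is reused.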

                                                                                        Thus Let’s Encrypt solves absolutely nothing for MX SMTP.

                                                                                        “If you want security” … I say that you first need to break down what you mean by “security” and who “you” is. When it comes to DNS, there’s authenticity and there’s privacy. dnscrypt provides privacy between you and whomever you talk to, as does DNS-over-HTTPS and DNS-over-TLS. dnscrypt does not provide any protection against tampering, whether at the resolver provider (under court order) or between them and the upstream.

If “you” is an end-user or home operator, and you carefully pick a DNS resolver which doesn’t actively tamper with the results for profit (and where you trust the jurisdiction, etc.), then using an external provider with very-local-to-you resolvers, or with client-subnet support, pays off, gets you fast easy wins, and is usually worth doing. If you pick one which does DNSSEC validation for you and has privacy/integrity between you and them, then you’re in a strong position. Google, CloudFlare, censurfridns, Verisign Labs: these are decent choices.

                                                                                        If you’re a mail-server operator with bulk DNS traffic, that’s less tenable. There’s a reason that for decades now it’s been best practice for MTA operators for domains handling any non-trivial traffic to have a local resolver, either on-subnet or on-host. Thus the large external providers don’t help.

                                                                                1. -1

                                                                                  Urgh, what a horrible page. @johnblood this page is a clusterfuck, hope your ad revenue is nice.

I’m not convinced. The cost of the extension boards is insanely high given what it would cost to just shove it all on one board, and two of them don’t even have active components: you need to purchase the actual WNIC or SFP module on top, not to mention antennas for wifi.

The super-professionally-produced video makes it seem even more like crowdfunding fodder to make a buck.

                                                                                  1. 3

cz.nic appears to be a non-profit; I’m not familiar with Czech law, but section 46 of their statutes prohibits disbursements to their member base, and it’s an association of legal entities, not a share-based structure. The statutes: https://www.nic.cz/files/nic/doc/Stanovy__20170701_AJ.pdf

                                                                                    So, no “making a buck”; I believe that the people involved are all salaried. cz.nic have been doing good solid open source software work for many years. It honestly looked to me like a fun video put together in the spirit of crowd-funding, relying upon “humor” and editing away anyone going “uhm” or “er”.

                                                                                    I backed the Turris Omnia and am Very Happy with the resulting product, as it’s by far the best home router I’ve owned. It’s things like “actually pushes out software updates with security fixes, in good time” which help keep it that way. So I backed the Mox too, for more ad-hoc use.

                                                                                    1. 0

                                                                                      Thank you for your kind words.

                                                                                    1. 6

                                                                                      So, Perl’s Parrot VM system was just ahead of its time?

                                                                                      1. 15

                                                                                        Given how much confusion is created by systems which do allow “foo.bar” and “foobar” to be different email addresses in the same domain, for different users, Gmail saying “we won’t allow that” is wonderful. Given how often people don’t correctly write down dots or whatever when copying email addresses, Gmail’s behavior is also good for getting the mail to just flow.

                                                                                        Saying Netflix shouldn’t have to have insider knowledge misses that (1) they made assumptions which required that insider knowledge, and (2) most sites make insider assumptions. Continuing with 2 for now: every site is allowed to have whatever rules they want for the left-hand-side (LHS), and per the standards the left-hand-side is case-sensitive. If I want “bar@” and “bAr@” to be different email addresses, that’s my business. Any email handling system which generally loses case of the LHS is, technically, broken. The federation used by email allows whatever systems are responsible for a given domain to have complete control over the semantics of the LHS.

                                                                                        In practice, the most widely deployed LHS canonicalization is almost certainly “be case-insensitive”, followed by “have sub-addresses with + or perhaps -”. IMO, the Gmail dot handling is incredibly sane and everyone running mail-systems should seriously consider it.
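A minimal sketch of that kind of LHS canonicalization (the exact rules here are my assumptions about common practice — lowercase, drop dots, strip a “+tag” sub-address — not anything mandated by a standard):

```python
def canonicalize(address: str) -> str:
    """Gmail-style canonicalization sketch: case-insensitive LHS,
    dots ignored, '+tag' sub-address stripped. Illustrative only;
    every domain is free to define its own LHS semantics."""
    local, _, domain = address.rpartition("@")
    local = local.split("+", 1)[0]   # drop the sub-address tag
    local = local.replace(".", "")   # dots don't distinguish users
    return f"{local.lower()}@{domain.lower()}"

assert canonicalize("Foo.Bar+news@Gmail.com") == "foobar@gmail.com"
```

A site using email addresses as authentication identifiers would need to apply the recipient domain’s actual rules, which is exactly the insider knowledge problem described above.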

                                                                                        If I went out filing bugs against systems which made the case-insensitive assumption, then I’d be dismissed as a crazy person. In practice we (almost) all accept that some assumptions will be made. If you want to be safe, or not have to make assumptions, then validate the email addresses used at signup.

                                                                                        A friend had some issues with his wife because four different people had signed up for Ashley Madison using his email address (first-name @ gmail.com) and A-M never validated. Perhaps the potential consequences here highlight why not validating email addresses at sign-up or email address change should be interpreted (legally) as reckless negligence. If you’re going to decide that you don’t need to validate, then you assume responsibility for knowing about the canonicalization performed by every recipient domain. So the author of this piece is flat wrong: the moment Netflix decided to not bother validating email addresses, while also using email addresses as authentication identifiers, they assumed complete responsibility for the security consequences of having correct information about canonicalization used in every domain, to keep their authentication identifiers distinct.

                                                                                        (disclosure: as well as the hat, I’m also a former Gmail SRE, but had nothing to do with this feature)

                                                                                        1. 1

                                                                                          Why not just disallow . in email addresses?

                                                                                          1. 1

                                                                                            About 40 years too late to decide to start restricting what can be on the LHS. That’s entirely up to the domain. You can have empty strings, SQL injection attacks, path attacks and more, because you can have fairly arbitrary (length-restricted) strings, if you use double-quotes. The LHS without quotes is an optimization for simple cases.

                                                                                            Given that there exist today domains where the dot matters, and fred.bloggs != fredbloggs but instead they belong to different people, any site which disallows dots at sign-up will cut off legitimate users.

                                                                                            Just validate.
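For the record, here are a few examples of the “fairly arbitrary strings” that the quoted-string form of the local part makes syntactically legal (the domains are placeholders for illustration; whether anyone accepts delivery is, again, the domain’s business):

```python
# Syntactically valid addresses under RFC 5321's quoted-string
# local-part form; example.com is a placeholder domain.
weird_but_legal = [
    '"fred bloggs"@example.com',         # embedded space, quoted
    '"much.more.unusual"@example.com',   # dots inside quotes
    '"x;DROP TABLE users"@example.com',  # the "SQL injection" case
]
# The unquoted dot-atom form (fred.bloggs@example.com) is just the
# optimization for the simple cases.
```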

                                                                                        1. 4

                                                                                          Too fast and responsive to be legitimately phpBB.

                                                                                          1. 1

                                                                                            I can’t help but wonder at having does-added methods which override the self-same method and using this to implement a state machine.

                                                                                            1. 15

                                                                                              There are various command-line concoctions such as password-store which stores PGP-encrypted files in a Git repo, but that doesn’t improve my situation over 1Password. I would still have to manually look up passwords and copy them to the clipboard. These command-line packages also lack mobile apps and syncing.

                                                                                              That’s not completely true. I use pass with syncing via a private Git repository; there’s a Firefox plugin with autofill support, and good mobile clients for both Android and iOS. It’s the best password management system I’ve used (I was a 1Password user for about 3 years before switching). Being able to do git log to see password history for a website is awesome. Bonus point: the OTP plugin works like a charm.
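The workflow described above looks roughly like this (a sketch; the key ID, entry names, and remote URL are made up for illustration):

```shell
pass init "GPG-KEY-ID"                # encrypt entries to this GPG key
pass git init                         # put the store under version control
pass git remote add origin git@example.com:me/pass-store.git
pass insert web/example.com           # prompts for the new password
pass show -c web/example.com          # copy to clipboard (auto-clears)
pass git log -- web/example.com.gpg   # per-entry password history
pass git push origin master           # sync to the private remote
```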

                                                                                              1. 2

                                                                                                The major problem with pass is that the mobile clients don’t support encrypted git remotes, which is a huge problem: anyone with read access to the remote repo can see what your accounts are.

                                                                                                1. 4

                                                                                                  So put the remote on a system you physically control ;)

                                                                                                  1. -3

                                                                                                    Given that git is distributed and makes it very easy to push from any client to any remote, it’s a pretty safe assumption that one day you’ll accidentally push to another remote, and realize shortly afterwards that this was A Bad Plan.

                                                                                                    1. 14

                                                                                                      … it’s pretty hard to accidentally push to a remote you never set up…

                                                                                              1. 7

                                                                                                The key to this work is throwing out old assumptions and requiring explicit guest support.

                                                                                                Historically, VM systems “had” to be able to boot guests which didn’t need to know they were in a VM, but the guest could optionally implement dedicated “hardware” drivers to have more optimized I/O than through emulated devices. Still, you could take the install media for various OSes and install them all.

                                                                                                This project requires explicit guest support for basic boot-up. Which is great, if your model is around managing everything in the guest and you can make that demand. They reap major benefits from doing so, and there’s no reason everyone creating images for deployment should be held back just because the target system is also trying to be compatible with stuff which you’ll never deploy. But it’s very much a case of needing the guest to be compiled explicitly for the target hosting platform.

                                                                                                Since the competition is structured containerization, with something like a Dockerfile defining entry-points, environmental dependencies, etc., this is no different. It’s a great trade-off. But it is made possible by the target audience having moved and adapted to a world of on-demand machine instances and container workloads.