1. 3

      I wish more folks involved in packaging for Linux distros were familiar with Homebrew. Obviously not everything Homebrew does is applicable to Debian, but the ability for folks to show up and easily contribute new versions with a simple PR is game changing. Last night I noticed that the python-paramiko package in Debian is severely out of date, but the thought of learning the various intricacies of contributing to Debian well enough to update it turns me right off.

      1. 14

        As an upstream dev of code that’s packaged with Homebrew, I have noticed that Homebrew is by far the sloppiest of the packagers; there is basically no QA, and often the packagers don’t even read the instructions I’ve provided for them. I’ve never tried it myself, but it’s caused me a lot of headaches all the same.

        1. 2

          I just looked at the packaging information for paramiko and I have more questions than before:

          How does this setup even work in case of a security vulnerability?

          1. 4

            Unfortunately, Debian still has a strong ownership model. Unless a package is team-maintained, an unwilling maintainer can stall any effort to update a package, sometimes actively, sometimes passively. In the particular case of Paramiko, the maintainer has very strong opinions on this matter (I know that first-hand).

            1. 1

              Strong opinions are not necessarily bad. Does he believe paramiko should not be updated?

            2. 3

              How does this setup even work in case of a security vulnerability?

              Bugs tagged as security problems (esp. if also tagged with a CVE) get extra attention from the security team. How that plays out depends on the package/bug, but it can range from someone from the security team prodding the maintainer, all the way to directly uploading a fix themselves (as a non-maintainer upload).

              But yeah, in general most Debian packages have 1-2 maintainers, which can be a bottleneck if the maintainer loses interest or gets busy. For a package with a lot of interest, such a maintainer will end up replaced by someone else. A more obscure package might just languish unmaintained until someone removes it from Debian for having unfixed major issues.

          1. 7

            Neat idea! One question though: How do you handle renewals? In my experience, postgresql (9.x at least) can only re-read the certificate upon a server restart, not upon mere reloads. Therefore, all connections are interrupted when the certificate is changed. With letsencrypt, this will happen more frequently - did you find a way around this?

            1. 5

              If you put nginx in front as a reverse TCP proxy, Postgres won’t need to know about TLS at all and nginx already has fancy reload capability.
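
              A minimal sketch of that idea, assuming an nginx built with the stream and stream_ssl modules; the hostname, ports, and paths are illustrative, not anything from this thread:

                  stream {
                      server {
                          listen 5432 ssl;                  # clients connect here over TLS
                          ssl_certificate     /etc/letsencrypt/live/db.example.com/fullchain.pem;
                          ssl_certificate_key /etc/letsencrypt/live/db.example.com/privkey.pem;
                          proxy_pass 127.0.0.1:15432;       # plaintext PostgreSQL behind the proxy
                      }
                  }

              After a renewal, nginx -s reload picks up the new certificate gracefully. One caveat worth flagging: stock PostgreSQL clients negotiate TLS in-band with an SSLRequest message rather than starting TLS immediately, so check that your client can speak TLS from the first byte before relying on a plain TLS-terminating proxy like this.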

              1. 3

                I was thinking about that too - and it made me also wonder whether using OpenResty along with a judicious combination of stream-lua-nginx-module and lua-resty-letsencrypt might let you do the whole thing in nginx, including automatic AOT cert updates as well as fancy reloads, without postgres needing to know anything about it at all (even if some tweaking of resty-letsencrypt might be needed).

                1. 1

                  That’s funny, I was just talking to someone who was having problems with “reload” not picking up certificates in nginx. Can you confirm nginx doesn’t require a restart?

                  1. 1

                    Hmm, I wonder if they’re not sending the SIGHUP to the right process; the reload signal has to reach the nginx master process (nginx -s reload takes care of that). It does work when configured correctly.

                2. 2

                  I’ve run into this issue as well with PostgreSQL deployments using an internal CA that issued short-lived certs.

                  Does anyone know if the upstream PostgreSQL devs are aware of the issue?

                  1. 19

                    This is fixed in PG 10. “This allows SSL to be reconfigured without a server restart, by using pg_ctl reload, SELECT pg_reload_conf(), or sending a SIGHUP signal. However, reloading the SSL configuration does not work if the server’s SSL key requires a passphrase, as there is no way to re-prompt for the passphrase. The original configuration will apply for the life of the postmaster in that case.” from https://www.postgresql.org/docs/current/static/release-10.html
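
                    (So on PG 10 and later, renewal can be wired straight to a reload with no restart and no dropped connections; something like certbot’s --deploy-hook running psql -c 'SELECT pg_reload_conf();' should be all that’s needed, though I haven’t verified that exact invocation.)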

                1. 24

                  “There are a lot of CAs and therefore there is no security in the TLS CA model” is such a worn out trope.

                  The Mozilla and Google CA teams work tirelessly to improve standards for CAs and expand technical enforcement. We remove CAs determined to be negligent and raise the bar for the rest. There seems to be an underlying implication that there are trusted CAs who will happily issue you a google.com certificate: NO. Any CA discovered to be doing something like this gets removed with incredible haste.

                  If they’re really concerned about the CA ecosystem, requiring Signed Certificate Timestamps (part of the Certificate Transparency ecosystem) for TLS connections provides evidence that the certificate is publicly auditable, making it possible to detect attacks.

                  Finally, TLS provides good defense in depth against things like CVE-2016-1252.

                  1. 13

                    Any CA discovered to be doing something like this gets removed with incredible haste.

                    WoSign got dropped by Mozilla and Google last year after it came to light that they were issuing fraudulent certificates, but afaict there was a gap of unknown duration between when they started allowing fraudulent certs to be issued and when it was discovered that they were doing so. And it still took over six months before the certificate was phased out; I wouldn’t call that “incredible haste”.

                    1. 2

                      I’m not sure where that process stands, but if certificate transparency becomes more standard, I think it would help with this problem.

                    2. 5

                      TLS provides good defense in depth against things like CVE-2016-1252.

                      Defense in depth can do more harm than good if it blurs where the actual security boundaries are. It might be better to distribute packages in a way that makes it very clear they’re untrusted than to additionally verify the packages if that additional verification doesn’t actually form a hard security boundary (e.g. rsync mirrors also exist, and while rsync hosts might use some kind of certification, it’s unlikely to follow the same standards as HTTPS; so a developer who assumed that packages fed into apt had already been validated by the TLS CA ecosystem would be dangerously misled).

                      1. 5

                        This is partly why browsers are trying to move from https being labeled “secure” to http being labeled “insecure” and displaying no specific indicators for https.

                        1. 1

                          e.g. rsync mirrors also exist and while rsync hosts might use some kind of certification, it’s unlikely to follow the same standards as HTTPS

                          If you have this additional complexity in the supply chain then you are going to need additional measures. At the same time, does this functionality provide enough value to the whole ecosystem to exist by default?

                          1. 5

                            If you have this additional complexity in the supply chain then you are going to need additional measures.

                            Only if you need the measures at all. Does GPG signing provide an adequate guarantee of package integrity on its own? IMO it does, and our efforts would be better spent on improving the existing security boundary (e.g. by auditing all the apt code that runs before signature verification) than on trying to introduce “defence in depth”.

                            At the same time, does this functionality provide enough value to the whole ecosystem to exist by default?

                            Some kind of alternative to HTTPS for obtaining packages is vital, given how easy it is to break your TLS libraries on a linux system through relatively minor sysadmin mistakes.

                      1. 1

                        Is this the same HACL X25519 implementation as NSS uses, or are there two distinct implementations on the same formal verification tooling?

                        Does it make sense to have a cryptography library (in the style of OpenSSL’s libcrypto) to ease distribution and use of formally verified algorithm implementations?

                        1. 2

                          Is this the same HACL X25519 implementation as NSS uses, or are there two distinct implementations on the same formal verification tooling?

                          NSS and I both used the same project to generate our implementations, but our C code is slightly different, due to kernel constraints. For example, the kernel doesn’t like C99 declarations-after-statements, so a special option must be passed to HACL* to have output that is suitable for C89.
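
                          For anyone who hasn’t run into that constraint, a minimal illustration (my own sketch, not HACL* output):

                              /* C99 permits declarations after statements and in for-loops: */
                              int sum_c99(const int *xs, int n) {
                                  if (n < 0)
                                      return 0;
                                  int total = 0;              /* declaration after a statement: C99 only */
                                  for (int i = 0; i < n; i++) /* loop-scoped declaration: also C99 only */
                                      total += xs[i];
                                  return total;
                              }

                              /* C89 wants every declaration at the top of the block: */
                              int sum_c89(const int *xs, int n) {
                                  int total;
                                  int i;
                                  if (n < 0)
                                      return 0;
                                  total = 0;
                                  for (i = 0; i < n; i++)
                                      total += xs[i];
                                  return total;
                              }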

                          Does it make sense to have a cryptography library (in the style of OpenSSL’s libcrypto) to ease distribution and use of formally verified algorithm implementations?

                          I’m pretty sure this is what HACL* is trying to do, in general. They have a lot of different primitives verified and available. Impressive project.

                          1. 2

                            They spent who knows what amount of time and money specifying and verifying this stuff down to imperative code. Then, they casually mention this afterward:

                            “For convenience, C code for our verified primitives has already been extracted and is available in snapshots/hacl-c. To build the library, you need a modern C compiler.”

                            McCoy: “You want me to get trustworthy assembly by running verified C through a modern C compiler? My God, man, I’m not Xavier Leroy!”

                            1. 3

                              INRIA also makes compcert – http://compcert.inria.fr – which you’ve probably heard about. HACL* appears to have a mode meant for targeting the compcert compiler. So you can do it this way if you’re feeling anxious.

                              1. 2

                                Oh yeah. I was mostly kidding. Note that CompCert is proprietary software; they currently allow non-commercial usage of it. Depending on one’s circumstances, the verified C might be available but CompCert might not. That’s why I recommend, where possible, adding verified assembly using free tools like Simpl/C or x86/Proved. Users can then pick what level of risk they want when choosing either the C or whatever assembly was included.

                        1. 6

                          Very surprising that the BSDs weren’t given a heads-up by the researchers. It feels like there should be a list at this point of people who can rely on this kind of heads-up.

                          1. 13

                            The more information and statements that come out, the more it looks like Intel gave the details to nobody beyond Apple, Microsoft and the Linux Foundation.

                            Admittedly, macOS, Windows, and Linux covers almost all of the user and server space. Still a bit of a dick move; this is what CERT is for.

                            1. 5

                              Plus, the various BSD projects have security officers and secure, confidential ways to communicate. It’s not significantly more effort.

                              1. 7

                                Right.

                                And it’s worse than that when looking at the bigger picture: it seems the exploits and their details were released publicly before most server farms were given any heads-up. You simply can’t reboot whole datacenters overnight, even if the patches are available and you completely skip over the vetting part. Unfortunately, Meltdown is significant enough that it might be necessary, which is just brutal; there have to be a lot of pissed ops out there, not just OS devs.

                                To add insult to injury, you can see Intel PR trying to spin Meltdown as some minor thing. They seem to be trying to conflate Meltdown (the most impactful Intel bug ever, well beyond f00f) with Spectre (a new category of vulnerability) so they can say that everybody else has the same problem. Even their docs say everything is working as designed, which is totally missing the point…

                            2. 7

                              Wasn’t there a post on here not long ago about Theo breaking embargos?

                              https://www.krackattacks.com/#openbsd

                              1. 12

                                Note that I wrote and included a suggested diff for OpenBSD already, and that at the time the tentative disclosure deadline was around the end of August. As a compromise, I allowed them to silently patch the vulnerability.

                                He agreed to the patch on an already extended embargo date. He may regret that but there was no embargo date actually broken.

                                @stsp explained that in detail here on lobste.rs.

                                1. 10

                                  So I assume Linux developers will no longer receive any advance notice since they were posting patches before the meltdown embargo was over?

                                  1. 3

                                    I expect there’s some kind of risk/benefit assessment. Linux has lots of users so I suspect it would take some pretty overt embargo breaking to harm their access to this kind of information.

                                    OpenBSD has (relatively) few users and a history of disrespect for embargoes. One might imagine that Intel et al thought that the risk to the majority of their users (not on OpenBSD) of OpenBSD leaking such a vulnerability wasn’t worth it.

                                    1. 5

                                      Even if, institutionally, Linux were not being included in embargos, I imagine they’d have been included here: this was discovered by Google Project Zero, and Google has a large investment in Linux.

                                2. 2

                                  Actually, it looks like FreeBSD was notified last year: https://www.freebsd.org/news/newsflash.html#event20180104:01

                                  1. 3

                                    By late last year you mean “late December 2017” - I’m going to guess this is much later than the other parties were notified.

                                    macOS 10.13.2 had some fixes related to Meltdown and was released on December 6th. My guess is vendors with tighter business relationships with Intel (Apple, MS) started getting info on it around October or November. Possibly earlier, considering the bug was initially found by Google back in the summer.

                                    1. 2

                                      Windows had a fix for it in November according to this: https://twitter.com/aionescu/status/930412525111296000

                                  2. 1

                                    A sincere but hopefully not too rude question: Are there any large-scale non-hobbyist uses of the BSDs that are impacted by these bugs? The immediate concern is for situations where an attacker can run untrusted code like in an end user’s web browser or in a shared hosting service that hosts custom applications. Are any of the BSDs widely deployed like that?

                                    Of course given application bugs these attacks could be used to escalate privileges, but that’s less of a sudden shock.

                                    1. 1

                                      DigitalOcean and AWS both offer FreeBSD images.

                                      1. 1

                                        There are (or were) some large-scale deployments of BSDs and BSD-derived code: Apple AirPort Extreme, Dell Force10, Junos, etc.

                                        People don’t always keep track of them, but sometimes a company shows up and then uses it for a very large number of devices.

                                        1. 1

                                          Presumably these don’t all have a cron job doing cvsup; make world; reboot against upstream *BSD. I think I understand how the Linux kernel updates end up on customer devices but I guess I don’t know how a patch in the FreeBSD or OpenBSD kernel would make it to customers with derived products. As a (sophisticated) customer I can update the Linux kernel on my OpenWRT based wireless router but I imagine Apple doesn’t distribute the Airport Extreme firmware under a BSD license.

                                    1. 2

                                      Are there any browsers that don’t make bs excuses?

                                      1. 2

                                        What’s the excuse you think browsers are making?

                                        1. 3

                                          What is the legitimate reason you seem to think exists in that post?

                                           MITM attacks (which TLS is designed to prevent) are not a legitimate reason to block TLS improvements.

                                      1. 1

                                          There was a macOS security update waiting for me when I woke up this morning.

                                        1. 1

                                            That’s probably a fairly old one; as far as I can tell, Apple has not issued a macOS security update since December 6th.

                                          1. 1

                                            I had no update yesterday.

                                            shrug

                                        1. 4

                                          During my stint in government, I kept a link to the OSS Sabotage Manual in my email footer, since it bore a striking resemblance to what a lot of folks thought was a best practice.

                                            Would be fascinating for someone to do an anthropological study of how the advice morphed from sabotage to “best practice”. I suspect it looks like, “In an effort to stamp out failure-case outliers, we developed a methodology which also prevented success-case outliers, and as we attempted to prevent more and more failure cases, we prevented more and more success cases, until all we had left was the middle, but we kept applying these same norms, which eventually became ‘accountability for accountability’s sake, with no consideration of overall performance’”, but someone should really do the actual research :-)

                                          1. 4

                                              Would be fascinating for someone to do an anthropological study of how the advice morphed from sabotage to “best practice”.

                                            Probably a case of “too much of a good thing”. Most of these are tolerable or even beneficial if done in moderation. It’s only when you go to the extremes and stay there that this becomes sabotage.

                                            1. 2

                                                Exactly. If any of these behaviors were bad under all circumstances, it would be part of our collective consciousness that they are destructive. What makes this kind of sabotage so deadly effective is that you hide your malicious intentions behind many meta layers. Like “yeah, we all want to make some progress here, but we also want to do things the proper way, so the question is where we draw the line”: while giving the appearance of cooperation, you’re introducing yet another discussion!

                                          1. 14

                                            Questions (and answers) like this really ought to start with a definition of what they mean by “Agile”.

                                            The top voted answer appears to be critiquing a very rigid Capital-A-Agile methodology, but none of it comes through to me as a valid critique of a more general lower-case-a-agile methodology: deploy regularly, tight feedback cycles with users, integrate feedback.

                                            1. 10

                                              I guess these discussions are always a bit futile, because “Agility” is by definition a positive property. It’s a tautology really.

                                                Most criticism of agile methods is focused on a specific implementation (Scrum at company X), and the usual response is “this is not true agile”.

                                              1. 7

                                                  “This is not true agile”: I’ve been guilty of this in the past. Agile is good; therefore, if what you’re describing to me isn’t good, then it’s not true agile.

                                                  But after years of Scrum at various shops, sometimes under the guidance of pricey “Scrum coach” consultants, I’m totally burnt out and disillusioned by it.

                                                  As you say, agile is by definition positive, but beyond this I think there are still a lot of good ideas and principles in the early agile movement, just not in the Scrum process itself (which actually predates the Agile Manifesto) and what it has come to represent.

                                                1. 6

                                                  I would define Agile as “follows the principles of the Agile Manifesto”. This implies a few things:

                                                  1. The Manifesto devalues things like comprehensive documentation. This can be criticized and discussed.

                                                  2. Scrum is only one possible instance of Agile. Not necessarily the best, maybe not even a good one. I would suspect that people discussed that to death already when Scrum was fresh.

                                                    3. You can do Scrum without Agile. Scrum is usually defined superficially. This means, first, that there is a lot of room for variation, including stuff which undermines the Agile intentions. Second, it helps the consulting business, because how could you get Scrum right except through the oral teachings of certified people?

                                                  1. 1

                                                    The Manifesto devalues things like comprehensive documentation. This can be criticized and discussed.

                                                      This aspect is a bit peculiar. Do they devalue software documentation (which is how I understood this principle)? Or maybe it can be thought of as a devaluation of a requirements library/document. I came to terms with this principle in the sense that it is meant as advice to avoid wasteful, up-front documentation, because clearly you cannot build a good product without documentation.

                                                    1. 1

                                                      From the manifesto:

                                                      That is, while there is value in the items on the right, we value the items on the left more.

                                                      It’s not “documentation doesn’t matter”, it’s “deliver something that works or your documentation is pointless”.

                                                    2. 1

                                                      The key bit of superficiality that reduces Scrum’s value is that people ignore the fact that Scrum does not mandate a process:

                                                        “It is the opposite of a big collection of interwoven mandatory components. Scrum is not a methodology.” (What is Scrum?)

                                                        “Scrum is not a process, technique, or definitive method. Rather, it is a framework within which you can employ various processes and techniques.” (Scrum Guide)

                                                      They take the initial process guide, defined in Scrum as a starting point to test, reflect, and improve upon, and treat it as a big collection of interwoven mandatory components. It makes middle management feel good as they get to hold meetings, see progress, and implement a buzzword, but avoids all of the valuable parts of Scrum.

                                                    3. 3

                                                      Bertrand Meyer has some criticisms (and compliments) of the core ideas, especially user stories vs requirements.

                                                      1. 1

                                                        Thank you for that link. I’d prefer text over video, but if it’s Meyer, I’ll try to make room for it.

                                                        1. 1

                                                          Yeah, I feel the same way. He apparently has a book on the same topic, but I haven’t read it.

                                                          1. 1

                                                            Okay, I haven’t watched it fully and skipped over a few parts, but I made sure to look at the user stories and requirements sections. I am a bit torn on his view, because I can relate to his feelings as a software user: many times his user story was forgotten, and he attributes this to not generalizing stories into requirements. However, I wonder if the lack of a requirements document is really the reason. Also, I think he must have forgotten how unusable a lot of requirements-backed software has been.

                                                            I share his sentiments on design and architecture work. However, good teams with good management have always made it possible to fit such work into the agile workflow. I attribute it to agile consultants that throughput and “velocity” have been overemphasized to sell agile, when it should be much more about building good products.

                                                            He lost me when he commented on test-driven development.

                                                          2. 1

                                                            His book is called “Agile! The good, the hype, and the ugly”, it’s brief, insightful, and well worth a read.

                                                      2. 5

                                                        I would argue that what you’re talking about there is more the consequences of adopting continuous integration and making deployments less painful, which one might call operational agility, but it has very little to do with the Agile methodology as such, at least from what I can see.

                                                        1. 6

                                                          Nope. Having tight feedback cycles with users is a core principle of Agile. Continuous integration on its own has nothing to do with user feedback, and doesn’t necessarily cause responsiveness to user feedback.

                                                          1. 1

                                                            The Agile Manifesto does not mention tight cycles, only “customer collaboration”.

                                                            1. 2

                                                              the Agile Principles (you have to click the link at the bottom of the manifesto) make multiple references.

                                                              1. 1

                                                                Can you explain? I don’t see the words “tight”, “feedback” or “cycles” here http://agilemanifesto.org/principles.html

                                                                1. 1

                                                                  Presumably: The main difference between collaboration with customers (vs contract negotiations) is that rather than getting a single document attempting to describe what the customer wants up front (feedback cycle = one contract) you continually work with them to narrow down what they actually want (shorter/tighter than that).

                                                                  1. 1

                                                                    The first principle, “satisfy the customer through early and continuous delivery of valuable software”, implies it. The third, “deliver working software frequently”, implies it. The fourth, “business people and developers must work together daily”, is an out-and-out statement of it.

                                                              2. 1

                                                                  In my experience CI/CD is more useful for bugs than features. If you are coming from waterfall, I understand where the connection between CI/CD and agile comes in.

                                                                1. 2

                                                                  Regardless of your experience and opinion of utility, those strategies are core to Agile and have obvious analogues in other industries that Agile draws inspiration from. They aren’t unique or novel products of Agile, but I think it’s fair to say that’s how they reached such widespread use today. It’s definitely incorrect to say they have little to do with Agile methodology.

                                                            2. 3

                                                          After making the error of using the word “agile” in the latter, generic sense for some time, I came to realize that pretty much nobody does. When you say “Agile”, business people automatically think “Scrum”, and it (still) works as a magical incantation. When you try to talk about the actual merits of agile approaches (plural), they tend to tune you out and think you’re trying to look smart without knowing anything.

                                                              1. -2

                                                            The top voted answer appears to be critiquing a very rigid Capital-C-Communism ideology, but none of it comes through to me as a valid critique of a more general lower-case-c-communism ideology: democratic, common ownership of the means of production, statelessness and classlessness

                                                              1. 37

                                                                “I am capable of writing a large codebase in a memory unsafe language without introducing enough security vulnerabilities to drive a truck through.”
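
                                                              (For anyone unfamiliar with the class of bug being alluded to, a minimal C sketch; the function is hypothetical:)

                                                                  #include <string.h>

                                                                  /* 'name' is attacker-controlled: anything longer than 15 bytes writes
                                                                   * past the end of buf. This spatial memory error is exactly the class
                                                                   * of vulnerability a memory-safe language rules out by construction. */
                                                                  void save_name(const char *name) {
                                                                      char buf[16];
                                                                      strcpy(buf, name);   /* no bounds check: stack buffer overflow */
                                                                  }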

                                                                1. [Comment removed by author]

                                                                  1. 2

                                                                    There’s a huge area between memory safety and Python.

                                                                    1. [Comment removed by author]

                                                                      1. 6

                                                                      Once again, memory-safe langs don’t sell being bug-free and perfectly secure. They just say what they are: memory safe. That solves some issues, but not all. Let’s say that when you have a large codebase, having a memory-safe lang helps you focus on things other than memory issues (like preventing other bugs).

                                                                        1. 5

                                                                          that large code bases in memory-safe languages somehow don’t also end up with a lot of security vulnerabilities

                                                                          Inverse error. That’s not the implication. It’s that memory unsafe tools can and do cause security issues.

                                                                      2. 1

                                                                        I’ve done it.

                                                                        We probably have different definitions of large. But, @alex_gaynor did say as an individual…

                                                                        And I’ve written code in Python that had a huge command-injection vulnerability. Memory safety isn’t a panacea.

                                                                        And? That’s a class of vulnerabilities, shared by both C and Python, that no one was discussing.
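
                                                                        To make “shared by both C and Python” concrete, here is the same bug sketched in C; compress_file() is a hypothetical helper, not anything from this thread:

                                                                            #include <stdio.h>
                                                                            #include <stdlib.h>

                                                                            /* If 'filename' comes from a user, a value like "x; rm -rf ~" becomes
                                                                             * part of the shell command line: command injection, with no memory
                                                                             * error in sight. Python's os.system() has exactly the same trap. */
                                                                            void compress_file(const char *filename) {
                                                                                char cmd[512];
                                                                                snprintf(cmd, sizeof(cmd), "gzip %s", filename);
                                                                                system(cmd);   /* runs through /bin/sh */
                                                                            }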

                                                                        1. [Comment removed by author]

                                                                          1. 6

                                                                            The point, as I understood it, was that writing in a memory-unsafe language makes it easy to make the mistakes you listed. Those mistakes “introduce enough security vulnerabilities to drive a truck through.”

                                                                            So do other things!

                                                                            But he wasn’t talking about any of those other things. He was talking about memory unsafe languages.

                                                                            Use a memory-safe language, remove a class of defects, remove a class of security vulnerabilities.

                                                                            Other vulnerabilities can and will still exist!

                                                                            Edit:

                                                                            “I am capable of making a sandwich with peanut butter without triggering a nut allergy.”

                                                                            “I’ve made sandwiches with almond butter that trigger nut allergies. Making sandwiches without peanut butter isn’t a panacea.”

                                                                        2. 1

                                                                          How did you know you did it?

                                                                          1. 1

                                                                            Change it from “writing” to “writing and maintaining over a period of several years with a team that has members come and go on a regular basis” and it’s another story.

                                                                        1. 2

                                                                              Could it be that Linus is not the evil abuser people paint him as? gasp

                                                                          1. 22

                                                                                I find this comment as useless as the other reply I commented on.

                                                                                It is very common for people who regularly resort to abuse not to do it all the time. That doesn’t make it less abusive.

                                                                                Discuss the interpretation of Linus’s outbursts in all directions, but all these discussions have to happen in a wider context, not based on single emails. Linus is criticised for regular outbursts; no one is saying that he’s like that all the time.

                                                                            1. 2

                                                                                  The emails linked here (both the post subject and the one linked by @pgl) paint a more nuanced picture than the one painted in this thread. I find them interesting as I try to research the entire situation around the Linux kernel mailing list and Linus’s stewardship of the kernel development process.

                                                                              Obviously if a public figure is abusive and contributes to a working environment that’s toxic, it’s a serious issue. Maybe doubly so when it’s a flagship open source product like the Linux kernel. I apologize if I sound as if I’m flippant and dismissive.

                                                                                  However, browsing through the comments in the previous thread I found a lot of speculation that this has harmed the kernel development process, made people less interested in contributing to open source, or caused open source to be generally looked down upon. I did not find any links to actual first- or second-person accounts.

                                                                                  I realize that demanding such accounts may in itself be insensitive; however, I’d prefer to judge Linus’s behavior based on them rather than on unsourced speculation.

                                                                                  Edit: I also realize that the speculation may in fact not be unsourced, but common knowledge (just unknown to me). If so, I should probably have just politely asked for links instead of attempting satire…

                                                                              1. 21

                                                                                    I was actually more annoyed by this comment than by yours. I was just seriously not understanding what you wanted to imply. Thank you for this elaborate answer, though; I’ll ramble in return.

                                                                                    I totally appreciate that Linus has a lot of experience with a project at that scale. And this email here perfectly illustrates it. It’s stern. Stern is great. I think the proper feedback to give here would be “that’s a great email” and not immediately bring up the standard debate. What annoys me is that people criticising specific Linus behaviour are painted as if they can’t appreciate a good word from Linus or are just enraged by anything. This is also done by bringing up this debate in unrelated places. No: often, criticism comes from people that invest a lot of time in FOSS project management, too. I’ve lost more than one contributor on projects I was involved in because some project lead jumped on them in the fashion Linus does, some specifically citing the Linus way. That’s why I take a lot of care about communication nowadays, especially over mediums that detach you from the speaker, such as email.

                                                                                    What gets me: Linus is also not the only person in such a position. It’s often postulated that his position is singular. It isn’t. There are plenty of very nice people who pulled off similarly huge projects and pull a lot of weight: Matz, Knuth, Lamport, Larry Wall. And doing a thousand good things does not earn you the privilege of being an ass from time to time. That’s a classic abuse pattern that gets enabled by not pushing back when people cross boundaries.

                                                                                    The proper feedback to Linus’s outbursts would therefore be “Linus, most of the work is great, but this is a boundary crossed”. That doesn’t mean immediately breaking ties or such, but as it stands, Linus has shown that this is his habit and that he wants it that way.

                                                                                The fact that Linus felt the need to apologise in this case speaks for the people criticising his outburst at first. Maybe, things change? I’d be happy about it.

                                                                                    He inflicts a lot of harm with random outbursts on people. Google’s management research in recent years found that safety is important for creativity and a good work environment. Safety specifically doesn’t mean freedom from criticism, or even anger. But there are lines that should not be crossed, one of them being that you don’t call for people to be “retroactively aborted” (essentially wishing their death). This is an unambiguous, spitting insult in all cultures on this planet and no way to treat contributors. I’m amazed at the number of people still defending such things. There are perfectly fine ways to express anger without going on a rampage.

                                                                                    Retreating to “Finnish management style” also doesn’t cut it, unless you work on a project exclusively staffed with Finnish people.

                                                                                    I wonder where the idea that this discussion is just theory comes from. People have publicly left kernel development because of the nature of debate in the project, which comes from the top, the most prominent example being Sage Sharp: http://sarah.thesharps.us/2015/10/05/closing-a-door/. There are other accounts around, from people like Matthew Garrett and others. There are plenty of people who specifically say they avoid Linus if possible; ask a couple of people at conferences. Expecting people to rehash all of those whenever the subject comes up again is also problematic. Also, word-of-mouth is a thing, because criticising Linus in a public space might end with your inbox getting emails like this. You know how no one talks about the bad managers in a company in public, but once you go to a bar with some colleagues, they start talking? Same effect.

                                                                                    This thing is real. I am convinced that Linux would be better if it had better communication from the top. This whole situation would be much better if Linus had written just this email and not the other one. Then again, I’m not part of the Linux kernel; I have no say there. If things are to change, it’s the crew around him that must make that happen.

                                                                                1. 8

                                                                                  I want to put this comment in a golden frame and show it to everyone who thinks Linus should totally be hurling insults.

                                                                                  1. 6

                                                                                    Thanks a lot of this extensive reply. It’s given me a lot to think about and read up on.

                                                                                    1. 5

                                                                                      Thanks for prompting it :).

                                                                                  2. 10

                                                                                    I have several friends who quit kernel development because of the culture on LKML in general, and Linus specifically; they are some of the most talented folks I know. One of their friends quit doing upstream kernel security work and now sells vulnerabilities to semi-shady semi-government organizations.

                                                                              1. 10

                                                                                I don’t really see a lot of smaller open source projects having their own LTS releases.

                                                                                What I see is them suffering from trying to support their ecosystem’s LTS releases. When CentOS ships with a Python that the Python core team has dropped, I’m stuck supporting it in my packages, because there are users there (and they blame me, not CentOS, for their troubles if I drop support).

                                                                                1. 2

                                                                                  I don’t understand CentOS: is enterprise really so inflexible that a shorter release cycle won’t work?

                                                                                  1. 12

                                                                                    Yes. Change is bad. Especially when you have scary SLAs (that is, downtime on your end costs your company thousands of dollars per minute) you tend to be very careful about what and when you upgrade, especially if things are working (if it ain’t broke, don’t fix it).

                                                                                    1. 1

                                                                                      I wonder why we don’t make better software and devops practices for handling change. Maybe the pain of changing once in a while is less than that of rolling with tighter LTS windows?

                                                                                      1. 7

                                                                                        Because starting to use a new methodology to handle change is itself a change. And so a new technology can only climb the scale relatively slowly (“so many projects half our size have used this technology that we might as well run a small trial”). This means that some important kinds of feedback are received on a timescale of years, not weeks…

                                                                                        1. 4

                                                                                          Exactly. It’s not that enterprises don’t want to change, it’s that change in and of itself is hard. It is also expensive in time, which means money. Which basically means: keep things as static as possible to minimize how often things break. If you have N changes amongst M things, debugging what truly broke is non-trivial. Reducing the scope of a regression is probably the number one motivator for never upgrading things.

                                                                                          An example, at work we modify the linux kernel for $REASONS, needless to say, testing this is just plain hard. Random fixes to one part of the kernel can drastically alter how often say the OOM killer triggers. Sometimes you don’t see issues until several weeks of beating the crap out of things. When the feedback cycle is literally a month, I am not sure one could argue that we want more change to be possible.

                                                                                          I don’t see much of a way to improve the situation beyond just sucking it up and accepting that certain changes cannot be rushed without triggering unknown unknowns. Even with multi week testing you still might miss regressions.

                                                                                          This is a different case entirely than making an update to a web application and restarting.

                                                                                          1. 2

                                                                                            First of all, thanks for a nice presentation of this kind of issue.

                                                                                            This is a different case entirely than making an update to a web application and restarting.

                                                                                            I am not sure what you mean here, because a lot of web applications have a lot of state, and a lot of inner structure, and a lot of rare events affecting the behaviour. I don’t want to deny that your case is more complicated than many, I just want to say that your post doesn’t convey that it is qualitatively different as opposed to quantitatively.

                                                                                            I am not sure one could argue that we want more change to be possible.

                                                                                            What you might have wanted is to compare throwing more hardware at the problem (so that you can run more code in a less brittle part of the system) with continuing with the current situation. And then there would be questions of managing deployments and their reproducibility, possibility or impossibility of redundancy, fault isolation, etc. I guess in your specific case the current situation is optimal by some parameters.

                                                                                            Then of course, the author of the original post might effectively have an interest (opposed to yours) in making your current situation more expensive to maintain; this could be linked to a change that might make something they want less expensive to get. Or maybe not.

                                                                                            How much do the things discussed in the original post apply to your situation by the way? Do you try to cherry-pick fixes or to stabilize an environment with minimal necessary minor-version upgrades?

                                                                                            1. 3

                                                                                              I am not sure what you mean here, because a lot of web applications have a lot of state, and a lot of inner structure, and a lot of rare events affecting the behaviour. I don’t want to deny that your case is more complicated than many, I just want to say that your post doesn’t convey that it is qualitatively different as opposed to quantitatively.

                                                                                              I’m not entirely sure I can convincingly argue that kernel hacking is qualitatively different from, say, web application development, but here goes. Mind that some of this is going to be specific to the use cases I encounter and thus can be considered an edge case; however, edge cases are always great for challenging assumptions you may not realize you had.

                                                                                              Let’s take the case of doing 175 deployments in one day that another commenter linked. For a web application, there are relatively easy ways of doing updates with minimal impact on end users. This is mostly possible because the overall stack is so far removed from the hardware that it’s relatively trivial to do. Mind you, I’m not trying to discount the difficulty, but overall it amounts to some sort of HA or load balancing, say via DNS, haproxy, etc., to handle flipping a switch from the old version to the new.

                                                                                              One might also have an in-application way to do A/B version flips in place as well; whatever the case, the ability to update is in lots of ways a feature of the application space.

                                                                                              A con of this very feature is that restarting the application and deploying a new version inherently destroys the state the application is in. Aka: let’s say you have a memory bug; restarting fixes it magically, but you upgrade so often you never notice it. This is a case where I am almost 99% sure that any user space developer would catch bugs if they were to run their application for longer than a month. Now I doubt that will happen, but it’s something to contemplate. The ability to do rapid updates is a two-edged sword.

                                                                                              Now let’s compare to the kernel. Let’s take a trivial idea like adding a 64-bit pointer to the skb buffer. Easy, right? Shouldn’t impact a thing; it’s just 64 bits, and what is 64 bits of memory amongst friends? Well, a lot, it turns out. Let’s say you’re running network traffic at 10Gb/s while you have a user space application using up as much memory as it can. Probably overcommitting memory as well, just to be annoying. Debugging why this application triggers the OOM killer after a simple change like I described is definitely non-trivial. The other problem is you need to trigger the exact circumstances to hit the bug. And worst of all, it can often be a confluence of bugs that trigger it. Aka, some network driver will leak a byte every so often once some queue is over a certain size, meaning you have to run stuff a long time to get to that state again.

                                                                                              I’m using a single example, but I could give others where the filesystem can similarly play into the same stats.

                                                                                              Note, since I’m talking about Linux, let’s review the things that a kernel update cannot reasonably do, namely update in place. This severely limits how a user space application can be run and for how long. Let’s say this user space application can’t be shut down without some effect on the user’s end goal. Unreasonable? Sure, but note that a lot of runtime processes are not designed for rapid updates and things like checkpointing so they can be re-run from a point-in-time snapshot. And despite things like ksplice existing to update the kernel in place, it has… limitations. Limitations relating to struct layout tend to cause things to go boom.

                                                                                              In my aforementioned case, struct layout and its impact on memory can also severely change how well user space code runs. Say you add another byte to a struct that was at 32 bytes of memory already. Now you’re requiring 40 bytes of memory per struct. This means it’s likely you’re wasting 24 bytes of memory and hurting the caching of data in the processor in ways you might not know. Let’s say you decide to make it a pointer; now you’re hitting memory differently and also causing changes to the overall behavior of how everything runs.
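
                                                                                              The 32-to-40 jump is easy to see for yourself; a sketch, assuming a typical LP64 machine where pointers are 8 bytes and give the struct 8-byte alignment:

                                                                                                  #include <stdio.h>

                                                                                                  struct before {
                                                                                                      void *a, *b, *c, *d;   /* 4 * 8 = 32 bytes, no padding */
                                                                                                  };

                                                                                                  struct after {
                                                                                                      void *a, *b, *c, *d;
                                                                                                      char flag;             /* 1 more byte of payload... */
                                                                                                  };                         /* ...but alignment pads sizeof up to 40 */

                                                                                                  int main(void) {
                                                                                                      printf("%zu %zu\n", sizeof(struct before), sizeof(struct after)); /* 32 40 */
                                                                                                      return 0;
                                                                                                  }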

                                                                                              I’m only scratching the surface here, but I’m not sure how one can conclude that kernel development isn’t qualitatively different, state-wise, from a web application. I’m not denigrating web app developers here either, but I don’t know of many who worry about adding a single byte to a struct because the performance impact causes more cache invalidation, making things ever so slightly slower for what a user space process sees. Both involve managing state, but making changes in the kernel can be frustrating when a simple 3-line change can cause odd space leaks in how user applications run. If you’re wondering why Linus is such a stickler about breaking user space, it’s because it’s really easy to do.

                                                                                              I also wish I could magically trigger every heisenbug related to long-running processes abusing the scheduler, vm subsystem, filesystem, and network, but as for any programmer, bugs at the boundary are hard to replicate. It’s also hard to debug when all you’ve got is a memory image of the state of the kernel when things broke. What happened leading up to that is normally the important part, but it’s entirely gone.

                                                                                              What you might have wanted is to compare throwing more hardware at the problem (so that you can run more code in a less brittle part of the system) with continuing with the current situation. And then there would be questions of managing deployments and their reproducibility, possibility or impossibility of redundancy, fault isolation, etc. I guess in your specific case the current situation is optimal by some parameters.

                                                                                              Not sure it’s optimal, but it’s a bit of a casus belli: if you have to run a suite of user land programs that have been known to trigger bad behavior, and run them for a month straight to be overly certain things aren’t broken, throwing more hardware at it doesn’t make the baby come any faster. Just as throwing nine women at the baby-making problem won’t produce a baby any sooner, sometimes the time it takes to know things work just can’t be reduced. You can test more in parallel, sure, but even then you run into hardware cost issues.

                                                                                              How much do the things discussed in the original post apply to your situation, by the way? Do you try to cherry-pick fixes, or to stabilize an environment with the minimal necessary minor-version upgrades?

                                                                                              Pretty much that: cherry-pick changes as needed and stick to a single kernel revision. Testing is mostly done on major version changes, i.e. upgrading from version N to version M, reapplying the changes, and letting things loose to see what the tree-shaking leaves on the ground. Then debugging whatever might have introduced each bug and fixing it, along with more testing.

                                                                                              Generally, though, the month-long runs tend to turn up freak-of-nature bugs. But god are they horrible to debug.

                                                                                              Hopefully that helps explain my vantage point a bit. If it’s unconvincing, feel free to ask for more clarification. It’s hard to get too specific for legal reasons, but I’ll do as well as I can. Let’s just say I envy every user space application for the debugging tools it has. I wish to god the kernel had something like rr, so I could step back in time and watch, say, a space leak unfold.

                                                                                              1. 1

                                                                                                Thanks a lot.

                                                                                                Sorry for the poor word choice — I meant that the end-goal problems you solve lie on a continuum with no bright cutoffs, one that passes through the tasks currently solved by the most complicated web systems, by other user-space systems, by embedded development (let’s say small enough to have no use for a FS), by other kinds of kernel development, etc. There are no clear borders, and there are large overlaps and crazy outliers. I guess if you had said «orders of magnitude», I would just agree.

                                                                                                On the other hand, poor word choice is the most efficient way to make people tell interesting things…

                                                                                                I think a large subset of the examples you gave actually confirms the point I failed to express.

                                                                                                Deploying web applications doesn’t have to reset the process; it is just that many large systems now throw enough hardware at it to reset the entire OS instance. Reloading parts of the code inside the web application works fine, unless a library leaks an fd on some rare operation and the server process fails a week later. Restarting helps, that’s true. Redeploying a new set of instances takes more resources and needs to be separately maintained, but it allows you to shrug off some other problems (many of which you have enumerated).
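
                                                                                                A contrived sketch of that failure mode (my own toy loop, compressing a week of rare operations into a few milliseconds):

                                                                                                  #include <stdio.h>
                                                                                                  #include <fcntl.h>

                                                                                                  int main(void) {
                                                                                                      /* Each "rare operation" opens a descriptor and forgets to close
                                                                                                       * it; eventually open() fails (typically EMFILE at the
                                                                                                       * RLIMIT_NOFILE ceiling) and the server keels over. */
                                                                                                      for (long i = 0; ; i++) {
                                                                                                          int fd = open("/dev/null", O_RDONLY);
                                                                                                          if (fd < 0) {
                                                                                                              printf("leaked %ld fds before failure\n", i);
                                                                                                              perror("open");
                                                                                                              return 1;
                                                                                                          }
                                                                                                          /* the missing close(fd) is the leak */
                                                                                                      }
                                                                                                  }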

                                                                                                And persistent state management still requires effort for web apps, just less than it did before more resources were thrown at it.

                                                                                                I do want to hope that at some point kernel debugging (yes, device drivers excluded, that’s true) by running a Bochs-style CPU emulator under rr becomes feasible. After all, this is a question of throwing resources at the problem…

                                                                                                1. 1

                                                                                                  Deploying web applications doesn’t have to reset the process; it is just that many large systems now throw enough hardware at it to reset the entire OS instance. Reloading parts of the code inside the web application works fine, unless a library leaks an fd on some rare operation and the server process fails a week later. Restarting helps, that’s true. Redeploying a new set of instances takes more resources and needs to be separately maintained, but it allows you to shrug off some other problems (many of which you have enumerated).

                                                                                                  Correct, but this all depends on the application. A binary, for example, would necessarily have to be restarted somehow, even if that means re-exec()‘ing the process to get at the new code. Unless you’re going to dynamically load in symbols on something like a HUP, it seems simpler to just do a load-balanced setup: bleed off connections, restart, and let connections trickle back in. But I don’t know, I’m not really a web guy. :)
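
                                                                                                  A minimal sketch of that re-exec() pattern (a toy of mine; real servers also arrange for listening sockets to survive the exec, and the serving loop is elided here):

                                                                                                    #include <stdio.h>
                                                                                                    #include <unistd.h>

                                                                                                    int main(int argc, char *argv[]) {
                                                                                                        (void)argc;
                                                                                                        /* ... serve traffic here until an upgrade is requested ... */

                                                                                                        /* Replace this process image with whatever binary now lives at
                                                                                                         * the same path; fds without FD_CLOEXEC survive, which is how
                                                                                                         * listening sockets get handed to the new code. As written this
                                                                                                         * re-execs immediately, since the serving loop is elided. */
                                                                                                        execv(argv[0], argv);  /* assumes argv[0] resolves to the updated binary */
                                                                                                        perror("execv");       /* reached only if the exec failed */
                                                                                                        return 1;
                                                                                                    }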

                                                                                                  I do want to hope that at some point kernel debugging (yes, device drivers excluded, that’s true) by running a Bochs-style CPU emulator under rr becomes feasible. After all, this is a question of throwing resources at the problem…

                                                                                                  I highly doubt that will ever happen, but I wish it would. But qemu, bochs, etc. all have issues with perfectly emulating CPUs, sadly.

                                                                                        2. 2

                                                                                          It’s not like we don’t have the software. GitHub deployed to production 175 times in one day back in 2012. Tech product companies often do continuous deployment, with gradual rollout of both app versions across servers and features across user accounts, all that cool stuff.

                                                                                          The “enterprise” world is just not designed for change, and no one seems to be changing that yet.

                                                                                        3. 1

                                                                                          if it ain’t broke, don’t fix it

                                                                                          And if it isn’t seriously affecting profitability yet, it ain’t broke.

                                                                                          Even if there are known unpatched vulnerabilities that expose people to whom you have a duty to increased risk.

                                                                                      2. 1

                                                                                        The article’s recommendation seems to be to create a separate branch for LTS backports (so that new development can initially happen more easily) and to (maybe gradually) hand over most of the control of the backports, unless these users are already a significant share of the project’s contributors (regardless of the form of contribution).

                                                                                        Whether this recommendation is aligned with your motivation for the project is another question.

                                                                                      1. 11

                                                                                        attrs has been successful enough that PSF CPython core is investigating adding a stripped-down version to the 3.7 standard library.

                                                                                        1. 8

                                                                                          It’s not the PSF, it’s the CPython core developers. The PSF does not direct the development of the Python language.

                                                                                          1. 13

                                                                                            Well, Ted, that’d be because that one didn’t sound alarmist, whereas this one does. Clearly you know nothing of attracting the discerning reader.

                                                                                            1. 5

                                                                                              So noted for next time.

                                                                                              Substantively:

                                                                                              a) it’s disappointing to see that coverage-guided fuzzing wasn’t very useful, given how productive a technique it’s been on other targets (honestly, watching a coverage-guided fuzzer home in on bugs is the closest thing I’ve seen to magic)

                                                                                              b) I’d be curious if the results change with another 10x runs

                                                                                              1. 7

                                                                                                So noted for next time.

                                                                                                I think this was obvious, but your original title was perfect; my riposte was barely disguised annoyance at the clickbait titling of this version. Please don’t change.

                                                                                                a) it’s disappointing to see that coverage-guided fuzzing wasn’t very useful, given how productive a technique it’s been on other targets (honestly, watching a coverage-guided fuzzer home in on bugs is the closest thing I’ve seen to magic)

                                                                                                I was also surprised about that, and was mulling it over with my morning coffee. Here’s my mostly uninformed best guess:

                                                                                                Right now, Blink, WebKit, Gecko, and (I assume) Trident all have incredibly extensive unit tests. It seems plausible to me that the guided fuzzing was amounting to little more than extra unit tests, falling into the old trap of “if you write the code and the tests, then you’ll code to the tests”. In that scenario, it’d be precisely the unguided fuzzing where I’d expect issues to be found, since it’s there that pre- and post-conditions are likely to be broken in unexpected ways.

                                                                                                That said, I have extremely limited experience with guided fuzzing, and I have never actually written any code myself for any of the rendering engines, so I may be misinformed on one or both points.

                                                                                                1. 4

                                                                                                  I was being facetious :-)

                                                                                                  My guess as to the problem is that browser engines are kind of like interpreters: there are a lot of “shared branches”, which makes the coverage metrics misleading; you hit very high line coverage very quickly, but not “conceptual” coverage.
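
                                                                                                  For the curious, the harness side of coverage-guided fuzzing is tiny. A libFuzzer-style sketch, with a toy parser of my own standing in for a rendering engine (build with clang -fsanitize=fuzzer):

                                                                                                    #include <stddef.h>
                                                                                                    #include <stdint.h>

                                                                                                    /* Toy stand-in for an engine's parser. Real engines funnel wildly
                                                                                                     * different inputs through the same shared branches, so line
                                                                                                     * coverage saturates long before "conceptual" coverage does. */
                                                                                                    static void toy_parse(const uint8_t *data, size_t size) {
                                                                                                        if (size >= 4 && data[0] == '<' && data[1] == '!' &&
                                                                                                            data[2] == '-' && data[3] == '-') {
                                                                                                            /* a rarely taken path the fuzzer must discover byte by byte */
                                                                                                        }
                                                                                                    }

                                                                                                    /* libFuzzer calls this repeatedly with mutated inputs, keeping any
                                                                                                     * input that lights up new coverage. */
                                                                                                    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
                                                                                                        toy_parse(data, size);
                                                                                                        return 0;
                                                                                                    }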

                                                                                                  1. 1

                                                                                                    That is a really relevant observation. Thanks a lot for sharing.

                                                                                            2. 3

                                                                                              Apologies, I hadn’t seen that posted. I was wondering whether to link directly to the project, but I thought the synopsis might be more interesting to scan.

                                                                                              1. 2

                                                                                                I personally prefer original links, but as much as I’d like to think that Lobsters’s readership is more discerning than most, clickbaity titles (unfortunate) and synopses (understandable) tend to get more attention.

                                                                                            1. 5

                                                                                              I don’t think it’s that big of a deal. I agree with a comment on the post.

                                                                                              If people use random TLDs for testing then that’s just bad practice and should be considered broken anyway.

                                                                                              1. 2

                                                                                                At least the developer tools (like pow and puma-dev) that squat on *.dev will now be compelled to support “turnkey https” out of the box, or risk losing many of their users.

                                                                                                1. 4

                                                                                                  Or switch to some TLD that’s reserved, like .test

                                                                                                  1. 1

                                                                                                    Well, since .dev is a real domain, what I actually suspect will happen is they’ll just switch to something else. Which, to be honest, I’d prefer: I’m all for HTTPS everywhere, but on localhost, when doing dev, it’s not worth it 99.9% of the time (and it robs me of tools like nc and tcpdump to casually help with certain issues).

                                                                                                1. 3

                                                                                                  I appear to be hitting an ssl exception on this URL. Something about the certificate issuer being unknown.

                                                                                                  1. 6

                                                                                                    @tedu hasn’t gotten to the book about CA infrastructure yet

                                                                                                    1. 2

                                                                                                      Lol. Oh, he has. @tedu went further, launching a small-scale experiment on the psychological effects on highly-technical users of encountering SSL problems on the homepage of someone they expect to understand security. Aside from personal amusement, he’s probably focused on categorizing the responses, from how many ignore the error, to quick suggestions, to in-depth arguments. He’ll follow up with a sub-study on the quality of those arguments, mining them for things that will appeal to the masses. He’ll then extrapolate the patterns he finds to discussions in tech forums in general, and submit the results to Security and Online Behavior 2018.

                                                                                                      Every tedu story on Lobsters with a complaint about this is the fun part of the study for him, a break from all the tedium of cataloging and analyzing the responses. On that note, how about a Joker meme: “If a random site run by careless admins generates CA errors, the IT pros think that’s all part of the plan. Let one security-conscious admin have his own CA, and everybody loses their minds!”

                                                                                                      1. 2

                                                                                                        Not far from the truth.

                                                                                                        1. 2

                                                                                                          He’ll pay the $$$ and jump through hoops for DNS; but, the CA system— the line is drawn here!

                                                                                                          1. 2

                                                                                                            Well, domain names are scarce in a way that RSA keys aren’t, and have unevenly distributed value. My domain name was not randomly generated. :)

                                                                                                            1. 1
                                                                                                              tedunangst.com name server ns-434.awsdns-54.com.
                                                                                                              tedunangst.com name server ns-607.awsdns-11.net.
                                                                                                              tedunangst.com name server ns-1775.awsdns-29.co.uk.
                                                                                                              tedunangst.com name server ns-1312.awsdns-36.org.
                                                                                                              

                                                                                                              Did you ask for people to add your nameservers to their resolver roots?

                                                                                                              Domain names and RSA keys are equally scarce. It’s all protection money, for root servers and for root CAs.

                                                                                                          2. [Comment removed by author]

                                                                                                            1. 6

                                                                                                              This comment is totally unsupported by data; the Chrome team in particular has done a ton of research that has improved error adherence (https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43265.pdf, for one), but there are others as well.

                                                                                                              The past few years have featured the greatest improvement in both the quality and quantity of HTTPS on the web since TLS was introduced, and it’s been supported by careful research on both the crypto side and the UX side.

                                                                                                              1. 3

                                                                                                                Huh? The situation used to be much worse: browsers just displayed an OK/Cancel dialog, and most users just clicked OK. Today it’s harder for users to click OK, and this single UI change made many more users secure against MITM attacks. I don’t have links handy, but those Chrome and Firefox “assholes” did a lot of research on this, and made browsing more secure for the majority of non-technical people.

                                                                                                                1. 2

                                                                                                                  At the same time, I think they’ve made it harder for technical users to make informed decisions.

                                                                                                                  1. 1

                                                                                                                    True.

                                                                                                                    1. 1

                                                                                                                      How is that not a win? ;-)

                                                                                                          1. 1

                                                                                                            seems like a trivial amount – why doesn’t the PSF pay for half if not all of that??

                                                                                                            1. 5

                                                                                                              Because no one has written a grant application to get the money. I’m also not sure that the PSF hands out grants that big for code development efforts.

                                                                                                              1. 2

                                                                                                                The biggest grant I can find from the PSF is $10k: https://www.python.org/psf/records/board/resolutions/

                                                                                                                1. 2

                                                                                                                  I don’t imagine them really wanting to give people a reason to move the de facto standard Python away from one they control.

                                                                                                                  1. 5

                                                                                                                    The PSF doesn’t control CPython, and doesn’t care which Python you use, as long as you use Python :-)

                                                                                                                    (Source: Former PSF board member)

                                                                                                                    1. 1

                                                                                                                      I stand corrected, thanks!

                                                                                                                1. 7

                                                                                                                  And we wonder why women leave the computing industry at a faster rate than men….

                                                                                                                  1. 6

                                                                                                                    Why? Because a male programmer chose to demonstrate this concept by showing paintings of boobs instead of penises? He could, of course, have demonstrated this with something else, but this clearly gets across how bad it can get without being completely obscene, or actually committing data that would confirm the problem to begin with.

                                                                                                                    1. 6

                                                                                                                      I’ve noticed that a lot of men have a problem with seeing other men’s penises. There’s a certain threat factor involved with an exposed penis. Consider how many men act grossed out at the thought of other men being nude for an extended amount of time in the locker rooms. Why are they so nude? They must be pedophiles getting off from exposing themselves. Even worse if they’re older men, their nudity is doubly threatening.

                                                                                                                      Anyway, my point is, yes, choosing breasts was a very gendered, male choice.

                                                                                                                      1. 7

                                                                                                                        The author explains it quite plainly:

                                                                                                                        Immutability is a double-edged sword. Transaction data stays forever, which is good. But a wicked mind could leverage immutability to store harmful images or texts about a third party FOREVER, with the goal of inflicting social damage. Once stored, it is irreversible. And it interferes with the Right to be forgotten. Think about a spiteful vengeance in the context of a lovers’ spat or a relationship break-up. That’s why I’ve used artistic boobs, as a fun analogy

                                                                                                                        1. 4

                                                                                                                          Does everything have to be political? Can you not just see what he’s doing and look at the intent instead of trying to insert some unrelated narrative?

                                                                                                                          1. 5

                                                                                                                            That ship sailed a long time ago, sibling.

                                                                                                                            1. 6

                                                                                                                              Does everything have to be political?

                                                                                                                              He made it political in the first place by choosing breasts. If you don’t want people to discuss human anatomy, pick something else as an example, something that doesn’t bring the same kind of attention. We are frail, social creatures and that means politics.

                                                                                                                              1. 5

                                                                                                                                I don’t see how breasts are remotely political.

                                                                                                                                1. 2

                                                                                                                                  Obviously, doctor, you’ve never had to deal with hearing the public’s opinions about your breasts.

                                                                                                                                2. 4

                                                                                                                                  pick something else as an example,

                                                                                                                                  The danger of picking something else is that the entire point would be lost on the casual reader. “Oh noes!!!! I can put flowers in the block chain and they are forever!!!! AAAAAHHHH WHAT WILL I DO???” v. my immediate reaction of “I sure hope my daughters, in 8 years, don’t start snapchatting breast pics to their so-called friends, who’d make them impossible to delete, forever!”

                                                                                                                                  1. 1

                                                                                                                                    If you don’t want people to discuss human anatomy, pick something else as an example, something that doesn’t bring the same kind of attention.

                                                                                                                                    something that doesn’t bring the same kind of attention.

                                                                                                                                    Maybe that’s the meta reason? To get everyone wound up so they subconsciously realize what this post is really about?

                                                                                                                            2. 7

                                                                                                                              I popped into the comments to say the same thing. I suppose I can see why he thought using those pictures was apt, but I think the point could’ve been made just as strongly using any image.

                                                                                                                              1. 3

                                                                                                                                Your remark, true or not, is not germane to this topic and will likely only lead to flaming.

                                                                                                                                If you can’t say anything about the technology, don’t say anything at all.

                                                                                                                                1. 9

                                                                                                                                  If you can’t say anything about the technology, don’t say anything at all.

                                                                                                                                  It seems to me that you’re encouraging the self-censoring of one viewpoint under the guise of maintaining neutrality. When the dispute is between “this is in poor taste” and “there is nothing wrong with this”, silence on the subject is an implicit endorsement of the latter.

                                                                                                                                  1. 5

                                                                                                                                    My point is that that dispute is uninteresting and off-topic compared to the technology.

                                                                                                                                    1. 2

                                                                                                                                      When the dispute is between “this is in poor taste” and “there is nothing wrong with this”, silence on the subject is an implicit endorsement of the latter.

                                                                                                                                      You are perfectly free to find something in bad taste. And you’re free to express that opinion. But trying to constitute an argument out of it, for the purpose of holding people to a standard you haven’t met yourself, is illegitimate. Extremely fashionable, but still not valid.

                                                                                                                                1. 2

                                                                                                                                  I’m not sure if this is the punchline to the article, or merely an accident, but in both Firefox and Chrome this produces an “unknown issuer” error.

                                                                                                                                  The server appears to be serving a single leaf certificate, which appears to be issued by Ted’s own CA.

                                                                                                                                  On principle, I don’t click through cert errors, so I guess I’ll never know if it’s the punchline or an accident.

                                                                                                                                  1. 2

                                                                                                                                    This is exactly the point of the article, and of the broken CA system.

                                                                                                                                    in both Firefox and Chrome this produces an “unknown issuer” error

                                                                                                                                    Unknown to who? If it didn’t say the issuer was unknown, who would the issuer be? Would you know who they are?

                                                                                                                                    appears to be issued by Ted’s own CA

                                                                                                                                    So you do know who the issuer is. Just your web browser doesn’t. If you know who Ted is and trust him, wouldn’t you trust his certificate? And if you don’t know who he is or don’t trust him, would his having gotten a certificate from a CA included in Firefox suddenly make it OK to trust this Ted guy?

                                                                                                                                    On principle, I don’t click through cert errors

                                                                                                                                    They have you trained well. ;-)

                                                                                                                                    Web browsers have helped complicate the problems of the CA trust model under the guise of usability (and people just don’t want to have to care) by taking control of trust decisions away from users, even those who know better and want to handle it themselves.

                                                                                                                                    I appreciate Ted, who has enough traffic to be noticed, going against the established model and demonstrating another method.

                                                                                                                                    1. 4

                                                                                                                                      So you do know who the issuer is. Just your web browser doesn’t

                                                                                                                                      Nope. It was issued by someone claiming to be “Ted”. But anyone can do that.

                                                                                                                                      And if you don’t know who he is or don’t trust him, would his having gotten a certificate from a CA included in Firefox suddenly make it OK to trust this Ted guy?

                                                                                                                                      Yes, because I trust the Firefox organization and processes, far more than any lone individual.

                                                                                                                                      1. 3

                                                                                                                                        What’s missing here is that @tedu has not shared the certificate fingerprints out of band.

                                                                                                                                        trust the Firefox organization and processes, far more than any lone individual.

                                                                                                                                        Sounds like a good idea, except for WoSign, StartCom, government certs, etc. I had to keep removing those (through a many-step process), and they kept coming back with updates, long before Firefox removed them itself. Given the state of the CA system now, I’m all for Firefox and others helping to curate the list of CAs, but I don’t like the implicit trust-without-thought that the UX creates, with little to no attention paid to the usability of managing that trust as a user.

                                                                                                                                        To put it another way: Firefox is not perfect, Ted is not perfect, no one is perfect. But the design takes away my ability to manage the risks myself and tries to force (or imply that you can have) 100% trust in Firefox (or browser company X).

                                                                                                                                        1. 7

                                                                                                                                          SHA256 (ca-tedunangst-com.crt) = 049673630a4a8d801a6c17ac727e015fbf951686cdd253d986e9e4d1a8375cba

                                                                                                                                          That’s posted on the home page. It’s not included in the flak post, following the principle that important information should only be maintained in one place. I’ve added a note and a link; that was an oversight.

                                                                                                                                          It’s the hash of the file, not some internal fingerprint, because I find that easier to verify with simpler tools. You don’t even need to decode the certificate to at least verify it’s the same file I say it is.
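
                                                                                                                                          For anyone who’d rather reproduce the check programmatically than trust a sha256 utility, a sketch using OpenSSL’s EVP API (it assumes the file was saved as ca-tedunangst-com.crt; build with -lcrypto):

                                                                                                                                            #include <stdio.h>
                                                                                                                                            #include <openssl/evp.h>

                                                                                                                                            int main(void) {
                                                                                                                                                FILE *f = fopen("ca-tedunangst-com.crt", "rb");
                                                                                                                                                if (!f) { perror("fopen"); return 1; }

                                                                                                                                                EVP_MD_CTX *ctx = EVP_MD_CTX_new();
                                                                                                                                                EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);

                                                                                                                                                unsigned char buf[4096];
                                                                                                                                                size_t n;
                                                                                                                                                while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
                                                                                                                                                    EVP_DigestUpdate(ctx, buf, n);
                                                                                                                                                fclose(f);

                                                                                                                                                unsigned char md[EVP_MAX_MD_SIZE];
                                                                                                                                                unsigned int len;
                                                                                                                                                EVP_DigestFinal_ex(ctx, md, &len);
                                                                                                                                                EVP_MD_CTX_free(ctx);

                                                                                                                                                /* print hex; compare against the fingerprint on the home page */
                                                                                                                                                for (unsigned int i = 0; i < len; i++)
                                                                                                                                                    printf("%02x", md[i]);
                                                                                                                                                putchar('\n');
                                                                                                                                                return 0;
                                                                                                                                            }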

                                                                                                                                          1. 3

                                                                                                                                            I’m all for browsers making it easier to manage your own trust roots. But realistically the end user is a lot more imperfect than Firefox most of the time; good defaults are by far the most important part of what the likes of Firefox need to be doing, and I’d actually consider WoSign/StartCom/… a success story - Firefox et al did the right thing, and sent a much stronger message than individuals acting alone ever would. Government surveillance is the kind of thing that requires collective action to counter - uncoordinated individual opposition doesn’t cut it.

                                                                                                                                            Heck, I’m probably one of the most paranoid 0.1% of users, but I never curated my root certificate list. There are only so many hours in the day, I have things to be doing, I’m not going to evaluate an individual CA for every website I go to. At best I’d use a list run by the EFF or someone, but really that someone might as well be Firefox.

                                                                                                                                            I don’t know what he’s proposing, but it’s hard to imagine what advantage it offers that he can’t get by using a CA-signed certificate. Installing the Ted CA doesn’t stop another CA from signing tedunangst.com. It does give Ted the authority to sign certificates for other websites, which I don’t want - 150 root CAs is pretty bad but 151 is still worse. If he has mechanisms for publishing fingerprints out of band, why not just do that with his site’s certificate? If he doesn’t trust the CAs, there are any number of mechanisms - HPKP, DANE,… - for increasing authentication while remaining compatible with the existing CA system, which, for all its flaws, is pretty effective. If he’s not willing to cooperate with the most effective, widely deployed security mechanism then screw him; I can live without his blog, it’s not worth the amount of time it would take me to figure out whether he’s just being awkward or actually wants to compromise my security. If he really wants to run a CA, he can go through the process to get it approved by Firefox; they’re far more capable of doing audits than I am.

                                                                                                                                            1. 1

                                                                                                                                              I agree with a lot of that, but still disagree on some things but I think I have made those points and don’t want to keep ranting on. :)

                                                                                                                                              I will however agree with this:

                                                                                                                                              If he has mechanisms for publishing fingerprints out of band, why not just do that with his site’s certificate?

                                                                                                                                              I did not add his CA to my list of Authorities, for the reasons you state (and Ted talks about this in the article). I only accepted the server certificate. Why is that OK when the CA isn’t? Because the day before, I was accessing his site in plain text. Have I been MITMed? Who cares? I could have been getting MITMed for years while going there.

                                                                                                                                      2. 1

                                                                                                                                        Yesterday there was a hole in the redirect to give some notice of the coming service disruption. I’m not sure how long I should leave it there, however, or what all should be excluded from https (this page, the cert, the home page?). I opted in favor of strictness for now.