1. 1

    The document doesn’t state where these clones come from. Are the Firefox numbers from a mozilla-central clone? Because if so it’s also including lots of code for Thunderbird and Lightning at least, and probably lots of legacy stuff like XULRunner, etc…. the list goes on.

    1. 31

      at this point most browsers are OS’s that run (and build) on other OS’s:

      • language runtime - multiple checks
      • graphics subsystem - check
      • networking - check
      • interaction with peripherals (sound, location, etc) - check
      • permissions - for users, pages, sites, and more.

      And more importantly, is there any (important to the writers) advantage to them becoming smaller? Security maybe?

      1. 10

        Browsers rarely link out to the system. FF/Chromium have their own PNG decoders, JPEG decoders, AV codecs, memory allocators or allocation abstraction layers, etc. etc.

        It bothers me that everything is now shipping as an Electron app. Do we really need every single app to have the footprint of a modern browser? Can we at least limit them to the footprint of Firefox 2?

        1. 9

          but if you limit it to the footprint of firefox2 then computers might be fast enough. (a problem)

          1. 2

            New computers are no longer faster than old computers at the same cost, though – Moore’s law ended in 2005 and consumer stuff has caught up with the lag. So, the only speed-up from replacement is from clearing out bloat, not from actual hardware improvements in processing speed.

            (Maybe secondary storage speed will have a big bump, if you’re moving from hard disk to SSD, but that only happens once.)

            1. 3

              Moore’s law ended in 2005 and consumer stuff has caught up with the lag. So, the only speed-up from replacement is from clearing out bloat, not from actual hardware improvements in processing speed.

              Are you claiming there have been no speedups due to better pipelining, out-of-order/speculative execution, larger caches, multicore, hyperthreading, and ASIC acceleration of common primitives? And the benchmarks magazines post showing newer stuff outperforming older stuff were all fabricated? I’d find those claims unbelievable.

              Also, every newer system I’ve had since 2005 was faster. I recently had to use an older backup. Much slower. Finally, performance isn’t the only thing to consider: the newer process nodes use less energy and have smaller chips.

              1. 2

                I’m slightly overstating the claim. Performance increases have dropped from exponential to incremental: gains that once came straightforwardly from increased circuit density now come from piecemeal optimization tricks that can only really be done once.

                Once we’ve picked all the low-hanging fruit (simple optimization tricks with major & general impact) we’ll need to start seriously milking performance out of multicore and other features that actually require the involvement of application developers. (Multicore doesn’t affect performance at all for single-threaded applications or fully-synchronous applications that happen to have multiple threads – in other words, everything an unschooled developer is prepared to write, unless they happen to be mostly into unix shell scripting or something.)

                Moore’s law isn’t all that matters, no. But, it matters a lot with regard to whether or not we can reasonably expect to defend practices like electron apps on the grounds that we can maintain current responsiveness while making everything take more cycles. The era where the same slow code can be guaranteed to run faster on next year’s machine without any effort on the part of developers is over.

                As a specific example: I doubt that even in ten years, a low-end desktop PC will be able to run today’s version of slack with reasonable performance. There is no discernible difference in its performance between my two primary machines (both low-end desktop PCs, one from 2011 and one from 2017). There isn’t a perpetually rising tide that makes all code more performant anymore, and the kind of bookkeeping that most web apps spend their cycles in doesn’t have specialized hardware accelerators the way matrix arithmetic does.

                1. 4

                  Performance increases have dropped from exponential to incremental: gains that once came straightforwardly from increased circuit density now come from piecemeal optimization tricks that can only really be done once.

                  I agree with that totally.

                  “Multicore doesn’t affect performance at all for single-threaded applications”

                  Although largely true, people often forget a way multicore can boost single-threaded performance: simply letting the single-threaded app have more time on a CPU core since other stuff is running on another. Some OS’s, esp. RTOS’s, let you control which cores apps run on specifically to utilize that. I’m not sure if desktop OS’s have good support for this right now, though. I haven’t tried it in a while.

                  “There isn’t a perpetually rising tide that makes all code more performant anymore, and the kind of bookkeeping that most web apps spend their cycles in doesn’t have specialized hardware accelerators the way matrix arithmetic does.”

                  Yeah, all the ideas I have for it are incremental. The best illustration of where the rest of the gains might come from is Cavium’s Octeon line. They have offloading engines for TCP/IP, compression, crypto, string ops, and so on. On the rendering side, Firefox is switching to GPUs, which will take time to fully utilize. On the JavaScript side, maybe JITs could have a small, dedicated core. So, there’s still room for speeding the Web up in hardware. Just not Moore’s law without developer effort, like you were saying.

        2. 9

          Although you partly covered it, I’d say “execution of programs” is good wording for JavaScript since it matches browser and OS usage. There are definitely advantages to them being smaller. A guy I knew even deleted a bunch of code out of his OS and Firefox to achieve that on top of a tiny backup image. Dude had a WinXP system full of working apps that fit on one CD-R.

          As far as secure browsers go, I’d start with designs from high-assurance security, bringing in mainstream components carefully. Some are already doing that. An older one inspired Chrome’s architecture. I have a list in this comment. I’ll also note that there were few of these because high-assurance security defaulted to just putting a browser in a dedicated partition that isolated it from other apps on top of security-focused kernels. One browser per domain of trust. Also common were partitioned network stacks and filesystems that limited the effect one partition’s use of them could have on others. QubesOS and GenodeOS are open-source software that support these, with QubesOS having great usability/polish and GenodeOS being architecturally closer to high-security designs.

          1. 6

            Are there simpler browsers optimised for displaying plain ol’ hyperlinked HTML documents that also support modern standards? I don’t really need 4 tiers of JIT and whatnot for web apps to go fast, since I don’t use them.

            1. 12

              I’ve always thought one could improve on a Dillo-like browser for that. I also thought compile-time programming might make various components in browsers optional, so you could actually tune it to the amount of code or attack surface you need. That would require lots of work for mainstream stuff, though. A project like Dillo might pull it off.

              1. 10
                1. 3

                  Oh yeah, I have that on a Raspberry Pi running RISC OS. It’s quite nice! I didn’t realise it runs on so many other platforms. Unfortunately it crashes only on my main machine; I will investigate. Thanks for reminding me that it exists.

                  1. 1

                    Fascinating; how had I never heard of this before?

                    Or maybe I had and just assumed it was a variant of suckless surf? https://surf.suckless.org/

                    Looks promising. I wonder how it fares on keyboard control in particular.

                    1. 1

                      Aw hell; they don’t even have TLS set up correctly on https://netsurf-browser.org

                      Does not exactly inspire confidence. Plus there appears to be no keyboard shortcut for switching tabs?

                      Neat idea; hope they get it into a usable state in the future.

                    2. 1

                      AFAIK, it doesn’t support “modern” non-standards.

                      But it doesn’t support Javascript either, so it’s way more secure than mainstream ones.

                    3. 7

                      No. Modern web standards are too complicated to implement in a simple manner.

                      1. 3

                        Either KHTML or Links is what you’d like. KHTML would probably be the smallest browser you could find with a working, modern CSS, JavaScript, and HTML5 engine. Links only does HTML <=4.0 (including everything implied by its <img> tag, but not CSS).

                        1. 2

                          I’m pretty sure KHTML was taken to a farm upstate years ago, and replaced with WebKit or Blink.

                          1. 6

                            It wasn’t “replaced”; Konqueror supports all KHTML-based backends, including WebKit, WebEngine (Chromium), and KHTML. KHTML still works relatively well for showing modern web pages according to HTML5 standards and fits OP’s description perfectly. Konqueror allows you to choose your browser engine per tab, and even switch on the fly, which I think is really nice, although this means keeping all the engines you’re currently using loaded in memory.

                            I wouldn’t say development is still very active, but it’s still supported in the KDE Frameworks: they still make sure that it builds, at least, along with the occasional bug fix. Saying that it was replaced is an overstatement, although most KDE distributions do ship other browsers by default, if any, and I’m pretty sure Falkon, which is basically an interface for WebEngine, is set to become KDE’s browser these days.

                        2. 2

                          A growing part of my browsing is now text-mode browsing. Maybe you could treat full graphical browsing as an exception and go to the minimum footprint most of the time…

                      2. 4

                        And more importantly, is there any (important to the writers) advantage to them becoming smaller? Security maybe?

                        user choice. rampant complexity has restricted your options to 3 rendering engines, if you want to function in the modern world.

                        1. 3

                          When reimplementing malloc and testing it out on several applications, I found out that Firefox (at the time; I don’t know if this is still true) had its own internal malloc. It was allocating a big chunk of memory at startup and then managing it itself.

                          Back then I thought this was a crazy idea for a browser, but in fact it follows exactly the idea of your comment!

                          1. 3

                            Firefox uses a fork of jemalloc by default.

                            1. 2

                              IIRC this was done somewhere between Firefox 3 and Firefox 4 and was a huge speed boost. I can’t find a source for that claim though.

                              Anyway, there are good reasons Firefox uses its own malloc.

                              Edit: apparently I’m bored and/or like archeology, so I traced back the introduction of jemalloc to this hg changeset. This changeset is present in the tree for Mozilla 1.9.1 but not Mozilla 1.8.0. That would seem to indicate that jemalloc landed in the 3.6 cycle, although I’m not totally sure because the changeset description indicates that the real history is in CVS.

                          2. 3

                            In my daily job, this week I’m working on patching a modern Javascript application to run on older browsers (IE10, IE9 and IE8+ GCF 12).

                            The hardest problems are due to the different implementation details of the same-origin policy.
                            The funniest problem has been that one of the frameworks used “native” as a variable name: when people speak about the good parts of Javascript, I know they don’t know what they are talking about.

                            BTW, if browser complexity addresses a real problem (instead of being a DARPA weapon to get control of foreign computers), that problem is the distribution of computation over long distances.

                            That problem was not addressed well enough by operating systems, despite some mild attempts, such as Microsoft’s CIFS.

                            This is partially a protocol issue, as NFS, SMB, and 9P were all designed with local networks in mind.

                            However, IMHO browser OSes are not the proper solution to the issue: they are designed for different goals, and they cannot abandon those goals without losing market share (unless they retain that share with weird marketing practices, as Microsoft did years ago with IE on Windows and Google is currently doing with Chrome on Android).

                            We need better protocols and better distributed operating systems.

                            Unfortunately it’s not easy to create them.
                            (Disclaimer: browsers as OS platforms and Javascript’s ubiquity are among the strongest reasons that make me spend countless nights hacking an OS.)

                          1. 1

                            I’m late to the party but this reminds me of Isaac Schlueter on “No”, which “stands for Node Optimized and is also gonna be what [he] says any time someone asks for a feature.”

                            1. 1

                              This is labeled security, but I don’t really understand how it’s related. Is there a security issue here that I’m missing? Or just an “ordinary” Unicode handling bug?

                              1. 11

                                I think that Javascript is part of the problem. It lacks a large standard library so people are out there reinventing the wheel. It would be nice if browsers created a better base for people to work with.

                                1. 2

                                  Isn’t that basically what JQuery is?

                                  1. 2

                                      Exactly. If a library became so universal as to nearly be the default, why not elevate the most important parts of it into a standard library that is preloaded in all browsers?

                                    Since JavaScript is sent over the wire with each invocation it cares more about “binary” size than the average language.

                                    1. 2

                                      FWIW this seems to have happened with some ideas from jQuery. See e.g. document.querySelectorAll.
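
                                        To make that concrete, here’s a trivial sketch (the ‘.menu-item’ selector is made up) of the jQuery idiom next to the built-in DOM API that absorbed it:

                                            // jQuery-style selection and class toggling (assumes jQuery is loaded)
                                            $('.menu-item').addClass('active');

                                            // Roughly the same thing with the now-standard DOM APIs
                                            document.querySelectorAll('.menu-item')
                                              .forEach(el => el.classList.add('active'));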

                                      1. 1

                                          What I think JS and HTTP in general need is fetching content by hash instead of by URL. That way, if everyone uses the same library it may as well be preloaded, because the browser will be able to tell that it already has the same resource without having to centralize files on a CDN.

                                        1. 3

                                          Cue IPFS

                                          1. 1

                                            I’ve had the same thought - turns out this is non-trivial to tack on to HTTP after the fact. See w3c/webappsec-subresource-integrity#22 and https://hillbrad.github.io/sri-addressable-caching/sri-addressable-caching.html (I haven’t read either of them in a while, but there you go).
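
                                              For reference, here’s a rough sketch of what SRI gives you from script today (the URL and digest below are placeholders): the hash verifies the bytes, but the resource is still addressed and cached by URL, which is exactly the gap those links discuss.

                                                  // Fetch with Subresource Integrity: the promise rejects if the digest
                                                  // doesn't match, but the URL (not the hash) remains the identifier and cache key.
                                                  const res = await fetch('https://cdn.example.com/lib.js', {
                                                    integrity: 'sha384-PLACEHOLDER_DIGEST',
                                                  });
                                                  const source = await res.text();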

                                      2. 1

                                        I do think that there’s space for a JS standard library

                                        The big difficulty is that front-end development is still looking for what “the wheel” is. And there are different paradigms depending on what kind of system you’re building and the resource constraints.

                                            To be honest, I think a lot of the glibness about web UIs ignores that many web UIs are miles more complex than what most native GUIs accomplish. Very few native apps have the variety involved in a facebook-style “stream of events” UI. And a lot of web UIs are these kinds of streams of events.

                                        1. 2

                                          There is a lot of room at the framework level for various different solutions. I was thinking of really basic utilities. Solutions to the “left-pad” problem. I think JavaScript didn’t have a function for properly checking if an object was an array until ECMAScript 5.1. It’s hard to build on top of shaky fundamentals.
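
                                              (For the record, the old workaround versus the built-in check looks roughly like this; the helper name is just for illustration:)

                                                  // Pre-ES5 idiom: tag-check, since `x instanceof Array` breaks across
                                                  // iframes/realms that have their own Array constructor.
                                                  function isArrayOldSchool(x) {
                                                    return Object.prototype.toString.call(x) === '[object Array]';
                                                  }

                                                  // The built-in that eventually arrived:
                                                  Array.isArray([1, 2, 3]); // true
                                                  Array.isArray('nope');    // false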

                                          A lack of history with modules and some notion of compilation (bundling) also hurts JavaScript. The tool chain exists now, but it has felt a bit uninviting in my limited experience.

                                          So we’ve ended up with a small standard library and primitive ways of combining scripts. It’s no wonder that things are so chaotic.

                                          1. 1

                                            FWIW, left-pad-like functionality has been in the standard library since ES2017: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/padStart

                                            Tiny stuff like that isn’t too hard to put in the standard library. But the stuff that frameworks do is too high-level, abstract, and opinionated.
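
                                                For example, the left-pad case itself now reduces to a one-liner:

                                                    // String.prototype.padStart, in the standard library since ES2017
                                                    '7'.padStart(3, '0');  // '007'
                                                    'abc'.padStart(6);     // '   abc' (pads with spaces by default)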

                                      1. 3

                                          Maybe @steveno or someone else can ELI5 to me why this is advantageous over traditional, platform-agnostic, and dependency-less symlinking in a bash script? Cf. my dotfiles and the install script.

                                        1. 3

                                          Salt’s declarative nature means that you’re mostly describing the end state of a system, not how to get there.

                                          So instead of saying “copy this stuff to this directory and then chmod” you say “I want this other directory to look like this”. Instead of saying “install these packages” you say “I want this to be installed”. You also get dependency management so if you (say) just want to install your SSH setup on a machine you can say to do that (and ignore your window manager conf).

                                          If your files are grouped well enough and organized enough you can apply targeted subsets of your setup on many machines based off of what you want. “I want to use FF on this machine so pull in that + all the dependencies on that that I need”. “Install everything but leave out the driver conf I need for this one specific machine”

                                            This means that if you update these scripts, you can re-run Salt and it will just run what needs to run to hit the target state! So you get recovery from partial setup, checking for divergences in setups, etc. for free! There are dry-run capabilities too, so you can easily see what would need to change.

                                          This is a wonderful way of keeping machines in sync

                                          1. 2

                                              Looking at my repository right now, there isn’t any advantage. You could do everything I’ve done with a bash script. The beauty of this setup for me, and I really should have stated this initially, is that I can have multiple machines all share this configuration really easily. For example, my plan is to buy a Raspberry Pi and set up an encrypted DNS server. All I need to do is install salt on the Pi and it gets all of this setup just like my NUC currently has. I can then use salt to target specific machines and have it set up a lot of this for me.

                                            1. 2

                                              The beauty of this setup for me, and I really should have stated this initially, is that I can have multiple machines all share this configuration really easily

                                              You can also do this with a shell script.

                                              All I need to do is install salt

                                              With shell scripts you don’t need to install anything.

                                              1. 3

                                                As I previously stated, given what’s currently in this repository, there isn’t anything here that you couldn’t do with a shell script. That’s missing the point though. Salt, or ansible, or chef, provide you with a way to manage complex setups on multiple systems. Salt specifically (because I’m not very familiar with ansible or chef) provides a lot of other convenient tools like salt-ssh or reactor as well.

                                                1. 2

                                                    I feel like your point is just that shell scripting is Turing complete. Ok. The interesting questions are about which approach is better/easier/faster/safer/more powerful.

                                                  1. 2

                                                    If you’re targeting different distributions of linux or different operating systems entirely, the complexity of a bash script will start to ramp up pretty quickly.

                                                    1. 2

                                                        I disagree; I use a shell script simply because I use a vast array of Unix operating systems, many of which don’t even support tools like salt, or simply do not have package management at all.

                                                      1. 1

                                                        I have a POSIX sh script that I use to manage my dotfiles. Instead of it trying to actually install system packages for me, I have a ./configctl check command that just checks if certain binaries are available in the environment. I’ve found that this approach hits the sweet spot since I still get a consistent environment across machines but I don’t need to do any hairy cross-distro stuff. And I get looped in to decide what’s right for the particular machine since I’m the one actually going and installing stuff.

                                                    2. 1

                                                      The beauty of this setup for me, and I really should have stated this initially, is that I can have multiple machines all share this configuration really easily.

                                                        Have to agree with @4ad on this one. I have to use remote machines where I don’t have sudo rights and/or that are often completely bare bones (e.g., not even git preinstalled). My goal, in essence, is a standardized, reproducible, platform-agnostic, dependency-less dotfile environment which I can install with as few commands as possible and use as fast as possible. I don’t see how adding such a dependency benefits me in this scenario. I’m not against Ansible-like dotfile systems, but, in my opinion, using such systems for this task seems like overkill. Happy to hear otherwise, though.

                                                  1. 3

                                                      Coming from npm, and knowing little about Go’s packaging or import system, it took me a significant amount of time reading this article to understand two properties npm/Node has that seem to help alleviate the problems Go is grappling with:

                                                    1. npm has a strong semantic versioning culture. Very strong. Packages that don’t respect semver (at least packages that aren’t rando code from someone inexperienced) are in the tens and easy to lock down. That helps a lot to guide upgrade decisions.
                                                    2. Far more importantly, Node’s import machinery doesn’t have any global notion of a package. Meaning, A and B can depend on different, incompatible versions of C and everything is fine because their dependence on C is really an implementation detail that’s hidden from the global system.
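
                                                      A rough sketch of how point 2 plays out on disk, with made-up packages a, b, and c:

                                                          // node_modules/
                                                          //   a/node_modules/c/   <- c@1.x, only resolvable from inside a
                                                          //   b/node_modules/c/   <- c@2.x, only resolvable from inside b
                                                          //
                                                          // require() walks up from the requiring file's own directory, so each
                                                          // package quietly gets its own copy and nothing needs global reconciliation.
                                                          const a = require('a'); // uses its nested c@1
                                                          const b = require('b'); // uses its nested c@2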

                                                      Of course there are valid criticisms of both of these, some that I’m sure I’m not aware of (the author of this article is clearly more of a problem-domain expert than I am), but they clearly solve a lot of problems too. The behavior of a solver in npm is a non-issue because there are no version conflicts to solve, so you just don’t need one in the first place.

                                                    I wonder if this has contributed to npm’s small modules culture too - if you have the possibility of version conflicts then the risk of such a conflict is directly, positively correlated with the number of packages you use. But if there’s no possibility of a conflict (ignoring peer dependencies, which are more uncommon and considered an anti-pattern), then there’s no cost to having more packages in the conflict risk sense.

                                                    1. 3

                                                        Go has/had vendoring, which is like your point 2.

                                                      1. 1

                                                        Why isn’t it used to prevent conflicts then?

                                                    1. 2

                                                      I want to go through the whole thing later, when I have time, but right off the bat I notice it doesn’t grant trademark licenses. Doesn’t that already automatically disqualify it from being an FSF-approved license? I’m thinking of the Firefox example, though maybe that was DFSG?

                                                      (That being said I’ve never had a problem with people protecting trademarks since they basically never have a real effect on the ability to use/modify/etc. the software. It might not fit the letter of the law in terms of the four freedoms but it fits the spirit.)

                                                      1. 2

                                                        This is the first I’ve heard of the FSF disqualifying licenses for not granting trademarks. Is that a recent decision (e.g. to reduce license proliferation)? I would have thought that’s a separate issue to copyright license approval, since so few FSF-approved licenses grant trademark licenses (MPL, MIT, BSD, …).

                                                        When you say “FSF-approved” are you referring specifically to GPL compatibility?

                                                        1. 1

                                                          Sorry, I was completely wrong! I was thinking of https://en.m.wikipedia.org/wiki/Mozilla_software_rebranded_by_Debian and thought the FSF was involved, but it was just Debian.

                                                          1. 3

                                                              And Debian never considered Firefox non-DFSG. Mozilla considered Debian to be misusing the trademark (because they alter the source and build from source instead of using upstream binaries) and asked that they stop using the trademarks. (For a while now, Debian has shipped officially-branded Mozilla products again.)

                                                            1. 3

                                                              Debian was kind of the initial objector in the saga, but they only objected to one specific thing, the Firefox logo. In 2004, they replaced the logo with a freely licensed one in the version they shipped, because Mozilla wouldn’t relicense the logo PNGs/SVGs under a free license. But the browser was still called Firefox. That was the reason for a ‘-dfsg’ suffix in the package’s version number. Those kinds of minor changes are common though. Debian even does it to some GNU packages, because they don’t consider the GNU Free Documentation License with invariant sections to be DFSG-free, so they strip out a few offending info files & tack on a -dfsg suffix.

                                                              You’re right that the name change came from Mozilla though, in 2006, when someone doing a review of the Firefox trademark seems to have objected to everything about Debian’s package: they didn’t like something with an alternate logo shipping under the “Firefox” trademark and for the first time they raised an objection to the patched source (which was patched for non-license reasons) shipping under that name. Which at that point pretty much required a rename, since even people who had thought the logo-copyright issue was petty/unimportant couldn’t accept a “no source patches allowed” condition in a free-software distribution.

                                                            2. 2

                                                                Yeah, “Iceweasel” was a Debian thing; IIRC Debian wanted to backport security fixes to stable releases, but avoid including any new features. Mozilla didn’t want their “brand” on the less-featureful versions (even though it was all their software…), so trademark shenanigans ensued.

                                                              The FSF do actually push their own Firefox rebrand called GNU Icecat (really imaginative naming all round!). It mostly seems to be about not “promoting” proprietary addons, etc. That doesn’t mean FSF don’t “approve” (in a technical/legal sense) the MPL as a Free Software copyright license, etc. It just means they might not advocate using certain software (Firefox), in favour of something else (Icecat).

                                                        1. 8

                                                          Pretty ironic that this is on YouTube.

                                                            1. 4

                                                              See also https://d.tube/, hosted on IPFS.

                                                              1. 1

                                                                Hosted on github… ;)

                                                            1. 1

                                                              This looks seriously interesting. My favorite part is the name, because it’s what I say whenever the code doesn’t work. I.e., every time I look at the code.

                                                              1. 12

                                                                When people tell me to stop using the only cryptosystem in existence that has ever - per the Snowden leaks - successfully resisted the attentions of the NSA, I get suspicious, even hostile. It’s triply frustrating when, at the end of the linked rant, they actually recognize that PGP isn’t the problem:

                                                                It also bears noting that many of the issues above could, in principle at least, be addressed within the confines of the OpenPGP format. Indeed, if you view ‘PGP’ to mean nothing more than the OpenPGP transport, a lot of the above seems easy to fix — with the exception of forward secrecy, which really does seem hard to add without some serious hacks. But in practice, this is rarely all that people mean when they implement ‘PGP’.

                                                                There is a lot wrong with the GPG implementation and a lot more wrong with how mail clients integrate it. Why would someone who recognises that PGP is a matter of identity for many of its users go out of their way to express their very genuine criticisms as an attack on PGP? If half the effort that went into pushing Signal was put into a good implementation of OpenPGP following cryptographic best practices (which GPG is painfully unwilling to be), we’d have something that would make everyone better off. Instead these people make it weirdly specific about Signal, forcing me to choose between “PGP” and a partially-closed-source centralised system, a choice that’s only ever going to go one way.

                                                                1. 9

                                                                  I am deeply concerned about the push towards Signal. I am not a cryptographer, so all I can do is trust other people that the crypto is sound, but as we all know, the problems with crypto systems are rarely in the crypto layers.

                                                                  On one hand we know that PGP works, on the other hand we have had two game over vulnerabilities in Signal THIS WEEK. And the last Signal problem was very similar to the one in “not-really-PGP” in that the Signal app passed untrusted HTML to the browser engine.

                                                                    If I were a government trying to subvert secure communications, investing in Signal and tarnishing PGP is what I would try to do. What better strategy than to push everyone towards closed systems where you can’t even see the binaries, and that are not under the user’s control. The exact same devices with GPS and under constant surveillance.

                                                                    My mobile phone might have much better security mechanisms in theory, but I will never know for sure because neither I, nor anyone else, can really check. In the meantime we know for sure what a privacy disaster these mobile phones are. We also know for sure, from the various leaks, that governments implant malware on mobile devices, and we know that both manufacturers and carriers can install software, or updates, on devices without user consent.

                                                                    Whatever the PGP replacement might be, moving to closed systems that are completely unauditable and not under the user’s control is not the solution. I am not surprised that some people advocate for this option. What I find totally insane is that a good majority of the tech world finds this position sensible. Just find any Hacker News thread and you will see that any criticism of Signal is downvoted to oblivion, while the voices of “experts” preach PGP hysteria.

                                                                    PGP will never be used by ordinary people. It’s too clunky for that. But it’s used by some people very successfully, and if you try to push this small but very important group of people to move towards your “solution”, I can only suspect foul play. Signal does not compete with PGP. It’s a phone chat app. As Signal does not compete with PGP, why do you have to spend this insane amount of effort to convince an insignificant number of people to drop PGP for Signal?

                                                                  1. 4

                                                                      I can’t for the life of me imagine why a walled-garden service funded by a CIA covert-psyops agency would want to push people away from open standards and onto their walled-garden service.

                                                                      Don’t get me wrong, Signal does a lot of the right things, but a lot of the claims made about it imply it’s as open as PGP, which it isn’t.

                                                                    1. 2

                                                                      What makes Signal a closed system?

                                                                      https://github.com/signalapp

                                                                      1. 12

                                                                          Not Signal; iOS and Android, and all the secret operating systems that run underneath them.

                                                                          As for Signal itself, moxie forced F-Droid to take down Signal because he didn’t want other people to compile Signal. He said he wanted people only to use his binaries, which, even if you are OK with it in principle, on Android mandates the use of the Google Play Store. If this is not a dick move, I don’t know what is.

                                                                        1. 3

                                                                          I’m with you on Android and especially iOS being problematic. That being said, Signal has been available without Google Play Services for a while now. See also the download page; I couldn’t find it linked anywhere on the site but it is there.

                                                                          However, we investigated this for PRISM Break, and it turns out that there’s a single Google binary embedded in the APK I just linked to. Which is unfortunate. See this GitHub comment.

                                                                          1. 2

                                                                            because he didn’t want other people to compile Signal. He said he wanted people only to use his binaries

                                                                            Ehm… he chose the wrong license in this case.

                                                                      2. 4

                                                                          As I understand it, the case against PGP is not with PGP in and of itself (the cryptography is good), but with the ecosystem. That is, the toolchain in which one uses it. Because it is advocated for use in email, and securing email, it is argued, is nigh on impossible, it is irresponsible to recommend using PGP-encrypted email for general consumption, especially for journalists.

                                                                        That is, while it is possible to use PGP via email effectively, it is incredibly difficult and error-prone. These are not qualities one wants in a secure system and thus, it should be avoided.

                                                                        1. 4

                                                                            But the cryptography isn’t good. His case in the blog post intentionally sets aside all of the crypto badness. Example: the standard doesn’t allow any hash function other than SHA-1, which has been proven broken. The protocol itself disallows flexibility here to avoid ambiguity, and that means there is no way to change it significantly without breaking compatibility.

                                                                          And so far, it seems, people wanted compatibility (or switched to something else, like Signal)

                                                                        2. 4

                                                                          Until this better implementation appears, an abstract recommendation for PGP is a concrete recommendation for GPG.

                                                                          Imagine if half the effort spent saying PGP is just fine went into making PGP just fine.

                                                                          1. 2

                                                                            I guess that’s an invitation to push https://autocrypt.org/

                                                                          2. 3

                                                                            When people tell me to stop using the only cryptosystem in existence that has ever - per the Snowden leaks - successfully resisted the attentions of the NSA, I get suspicious, even hostile.

                                                                            Without wanting to sound rude, this is discussed in the article:

                                                                            The fact of the matter is that OpenPGP is not really a cryptography project. That is, it’s not held together by cryptography. It’s held together by backwards-compatibility and (increasingly) a kind of an obsession with the idea of PGP as an end in and of itself, rather than as a means to actually make end-users more secure.

                                                                            OpenPGP might have resisted the NSA, but that’s not a unique property. Every modern encryption tool or standard has to do that or it is considered broken.

                                                                            I think most people, unless they are heavily involved in security research, don’t know how encryption/auth/integrity protection are layered. There are a lot of layers in what people just want to call “encryption”. OpenPGP uses the same standard crypto building blocks as everything else, and unfortunately putting those lower-level primitives together is fiendishly difficult. Life also went on since OpenPGP was created, meaning that those building blocks, and how to put them together, have changed in the last few decades; cryptographers learned a lot.

                                                                            One of the most important things that cryptographers learned is that the entire ecosystem / the system as a whole counts. Even Snowden was talking about this when he said that the NSA just attacks the endpoints, where most of the attack surface is. So while the cryptography bits in the core of the OpenPGP standard are safe, if dated, that’s not the point. Reasonable people can’t really use PGP safely because we would have to have a library that implements the dated OpenPGP standard in a modern way, clients that interface with that modern library in a safe and thought-through way, and users that know enough about the system to satisfy its safety requirements (which are large for OpenPGP).

                                                                            Part of that is attitude: most of the existing projects implementing the standard just don’t seem to take a security-first stance. Who is really looking towards providing a secure overall experience to users under OpenPGP? Certainly not the projects bickering over where to attribute blame.

                                                                            I think people kept contrasting this with Signal because Signal gets a lot of things right. The protocol is modern and it’s not impossibly demanding on users (ratcheting key rotation, anyone?), and there is no security blame game between Signal the desktop app vs. Signal the mobile app vs. the protocol when a security vulnerability happens; OWS just fixes it with little drama. Of course Signal-the-app has downsides too, like the centralization, but that seems like a reasonable choice. I’d rather have a clean protocol operating through a central server that most people can use than an unusable (from the POV of most users) standard/protocol. We’re not there yet where we can have all of decentralization, security, and ease of use.

                                                                            1. 2

                                                                              OpenPGP might have resisted the NSA, but that’s not a unique property. Every modern encryption tool or standard has to do that or it is considered broken.

                                                                              One assumes the NSA has backdoors in iOS, Google Play Services, and the binary builds of Signal (and any other major closed-source crypto tool, at least those distributed from the US) - there’s no countermeasure and virtually no downside, so why wouldn’t they?

                                                                              there is no security blame game between Signal the desktop app vs. Signal the mobile app vs. the protocol when a security vulnerability happens; OWS just fixes it with little drama.

                                                                              Not really the response I’ve seen to their recent desktop-only vulnerability, though I do agree with you in principle.

                                                                              1. 3

                                                                                Signal Android has been reproducible for over two years now. What I don’t know is whether anyone has independently verified that it can be reproduced. I also don’t know whether the “remaining work” in that post was ever addressed.

                                                                                1. 2

                                                                                  The process of verifying a build can be done through a Docker image containing an Android build environment that we’ve published.

                                                                                  Doesn’t such a process assume trust in whoever created the image (and in whoever created each of the layers it was based on)?

                                                                                  A genuine question, as I see the convenience of Docker and how it could lead to more verifications, but on the other hand it creates a single point of failure that is easier to attack.

                                                                                  1. 1

                                                                                    That question of trust is the reason why, if you’re forced to use Docker, you should build every layer yourself from the most trustworthy sources. It isn’t even hard.

                                                                            2. 1

                                                                              the only cryptosystem in existence that has ever - per the Snowden leaks - successfully resisted the attentions of the NSA

                                                                              I’m pretty ignorant on this matter, but do you have any link to share?

                                                                              There is a lot wrong with the GPG implementation

                                                                              Actually, I’d like to read the opinion of GPG developers here, too.

                                                                              Everyone makes mistakes, but I’m pretty curious about the technical allegations: it seems like they did not consider the issue to be something to fix in their own code.

                                                                              There might be pretty good security reasons for that.

                                                                              1. 3

                                                                                To start with, you can’t trust the closed-source providers, since the NSA and GCHQ are throwing $200+ million at both finding 0-days and paying them to put backdoors in. Covered here. From there, you have to assess open-source solutions. There’s a lot of ways to do that. However, the NSA sort of did it for us in slides where GPG and Truecrypt were the worst things for them to run into. Snowden said GPG works, too. He’d know, given he had access to everything they had that worked and didn’t. He used GPG and Truecrypt. NSA had to either ignore those people or forward them to TAO for a targeted attack on the browser, OS, hardware, etc. The targeted-attack group only has so much personnel and time. So, this is a huge increase in security.

                                                                                I always say that what stops the NSA should be good enough to stop the majority of black hats. So, keep using and improving what is a known-good approach. I further limit risk by just GPG-encrypting text or zip files that I send/receive over untrusted transports, using strong algorithms. I exchange the keys manually. That means I’m down to trusting the implementation of just a few commands. Securing GPG in my use case would mean stripping out anything I don’t need (most of GPG) followed by hardening the remaining code manually or through automated means. It’s a much smaller problem than clean-slate, GUI-using, encrypted sharing of various media. Zip can encode anything. Give the files boring names, too. The untrusted email provider is Swiss, in case that buys anything against any type of attacker.

                                                                                As far as the leaks go, I had a really hard time getting you the NSA slides. Searching with specific terms in either DuckDuckGo or Google used to take me right to them. They don’t anymore. I’ve had to fight with them, narrowing terms down with quotes, trying to find any Snowden slides, much less the good ones. I’m getting Naked Security, FramaSoft, pharma spam, etc. even on pages 2 and 3, but not Snowden slides past a few recurring ones. Even mandating the Guardian in the terms often didn’t produce more than one Guardian link. Really weird that both engines’ algorithms are suppressing all the important stuff despite really-focused searches. Although I’m not going conspiracy hat yet, the relative inaccuracy of Google’s results compared to about any other search I’ve done over the past year, for both historical and current material, is a bit worrying. Usually excellent accuracy.

                                                                                NSA Facts is still up if you want the big picture about their spying activities. Ok, after spending an hour, I’m going to have to settle for giving you this presentation calling TAILS or Truecrypt a catastrophic loss of intelligence. TAILS was probably temporary, but the TrueCrypt derivatives are worth investing effort in. Anyone else have a link to the GPG slide(s)? @4ad? I’m going to try to dig it all up out of old browser or Schneier conversations in the near future. Need at least those slides so people know what was NSA-proof at the time.

                                                                                1. 2

                                                                                  Why would TAILS be temporary? If anything this era of cheap devices makes it more practical than ever.

                                                                                  1. 3

                                                                                    It was secure at the time since neither mass collection nor TAO teams could hack it. Hacking it requires one or more vulnerabilities in the software it runs. The TAILS software includes complex software such as Linux and a browser with a history of vulnerabilities. We should assume that was temporary and/or would disappear if usage went up enough to budget more attacks its way.

                                                                                    1. 2

                                                                                      I’d still trust it more than TrueCrypt just due to being open-source.

                                                                                      What would it take to make an adequate replacement for TAILS? I’m guessing some kind of unikernel? Are there any efforts in that direction?

                                                                                      1. 1

                                                                                        Well, you have to look at the various methods of attack to assess this:

                                                                                        1. Mass surveillance attempting to read traffic through protocol weaknesses with or without a MITM. They keep finding these in Tor.

                                                                                        2. Attacks on the implementation of Tor, the browser, or other apps. These are plentiful since they’re mostly written in a non-memory-safe way. Also, having no covert-channel analysis on components processing secrets means there are probably plenty of side channels. There are also increasingly new attacks on hardware, with a network-oriented one even being published.

                                                                                        3. Attacks on the repo or otherwise MITMing the binaries. I don’t think most people are checking for that. The few that do would make attackers cautious about being discovered. A deniable way to see who is who might be a bitflip or two that would cause the security check to fail. Put it in random, non-critical spots to make it look like an accident during transport. Whoever re-downloads doesn’t get hit with the actual attack.

                                                                                        So, the OS and apps have to be secure, with some containment mechanisms for any failures. The protocol has to work. These must be checked against any subversions in the repo or during transport. All this together in a LiveCD. I think it’s doable, minus the anonymity protocol working, which I don’t trust. So, I’ve usually recommended dedicated computers bought with cash (esp. netbooks), WiFi, cantennas, getting used to the human patterns in those areas, and spots with minimal camera coverage. You can add Tor on top of it, but the NSA focuses on that traffic. They probably don’t pay attention to the average person on WiFi using generic sites over HTTPS.

                                                                                        1. 1

                                                                                          Sure. My question was more: does a live CD project with that kind of aim exist? @josuah mentioned heads which at least avoids the regression of bringing in systemd, but doesn’t really improve over classic tails in terms of not relying on linux or a browser.

                                                                                          1. 2

                                                                                            An old one named Anonym.OS was an OpenBSD-based Live CD. That would’ve been better on the code-injection front at least. I don’t know of any current offerings. I just assume they’ll be compromised.

                                                                                        2. 1

                                                                                          I think that is the reason why https://heads.dyne.org/ was made: replacing the complex software stack with a simpler one, with the aim of avoiding security risks.

                                                                                          1. 1

                                                                                            Hmm. That’s a small start, but it’s still running Linux (and with a non-mainstream patchset, even), so I don’t think it answers the core criticism.

                                                                                    2. 2

                                                                                      Thanks for this great answer.

                                                                                      Really weird that both engines’ algorithms are suppressing all the important stuff despite really-focused searches.

                                                                                      If you can share a few of your search terms, I guess a few friends would find them pretty interesting for their research.

                                                                                      For sure this teaches us a valuable lesson. The web is not a reliable medium for free speech.

                                                                                        From now on, I will download interesting documents about such topics from the internet and donate them (along with other, more neutral DVDs) to small public libraries around Europe.

                                                                                      I guess that slowly, people will go back to librarians if search engines don’t search carefully enough anymore.

                                                                                      1. 2

                                                                                          It was variations, with and without quotes, on terms I saw in the early reports. They included GPG, PGP, Truecrypt, Guard, Documents, Leaked, Snowden, and catastrophic. I at least found that one report that mentions it in combination with other things. I also found, but didn't post, a PGP intercept that was highly classified but said they couldn't decrypt it. Finally, Snowden kept maintaining that good encryption worked, with GPG being one tool he used personally.

                                                                                          So, we have what we need to know. From there, we just need to make the programs we know work more usable and memory-safe.

                                                                                  1. 5

                                                                                    For years my #1 reason for logging into Wikipedia was so that I’d have this feature active while it was still in beta. It’s one of those tiny things you never want to give up once you’re used to it, and I’m so excited it’s shipping to everyone. Congrats to the team <3

                                                                                    1. 2

                                                                                      I never had any trouble with the beta, which seemed to just default to the first paragraph of text & first image. (But, also, I remember using the preview feature before this article makes it out to have begun, back in 2007 or 2008 – so maybe what I’m using is a completely different preview feature that the people working on this were unaware of?)

                                                                                    1. 5

                                                                                      Thanks for the update. #HugOps to everyone involved <3

                                                                                      1. 5

                                                                                        Network (switch) failures are particularly frustrating to deal with because we all work over IRC and self-host both a server and our bouncer. Since we can't log in, it's difficult to chat with each other: I was able to connect to my IRC client over our out-of-band network, but the connection between it and our bouncer was down.

                                                                                        Everyone scrambled for a bit to get onto Freenode without using the bouncer; once that was done and we'd figured out everyone's temporary nick, we set about restoring service.

                                                                                        1. 2

                                                                                          Oof. That sounds awful. Glad you got it sorted.

                                                                                      1. 6

                                                                                        Can someone explain a reason you’d want to see the descendants of a commit?

                                                                                        1. 7

                                                                                          Following history. Code archeology.

                                                                                          Many people use the VCS history as a form of documentation for the project.

                                                                                          1. 4

                                                                                            But history is the past… you can always see the past in git…

                                                                                            1. 16

                                                                                              Suppose I’ve isolated an issue to this bug fix commit. In what version of gRPC did that commit release?

                                                                                              GitHub tells you it's on the v1.8.x branch, so if you head over to the v1.8.x branch, you can see it landed after v1.8.5, so it must have been released in v1.8.6. Easy enough, right?

                                                                                              Well that’s not the whole story. That commit was also cherry-picked over to the v1.9.x branch here, because v1.9.x was branched before the bug was fixed.

                                                                                              Besides, that was silly to begin with. Why did you have to go to the v1.8.x branch and then manually search for it? Why couldn't it just tell you when it got merged? That would have been nice.

                                                                                              Many projects maintain many release branches. Some just backport bug fixes to older releases, some have more significant changes. Sometimes a bug fix only applies to a range of older releases. Do you want to track all that with no notion of descendants? It’s not fun.

                                                                                              Even just looking at pull requests, it would be nice to see whether a pull request eventually got merged in or not, what release it got merged into, and so on. That’s all history too.

                                                                                              So no, you can’t always see the past in git. You can only see the direct lineage of your current branch.

                                                                                              1. 3

                                                                                                I used to find this hella handy at Fog Creek, especially for quickly answering which bug fixes were in which custom branch for some particular client. We actually made a little GUI out of it; it was so helpful.

                                                                                                (Interestingly, while Kiln supports that in Git too, it at least used to do so by cheating: it looked up the Mercurial SHAs in the Harmony conversion table, asked Mercurial for the descendants, and then converted those commits back to their Git equivalents. Because Harmony is now turned off, I assume either they’ve changed how this works, or no longer ship the Electric DAG, but it was cool at the time.)

                                                                                                1. 2

                                                                                                  Why couldn’t it just tell you when it got merged?

                                                                                                  I don’t know why GitHub doesn’t, but Git can:

                                                                                                  $ git tag --contains b15024d6a1537c69fc446601559a89dc8b84cf6f
                                                                                                  v1.8.6
                                                                                                  

                                                                                                  That doesn’t address the cherry-picking case though. I’m not aware of any built-in tooling for that. Generally Git avoids relying on metadata for things that can be inferred from the data (with file renames being the poster child of the principle), so I’m not surprised that cherry-picks like this aren’t tracked directly. Theoretically they could be inferred (i.e. it’s “just” a matter of someone building the tooling), but I’m not sure that’s doable with a practical amount of computation. (There are other operations Git elects not to try to be fast at (the poster child being blame), but many of them still end up not being impractically slow to use.)

                                                                                                  1. 1

                                                                                                    Does Fossil track cherry-picks like this though? So that they’d show up as descendants? In git the cherry-picked commit technically has nothing to do with the original, but maybe Fossil does this better. (It’s always bothered me that git doesn’t track stuff like this - Mercurial has Changeset Evolution which has always looked suuuuper nice to me.)

                                                                                                    1. 4

                                                                                                      According to the fossil merge docs, cherry pick is just a flag on merge, so I imagine it does. I was just highlighting the utility of viewing commit descendants.
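
                                                                                                      If I'm reading those docs right, the usage is roughly this (the check-in id is made up):

                                                                                                      # pull a single check-in's change into the current checkout, then commit it
                                                                                                      $ fossil merge --cherrypick abc123ef
                                                                                                      $ fossil commit -m "Backport fix abc123ef"
                                                                                                      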

                                                                                                      1. 7

                                                                                                        Mercurial also tracks grafts (what git calls a cherry-pick) in the commit metadata.

                                                                                                        1. 5

                                                                                                          This is actually illustrative of the main reason I dislike mercurial. In git there are a gajillion low level commands to manipulate commits. But that’s all it is, commits. Give me a desired end state, and I can get there one way or another. But with mercurial there’s all this different stuff, and you need python plugins and config files for python plugins in order to do what you need to do. I feel like git rewards me for understanding the system and mercurial rewards me for understanding the plugin ecosystem.

                                                                                                          Maybe I’m off base, but “you need this plugin” has always turned me away from tools. To me it sounds like “this tool isn’t flexible enough to do what you want to do.”

                                                                                                          1. 7

                                                                                                            Huh? What did I say that needs a plugin? The graft metadata is part of the commit, in the so-called “extras” field.

                                                                                                            I can find all the commits that are origins for grafts in the repo with the following command:

                                                                                                            $ hg log -r "origin()"
                                                                                                            

                                                                                                            And all the commits that are destinations for grafts:

                                                                                                            $ hg log -r "destination()"
                                                                                                            

                                                                                                            This uses a core mercurial feature called “revsets” to expose this normally hidden metadata to the user.

                                                                                                            1. 2

                                                                                                              Right but how much manipulation of grafts can you do without a plugin? I assume you can do all the basic things like create them, list them, but what if I wanted to restructure them in some way? Can you do arbitrary restructuring without plugins?

                                                                                                              Like this “extras” field, how much stuff goes in that? And how much of it do I have to know about if I want to restructure my repository without breaking it? Is it enough that I need a plugin to make sure I don’t break anything?

                                                                                                              In fairness, I haven’t looked at mercurial much since 2015. Back then the answer was either “we don’t rewrite history” or “you can do that with this plugin.”

                                                                                                              But I want to rewrite history. I want to mix and blend stuff I have in my local repo however I want before I ultimately squash away the mess I’ve created into the commit I’ll actually push. That’s crazy useful to me. Apparently you can do it with mercurial—with an extension called queues.

                                                                                                              I’m okay with limited behavior on the upstream server, that’s fine. I just want to treat my working copy as my working copy and not a perfect clone of the central authority. For example, I don’t mind using svn at all, because with git-svn I can do all the stuff I would normally do and push it up to svn when I’m done. No problem.

                                                                                                              And I admit that I’m not exactly the common case. Which is why I doubt mercurial will ever support me: mercurial is a version control system, not a repository editor.

                                                                                                              1. 12

                                                                                                                For the past several years, as well as in the current release, you still have to enable an extension (or up to two) to edit history. To get the equivalent of Git, you would need the following two lines in ~/.hgrc or %APPDATA%\Mercurial.ini:

                                                                                                                [extensions]
                                                                                                                rebase=
                                                                                                                histedit=
                                                                                                                

                                                                                                                These correspond to turning on rebase and rebase -i, respectively. But that’s it; nothing to install, just two features to enable. I believe this was the same back in 2015, but I’d have to double-check; certainly these two extensions are all you’ve wanted for a long time, and have shipped with Hg for a long time.
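
                                                                                                                With those enabled, day-to-day use looks roughly like this (revision numbers made up):

                                                                                                                # move revision 42 and its descendants onto the default branch
                                                                                                                $ hg rebase -s 42 -d default
                                                                                                                
                                                                                                                # interactively reorder/fold/drop history from revision 40 up to the working parent
                                                                                                                $ hg histedit 40
                                                                                                                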

                                                                                                                That said, that’s genuinely, truly it. Grafts aren’t something different from other commits; they’re just commits with some data. Git actually does the same thing, IIRC, and also stores them in the extra fields of a commit. I’m not near a computer, but git show —raw <commit sha> should show a field called something like Cherry-Pick for a cherry-picked commit, for example, and will also explicitly expose and show you the author versus committer in its raw form. That’s the same thing going on here in Mercurial.

                                                                                                                And having taught people Git since 2008, oh boy am I glad those two extra settings are required. I have as recently as two months ago had to ask everyone to please let me sit in silence while I tried to undo the result of someone new to Git doing a rebase that picked up some commits twice and others that shouldn’t have gone out, and then pushing to production. In Mercurial, the default commands do not allow you to shoot your foot off; that situation couldn’t have happened. And for experienced users, who I’ve noticed tend to already have elaborate .gitconfigs anyway, asking you to add two lines to a config file before using the danger tools really oughtn’t be that onerous. (And I know you’re up for that, because you mention using git-svn later in this thread, which is definitely not something that Just Works in two seconds with your average Subversion repository.)

                                                                                                                It’s fine if you want to rewrite history. Mercurial does and has let you do that for a very long time. It does not let you do so without adding up to three lines to one configuration file one time. You and I can disagree on whether it should require you to do that, but the idea that these three lines are somehow The Reason Not to Use Mercurial has always struck me as genuinely bizarre.

                                                                                                                1. 5

                                                                                                                  Right but how much manipulation of grafts can you do without a plugin?

                                                                                                                  A graft isn’t a separate type of object in Mercurial. It’s a built-in command (not a extension or plugin), which creates a regular commit annotated with some meta-data recording whence it came from. After the commit was created it can be dealt with like any other commit.

                                                                                                                  And how much of it do I have to know about if I want to restructure my repository without breaking it?

                                                                                                                  Nothing. Mercurial isn’t Git. You don’t need to know the implementation inside-out before you’re able to use it effectively. Should you need to accomplish low level tasks you can use Mercurial’s API, which like in most properly designed software hides implementation details.

                                                                                                                  But I want to rewrite history. (…) Apparently you can do it with mercurial—with an extension called queues.

                                                                                                                  The Mercurial Queues extension is for managing patches on top of a repository. For history editing you should use the histedit and rebase extensions instead.

                                                                                                                  I just want to treat my working copy as my working copy and not a perfect clone of the central authority.

                                                                                                                  Mercurial is a DVCS. It lets you do exactly that. Have you run into any issues where Mercurial prevented you from doing things to your local copy?

                                                                                                                  For example, I don’t mind using svn at all, because with git-svn I can do all the stuff I would normally do and push it up to svn when I’m done.

                                                                                                                  Mercurial also has several ways to interact with Subversion repositories.

                                                                                                                  mercurial is a version control system, not a repository editor.

                                                                                                                  Indeed it is. And the former is what most users (maybe not you) actually want. Not the latter.

                                                                                                                  1. 2

                                                                                                                    Mercurial’s “Phases” and “Changeset Evolution” may be of interest to you, then.

                                                                                                                    1. 6

                                                                                                                      It’s also worth noting that mercurial’s extension system is there for advanced, built-in features like history editing. Out of the box, git exposes rebase, which is fine, but that does expose a huge potential footgun to an inexperienced user.

                                                                                                                      The Mercurial developers decided to make advanced features like history editing opt-in. However, these features are still part of core mercurial and are developed and tested as such. This includes commands like “hg rebase” and “hg histedit” (which is similar to git’s “rebase -i”).

                                                                                                                      The expectation is that you will want to customize mercurial a bit for your needs and desires. And as a tool that manages text files, it expects you to be OK with managing text files for configuration and customization. You might find needing to customize a tool you use every day to get the most out of it onerous, but the reward mercurial gets with this approach is that new and inexperienced users avoid confusion and breakage from possibly dangerous operations like history editing.

                                                                                                                      Some experimental features (like changeset evolution, narrow clones and sparse clones) are only available as externally developed extensions. Some, like changeset evolution, are pretty commonly used; however, I think the mercurial devs have done a good job recently of trying to upstream as much useful stuff from the ecosystem into core mercurial itself. Changeset evolution is being integrated right now and will be a built-in feature in a few releases (hopefully).

                                                                                                1. 25

                                                                                                  Why do people overthink this? Versions are for human consumption. If I use x.y.z and x.y.z+1 comes out, my expectation is that I should be able to upgrade (if I need to) with minimum friction. Sometimes (rarely!) this is not the case. Tough life.

                                                                                                  Similarly I expect I should be able to upgrade to x.y+1.zz, but in this case I expect there might be more work involved, more testing, etc. In general, it should still work though. If not, tough life.

                                                                                                  I fully expect moving to x+1.yy.zz would be painful. Sometimes it isn’t though. Life is great.

                                                                                                  What’s the problem? The version communicates information to me. Like every other communication, sometimes it’s not perfectly accurate. So what? it’s news, not math.

                                                                                                  It seems that people who complain about this are the people who want to upgrade without testing. That is insane. You always need to test. You can’t trust that it will work because some guy who doesn’t know how you use the software promised you that it will work. No, he promised it should work. There are no guarantees. You always need to test.

                                                                                                  1. 5

                                                                                                    You are missing one final thing. I expect to be able to install X and X+1 side by side in whatever system. So many things seem to miss that; at least vgo gets it right.

                                                                                                    1. 2

                                                                                                      Yes, unfortunately almost every package manager gets this wrong.

                                                                                                      1. 3

                                                                                                        In a lot of cases it's not the package manager's fault; it's the way the language does module loading. npm gets this right, but the only reason it's able to is that Node's module loading algorithm supports (was designed to support?) this use case.
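
                                                                                                        Roughly: require() walks up the directory tree to the nearest node_modules, so two versions can simply coexist on disk (package names made up):

                                                                                                        node_modules/
                                                                                                          left-pad/            # 1.3.0, what my own code resolves
                                                                                                          some-lib/
                                                                                                            node_modules/
                                                                                                              left-pad/        # 1.1.0, what some-lib resolves
                                                                                                        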

                                                                                                        1. 1

                                                                                                          They were written at the same time by the same person, so yes, “designed” is appropriate.

                                                                                                          1. 1

                                                                                                            Do you mean require() and npm? I don't think that's right. I'm just assuming require() was written by Ryan Dahl at the very beginning of Node (the docs say 0.1.13). And npm was by Isaac Schlueter, quite a while after that. People used to share Node modules on the Node GitHub wiki, and IIRC (though I wasn't there, I just know from reading) npm was one of several package managers at the time.

                                                                                                            1. 2

                                                                                                              I was told that Isaac implemented both; it’s possible that I was misinformed, or maybe it was re-implemented by him.

                                                                                                        2. 1

                                                                                                          Package managers that allow side-by-side global installs that I can think of:

                                                                                                          • gem / bundler
                                                                                                          • maven
                                                                                                          • homebrew
                                                                                                          • nix

                                                                                                          They all require special tools to choose which version you want to use though. Are there any others? Are there any without that requirement?
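
                                                                                                          For example, with RubyGems both versions live in the same gem path and you pick one at invocation time; a rough sketch (version numbers made up):

                                                                                                          $ gem install rake -v 12.0.0
                                                                                                          $ gem install rake -v 13.0.0
                                                                                                          $ gem list rake              # rake (13.0.0, 12.0.0)
                                                                                                          $ rake _12.0.0_ --version    # the underscore argument selects that version's executable
                                                                                                          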

                                                                                                      2. 1

                                                                                                        After accepting that every upstream change can break our code, the next step is to accept that the additional "level of probability" communicated by those dotted numbers is useless: it doesn't affect your behavior as a maintainer; you still have to a) read and understand what changed and b) update and test your system. Which means that a single number would do just fine.

                                                                                                        I can speculate that semver became popular because of people who want to be trendy by “living on the bleeding edge” but still want some escape hatch that would “allow” them to not really read and understand all change logs of all those dozens of dependencies changing daily. So they like semver because if anything breaks after a minor version change, they can say it’s not their fault.

                                                                                                        1. 1

                                                                                                          It seems that people who complain about this are the people who want to upgrade without testing.

                                                                                                          Unfortunately, I’ve found that most people setup their package dependencies in whatever system to take X.*.*, so you automatically get updates between builds. That isn’t necessarily a fault of semver but it is what semver is selling.

                                                                                                          1. 2

                                                                                                            To clarify, when you say lots of people set things up this way, you’re not counting people who use lockfiles, correct?

                                                                                                            1. 2

                                                                                                              I guess not, since I've experienced this fairly often.

                                                                                                            2. 1

                                                                                                              Why is this unfortunate? Due to a lack of testing between builds, shoddy upgrade procedures, and builds that don't lock down versions when the tests pass?

                                                                                                              Kinda like what @4ad said, it's as if people expect this to solve world hunger when it should be regarded as canned food in your pantry.

                                                                                                              1. 0

                                                                                                                Assuming no locks, the biggest problem is that it means what you built and what I built are not guaranteed to be the same thing. So reproducible builds are out. It also means if you have a bug and I don’t, we don’t know why.

                                                                                                                IME, semantic versioning has not helped me upgrade dependencies. I have to test the new update no matter what, despite the SemVer spec using words like MUST when it defines what things mean. And people fuck up their SemVers enough that backwards-compatible changes end up not being backwards compatible. So are we better off with SemVer than with just some incrementing release number? I'm not convinced the complexity of SemVer really brings a lot of value beyond making us all feel like we elegantly solved a problem.

                                                                                                          1. 2

                                                                                                            Is this really technology-related, as opposed to simply a legal/fraud story?

                                                                                                            1. 2

                                                                                                              I thought it was sufficiently related since it was about blockchain and that whole sector. I wavered between submitting it and not submitting it and eventually decided to take a risk - if the community doesn’t like it (which it seems they don’t), that’s fine and I would be happy to see it deleted.

                                                                                                            1. 3

                                                                                                              I think it is very good that they are working on coming up with new solutions to avoid digital surveillance. But I remember reading that they already identify logged-out users using fingerprints like battery life. Using techniques like the aforementioned one, they could tie the two 'browsing contexts' together.

                                                                                                              Still, glad that they are fighting the good fight.

                                                                                                              1. 3

                                                                                                                This is very true. I would be interested to see how Panopticlick performs in a Firefox container tab. There’s a reason Tor Browser patches Firefox so heavily (but to be fair, that goodness is coming upstream!)

                                                                                                              1. 9

                                                                                                                This is for Windows 7, which was released nearly a decade ago. By contrast, Apple's Snow Leopard, released the same year, stopped being supported 4 years ago. Those who see me comment here know I'm not a Windows fan by any means, but some perspective here is important. If Microsoft didn't support these old operating systems they wouldn't have these problems.

                                                                                                                1. 2

                                                                                                                  Mmm… this doesn’t quite sound right. In principle I agree with you but this doesn’t seem like a great example because AFAICT this was a pretty simple bug, just a small mistake with some flags. Unless the argument was that the more times you have to make that change (i.e. the number of OS versions you have to make it to), the more likely you are to screw up? But in that case you still have the same probability of screwing up for any given release; they’re all independent from each other.

                                                                                                                  1. 2

                                                                                                                    People forget things, the people who worked on it may have moved on to other companies. Sometimes older codebases have radically different coding styles, expectations, and norms. Trivial bugs can become knockouts in this context because the code is huge, alien, and you might not be able to use the tools you’re used to. I’ll concede though that supporting old versions of software way longer than anyone should is part of Microsoft’s business model, and they probably should have better accounted for it.

                                                                                                                1. 3

                                                                                                                  Really wish this had example programs on the page or just something I could see without watching a video.

                                                                                                                    1. 1

                                                                                                                      Looks like a design bug for @andrewrk (or someone) to fix :-)

                                                                                                                      1. 2

                                                                                                                        What’s wrong with minimal (or no) design? It’s fully functional as is.

                                                                                                                        1. 3

                                                                                                                          Nothing’s wrong with it. All I meant to say was, if someone couldn’t find examples maybe it could be made clearer where to find them. By “design” I didn’t mean just the CSS.