1. 5

    With the upcoming deprecation of Tor Onion Services v2 it’s really time to find a successor for Ricochet. My first impression of Cwtch is that it does everything that Ricochet did and more (e.g. file sharing), and I like that I can finally copy-paste code without the indentation getting lost. Apart from that I don’t find the user interface super intuitive, but maybe it’s just a matter of getting used to it. Anyway, big respect for the team and what they have already accomplished.

    1. 1

      what do the onion services v2 have to do with that?

      1. 4

        Ricochet uses Onion Services v2, and since they won’t be supported starting tomorrow (and are insecure for other reasons) we need “a Ricochet based on onion services v3”. One of them is Cwtch, started by Sarah Jamie Lewis, who has also worked in the past with John Brooks of Ricochet (on a Go implementation of Ricochet). Another contender is https://www.ricochetrefresh.net.

        1. 1

          Ah, I see. I was unaware of the Go impl. as well as the link between Cwtch and Ricochet. Thanks for the info.

    1. 2

      Nice review!

      If you have recommendations for other good books on software design, please share it in the comments.

      Haven’t read it myself yet but I heard good things about The Practice of Programming.

        1. 2

          When will all their mirrors support https? Downloading something over http or even ftp does not feel like 2021.

          1. 12

            If they do this right (signed packages and so on), then https will only help with privacy. Which is important for sure, but leaking which packages you download is less horrible than leaking the contents of your conversations, or even just who you’ve been in contact with lately.

            1. -1

              HTTPS is more than just privacy. It also prevents JavaScript injection via ISPs, or any other MITM.

              1. 21

                It does that for web pages, not for packages. Packages are signed by the distro’s keys, so if anyone were to mess with your packages as you download them, your package manager would notice and prevent you from installing the package. The only real advantage to HTTPS for package distribution is that it helps conceal which packages you download (though even then, I suspect an attacker could get a pretty good idea just by seeing which server you’re downloading from and how many bytes you’re downloading).
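                For illustration, this is the kind of out-of-band check that signing gives you, independent of the transport. A rough sketch using the checksum and signature files Debian publishes alongside its install images (file names here just follow that convention):

                    # Integrity: does the downloaded image match the published checksum list?
                    sha512sum --check --ignore-missing SHA512SUMS
                    # Authenticity: is the checksum list itself signed by the expected key?
                    gpg --verify SHA512SUMS.sign SHA512SUMS

                apt performs the equivalent checks automatically against the archive’s signed Release metadata when it fetches package lists and packages.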

                1. 1

                  It does that for web pages, not for packages

                  Indeed, however ISOs, USB installers, etc. can still be downloaded from the web site.

                  Packages are signed by the distro’s keys, so if anyone were to mess with your packages as you download them, your package manager would notice and prevent you from installing the package.

                  Yes, I’m familiar with cryptographic signatures.

                  1. 9

                    Indeed, however ISOs, USB installers, etc. can still be downloaded from the web site.

                    Yes. The Debian website uses HTTPS, and it looks like the images are distributed using HTTPS too. I thought we were talking about distributing packages using HTTP vs HTTPS. If your only point is that the ISOs should be distributed over HTTPS then of course I agree, and the Debian project seems to as well.

                    1. 0

                      No, the point is that there is no need for HTTP when HTTPS is available. Regardless of traffic, all HTTP should redirect to HTTPS IMNSHO.

                      1. 16

                        But… why? Your argument for why HTTPS is better is that it prevents JavaScript injection and other forms of MITM. But MITM clearly isn’t a problem for package distribution. Is your argument that “HTTPS protects websites against MITM so packages should use HTTPS (even though HTTPS doesn’t do anything to protect packages from MITM)”?

                        I truly don’t understand what your reasoning is. Would you be happier if apt used a custom TCP-based transport protocol instead of HTTP?

                        1. 6

                          I suspect that a big reason is cost.

                          Debian mirrors will be serving an absurd amount of traffic, and will probably want to serve data as close to wire speed as possible (likely 10G). Adding a layer of TLS on top means you need to spend money on a powerful CPU or accelerator kit, instead of (mostly) shipping bytes from the disk to the network card.

                          Debian isn’t made of money, and sponsors won’t want to spend more than they absolutely have to.

                          1. 4

                            But MITM clearly isn’t a problem for package distribution.

                            It is though! Package managers still accept untrusted input data and usually do some parsing on it. apt has had vulnerabilities and pacman as well.

                            https://justi.cz/security/2019/01/22/apt-rce.html

                            https://xn--1xa.duncano.de/pacman-CVE-2019-18182-CVE-2019-18183.html

                            TLS would not prevent malicious mirrors in either of these cases, but it would prevent MITM attacks exploiting these issues.

                            1. 7

                              Adding a TLS implementation also brings bugs, including RCEs. And Debian uses GnuTLS for apt.

                              1. 1

                                Indeed. It was one of the reasons for OpenBSD to create signify, so I’m delighted to see Debian’s new approach might be based on it.

                                From https://www.openbsd.org/papers/bsdcan-signify.html:

                                … And if not CAs, then why use TLS? It takes more code for a TLS client just to negotiate hello than in all of signify.

                                The first most likely option we might consider is PGP or GPG. I hear other operating systems do so. The concerns I had using an existing tool were complexity, quality, and complexity.

                      2. 7

                        @sandro originally said: “When will all their mirrors support https?” Emphasis on “mirrors”. To the best of my knowledge, “mirror” in this context does not refer to a web site, or a copy thereof, but to a package repository.

                        I responded specifically in this context. I was not talking about web sites, which rely on the transport mechanism for all security. In the context I was responding to, each package is signed. Your talk of JavaScript injection and other MITM attacks is simply off topic.

                2. 9

                  ftp.XX.debian.org are CNAMEs to servers that have agreed to host a mirror. These servers are run by unrelated organisations, so it is not possible to provide a proper cert for them. This matches the level of trust: mirrors are trusted with neither the content nor your privacy. This is not the case for deb.debian.org, which is available over HTTPS if you want (ftp.debian.org is an alias for it).

                  1. 2

                    Offline mirrors, people without direct internet access, offline archives decades later, people in the future, local DVD sets.

                    Why “trust” silent media?

                    1. 10

                      For reference, this is one chapter out of the (free and excellent) book Operating Systems: Three Easy Pieces, by Remzi Arpaci-Dusseau and Andrea Arpaci-Dusseau. The full thing is available here: https://pages.cs.wisc.edu/~remzi/OSTEP/

                      1. 2

                        by Remzi Arpaci-Dusseau and Andrea Arpaci-Dusseau

                        Indeed, I’m becoming quite a fan of the work done by those two. From research on SSD internals to file systems and databases, and now I found this gem.

                        1. 3

                          From research on SSD internals to file systems and databases, and now I found this gem.

                          Newbie here, I am trying to learn about file systems and databases. Can you please link to those resources?

                          1. 1

                            Apart from the references in a paper itself, which point to previous work, I’m a big fan of the citations feature of papers found via researchgate.net. If you have one good paper, e.g. The Unwritten Contract of Solid State Drives, just scroll to the “Citations” section on that website. This way you can easily discover new research that builds on it. Also, I found the textbook Database System Concepts very complete; it also refers to research papers for further reading. I’ll keep posting links to stuff that I find interesting, currently in the area of file systems and databases.

                      1. 17

                        if you want to retrieve an array value with two offsets, one of which can be negative, in C you write arr[off1 + off2] while in Rust it would be arr[((off1 as isize) + off2) as usize].

                        In other words, Rust is making you consider signed vs. unsigned arithmetic and the possibility of overflow. Those are good things in my book. Ignoring them is simpler, but debugging the possible after effects is not, especially when array access is just syntactic sugar for pointer arithmetic. How many real-world vulnerabilities have stemmed from this sort of thing?
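                        To make the trade-off concrete, here is a minimal Rust sketch of that indexing pattern (names are made up). The point is that the signed/unsigned mix has to be spelled out, and a bad offset fails loudly at the bounds check instead of silently reading out of bounds:

                            fn get(arr: &[u8], base: usize, off: isize) -> u8 {
                                // The cast dance the quoted article complains about:
                                let idx = (base as isize + off) as usize;
                                // An out-of-range index panics here instead of reading arbitrary memory.
                                arr[idx]
                            }

                            fn main() {
                                let arr = [10u8, 20, 30, 40];
                                println!("{}", get(&arr, 3, -1)); // prints 30
                                // get(&arr, 0, -1) would panic rather than read before the array.
                            }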

                        Similarly memset() and memmove() are powerful tools.

                        They are chainsaws, the kind without finger-guards. And while it’s possible to make sculpture using chainsaws, it usually works better to use finer chisels.

                        IMHO if you value simplicity that highly, best to work in a safer higher level language like JavaScript or Python, where mistakes are either impossible or at least less catastrophic and easier to debug. Or if you want performance, use a language with more compile-time safeguards like C++ or Rust, even if some of those safeguards require you to be a bit more explicit about what you’re doing. Or look at Nim or Go, which are somewhere in between.

                        If I sound a bit judgey, it’s because so many software bugs and vulnerabilities come from this style of coding.

                        1. 2

                          I mostly agree with you, but now and then a chainsaw is appropriate :)

                          1. 3

                            That’s why Rust has unsafe.

                        1. 1

                          Unfortunately bad or unpredictable TRIM performance is still an issue today, as explained by this Facebook engineer:

                          Zhou works for Facebook, where discard is enabled, but the results are not great; there is a fair amount of latency observed as the flash translation layer (FTL) shuffles blocks around for wear leveling and/or garbage collection. Facebook runs a periodic fstrim to discard unused blocks; that is something that the Cassandra database recommends for its users. The company also has an internal delete scheduler that slowly deletes files, but the FTL can take an exorbitant amount of time when gigabytes of files are deleted; read and write performance can be affected. It is “kind of insane” that applications need to recognize that they can’t issue a bunch of discards at one time; he wondered if there is a better way forward.

                          https://lwn.net/Articles/787272/

                          1. 6

                            Any measurements on resource usage? My tab count in Firefox quite often exceeds 100, and with the current architecture the resident memory cost of the processes makes up only a small portion of total memory usage. With Chrome, high memory usage is one of the main problems I encountered, so I fear that with this model Firefox might go that way as well.

                            1. 3

                              I’ve also got about 100 tabs open, divided over 4 windows. Currently Firefox only has 13 processes running, of which 5 are named “FirefoxCP Isolated Web Content” on macOS, so it looks like it only creates processes for recently used tabs.

                              1. 2

                                IIUC there is a process per domain (i.e. not a process per tab, like in Chrome), which should cut down memory use somewhat. But yeah, I don’t know much about the specifics.

                                1. 3

                                  It’s a process per site.

                                  1. 2

                                    Because the OS can just swap out the idle ones?

                                1. 4

                                  Wow, pleasantly surprised that Project Fission can now be used outside of Firefox nightly. I have been waiting for years for some privilege separation in Firefox. Hopefully this will be a good start to get to the security level of Chrome as discussed here and here.

                                  1. 4

                                    The OpenBSD discussion is from 2018 and has long since become outdated. The other article was discussed (and partially debunked) on lobste.rs a few weeks ago at https://lobste.rs/s/eys36p/firefox_chromium :)

                                    (Obviously, you’re all allowed to perceive my opinion as heavily biased. I work on Firefox Security.)

                                    1. 2

                                      I purposely didn’t link to the discussion on lobste.rs because unfortunately that discussion didn’t focus on privilege separation. I think most points from both the OpenBSD discussion and the other one still stand for Firefox as long as Fission is not enabled (and it’s not enabled by default just yet). Although there is some separation of privileges in Firefox’s internal architecture, it never came close to the level in which Chrome separates privileges and uses this to protect one site from another by extensively using security features from the Operating System. I think once Fission is enabled by default, the groundwork is ready to get seriously started on hardening each individual process and getting on par with Chrome w.r.t. software security. Only then would I say the story can be “debunked”. ;-) Or to repeat Theo de Raadt’s words from 2018:

                                      It is my understanding that firefox says they are catching [up], but all I see is lipstick on a pig. It now has multiple processes. That does not mean it has a well-designed privsep model. Landry’s attempt to add pledge to firefox, shows that pretty much all processes need all pledges.

                                      1. 9

                                        Landry’s attempt to add pledge to firefox, shows that pretty much all processes need all pledges

                                        And my fully working patch to add Capsicum to Firefox shows that this is a problem with the pledge model, not with Firefox ;)

                                        never came close to the level in which Chrome separates privileges and uses this to protect one site from another by extensively using security features from the Operating System

                                        On the mainstream OSes, Firefox literally uses the same Chromium sandbox code to use these platform features, btw.

                                        1. 5

                                          Do you know how it handles setting up the IPC channels? Chromium made a spectacularly bad design choice here: service endpoint capabilities are random identifiers, so any sandboxed process that can guess the name of an endpoint can connect to it. This means that any information leak from a privileged process (including cache side channels from prime-and-probe attacks by the renderer process) has the potential to be a sandbox escape. Every other compartmentalised program that I’ve seen uses file descriptors / handles as channel endpoints and either sets them up at process creation time or has a broker that authorises them based on identity or other attestations.

                                          1. 2

                                            Firefox does currently use legacy Chromium IPC as a transport. Are you referring to this Windows channel ID thing? This mechanism is not used on POSIX; it’s all SCM_RIGHTS. That’s really the only usage of randomness I could find in ipc/chromium. Well, that and this macOS Mach port process launching thing.

                                          2. 3

                                            Very interesting! I hope to have some time at some point to look into it in more depth. :)

                                    1. 1

                                      Fear leads to anger. Anger leads to hate. Hate leads to algebraically manipulating infinite polynomials because you weren’t willing to simplify them.

                                      :-)

                                      1. 3

                                        Another thing I really like about ISO 8601 that isn’t mentioned in the article is that it supports time zone offsets. This way you can always convert a time to whatever local timezone you or your users are currently in. For example, 2021-02-25T19:19:55+01:00 has an offset of UTC+1 hour.
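                                        As a quick sketch of that conversion in Rust (assuming the chrono crate; everything else is just the timestamp from above):

                                            use chrono::{DateTime, FixedOffset, Utc};

                                            fn main() {
                                                // 19:19:55 at an offset of UTC+01:00.
                                                let local: DateTime<FixedOffset> =
                                                    DateTime::parse_from_rfc3339("2021-02-25T19:19:55+01:00").unwrap();
                                                // Convert to UTC (or any other zone your users happen to be in).
                                                println!("{}", local.with_timezone(&Utc)); // 2021-02-25 18:19:55 UTC
                                            }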

                                        1. 7

                                          There’s some value in supporting odd platforms, because it exercises the portability of programs, like the endianness issue mentioned in the post. I’m sad that the endian wars were won by the wrong endian.

                                          1. 5

                                            I’m way more happy about the fact that the endian wars are over. I agree it’s a little sad that it is LE that won, just because BE is easier to read when you see it in a hex dump.

                                            1. 4

                                              Big Endian is easy for us only because we ended up with the weird legacy of using Arabic (right-to-left) numbers in Latin (left-to-right) text. Arabic numbers in Arabic text are least-significant-digit first. There are some tasks in computing that are easier on little-endian values, and none that are easier on big-endian, so I’m very happy that LE won.

                                              If you want to know the low byte of a little-endian number, you read the first byte. If you want to know the top byte of a little-endian number, you need to know its width. The converse is true of a big-endian number, but if you want to know the top byte of any number and do anything useful with it then you generally do know its width because otherwise ‘top’ doesn’t mean anything meaningful.
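                                              A small Rust sketch of that property, purely illustrative:

                                                  fn main() {
                                                      let x: u32 = 0x1122_3344;
                                                      // Little endian: the first byte is always the low byte, whatever the width.
                                                      assert_eq!(x.to_le_bytes()[0], 0x44);
                                                      // Big endian: the first byte is the top byte, which only means something
                                                      // once you know the value is 4 bytes wide.
                                                      assert_eq!(x.to_be_bytes()[0], 0x11);
                                                      println!("le = {:02x?}, be = {:02x?}", x.to_le_bytes(), x.to_be_bytes());
                                                  }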

                                              1. 2

                                                Likewise, there are some fun bugs only big endian can expose, like accessing a field with the wrong size. On little endian it’s likely to work with small values, but BE would always break.
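                                                A tiny sketch of that bug class (using safe byte conversions rather than an actual mis-cast, just to show the values involved):

                                                    fn main() {
                                                        let field: u32 = 7;
                                                        // Buggy code reads the 4-byte field through a 1-byte view.
                                                        let native = field.to_ne_bytes()[0]; // what the bug sees on this machine
                                                        let be = field.to_be_bytes()[0];     // what it would see on big endian
                                                        // On little endian this prints "7 0": the bug appears to work for small
                                                        // values. On big endian the first byte is 0, so it breaks immediately.
                                                        println!("{} {}", native, be);
                                                    }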

                                            2. 2

                                              Apart from “network byte order” looking more intuitive to me at first sight, could you elaborate on why big endian is better than little endian? I’m genuinely curious (and hope this won’t escalate ;)).

                                              1. 10

                                                My favorite property of big-endian is that lexicographically sorting encoded integers preserves the ordering of the numbers itself. This can be useful in binary formats. Since you have to use big-endian to get this property, a big-endian system doesn’t need to do byte swapping before using the bytes as an integer.

                                                Also, given that we write numbers with the most significant digits first, it just makes more “sense” to me personally.
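                                                A quick Rust sketch of that sorting property (fixed-width keys, purely illustrative):

                                                    fn main() {
                                                        let numbers: [u32; 4] = [3, 256, 7, 65536];

                                                        // Encode each number as fixed-width big-endian and little-endian keys.
                                                        let mut be: Vec<[u8; 4]> = numbers.iter().map(|n| n.to_be_bytes()).collect();
                                                        let mut le: Vec<[u8; 4]> = numbers.iter().map(|n| n.to_le_bytes()).collect();

                                                        // Plain byte-wise (lexicographic) sort of the encoded keys.
                                                        be.sort();
                                                        le.sort();

                                                        let be_sorted: Vec<u32> = be.iter().map(|b| u32::from_be_bytes(*b)).collect();
                                                        let le_sorted: Vec<u32> = le.iter().map(|b| u32::from_le_bytes(*b)).collect();
                                                        println!("{:?}", be_sorted); // [3, 7, 256, 65536]  numeric order preserved
                                                        println!("{:?}", le_sorted); // [65536, 256, 3, 7]  order lost
                                                    }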

                                                1. 5

                                                  Also, given that we write numbers with the most significant digits first, it just makes more “sense” to me personally.

                                                  A random fact I love: Arabic text is right-to-left, but writes its numbers with the same ordering of digits as Latin texts… so in Arabic, numbers are little-endian.

                                                  1. 3

                                                    Speaking of endianness: in Arabic, relationships are described from the end farthest from you to the closest. If you were to naively describe the husband of a second cousin, instead of saying “my mother’s cousin’s daughter’s husband” you would say “the husband of the daughter of the cousin of my mother”, and it makes it insanely hard to hold in your head without a massive working memory, because you need to reverse it to actually grok the relationship. I always wonder whether that’s because I’m not the most fluent Arabic speaker or whether it’s a problem for everyone who speaks it.

                                                    1. 2

                                                      My guess is that it is harder for native speakers as well, but they don’t notice it because they are used to it. A comparable case I can think of is a friend of mine, a native German speaker, who came to the States for a post-doc. He commented that after speaking English consistently for a while, he realized that German two-digit numbers are needlessly complicated. Three-and-twenty is harder to keep in your head than twenty-three, for the same reason.

                                                      1. 2

                                                        German has nothing on Danish.

                                                        95 is “fem og halvfems” – “five and half-fives”, where “halvfems” is short for “halvfemsindstyve”: half-fifth (i.e. 4½) times twenty, giving 90.

                                                        It’s logical once you get the hang of it…

                                                        In Swedish it’s “nittiofem”.

                                                2. 4

                                                  I wondered this often and figured everyone just did the wrong thing, because BE seems obviously superior. Just today I’ve been reading RISC-V: An Overview of the Instruction Set Architecture and noted this comment on endianness:

                                                  Notice that with a little endian architecture, the first byte in memory always goes into the same bits in the register, regardless of whether the instruction is moving a byte, halfword, or word. This can result in a simplification of the circuitry.

                                                  It’s the first time I’ve noticed something positive about LE!

                                                  1. 1

                                                    From what I hear, it mostly impacted smaller/older devices with small buses. The impact isn’t as big nowadays.

                                                  2. 3

                                                    Little-endian vs. big-endian has a good summary of the trade-offs.

                                                    1. 2

                                                      That was a bit tongue-in-cheek, so I don’t really want to restart the debate :)

                                                      1. 2

                                                        Whichever endianness you prefer, it is the wrong one. ;-)

                                                        Jokes aside, my understanding is that either endianness makes certain types of circuits/components/wire protocols easier and others harder. It’s just a matter of optimizing for the use case the speaker cares about more.

                                                      2. 1

                                                        Having debugged on big-endian for the longest time, I miss “sane” memory dumps on little-endian. It takes a bit more thought to parse them.

                                                        But I started programming on the 6502, and little-endian clearly makes sense when you’re cascading operations 8 bits at a time. I had a little trouble transitioning to the big-endian 16-bit 9900 as a result.

                                                      1. 17

                                                        Snow Leopard was the only release of OS X / macOS that was monotonically better than the previous one. Every other release has had some improvements and some regressions. In the early versions the improvements massively outnumbered the regressions. For the last few versions, they’ve been around the break-even point.

                                                          The Safari review misses the big one: The address and search bar are distinct. This change was pushed by Google in Chrome because they wanted everything to go to Google but it’s a terrible choice. It was trivial to search for something in old Safari by doing command-t, tab. Now I’m in the search bar. Since they were merged (in Safari and in other browsers), I’ve wasted a load of time where I’ve typed a thing into the unified bar that looks like it might be an address and had the browser try to go there. Sometimes it even is a valid address, so now I have to get into the habit of typing a space at the end of my search so that the browser knows I mean a search term (because URLs don’t normally end with a space). I have to type the same number of keystrokes (space at the end vs tab at the start).

                                                        iCal in Snow Leopard let you mark things as ‘tentative’ or ‘confirmed’. In the following version, this was gone from the UI and that property could be set only via email. Any events that had the ‘tentative’ flag set had the state lost in the upgrade. In spite of this resulting in data loss for customers, the bug was closed as ‘works as expected’ in Radar.

                                                        My pet peeve in the newer UI is that the shadow difference between the active window and the others is far less visually distinctive. On Lion I suddenly started typing into the wrong window, something I never did on earlier versions of OS X, because the subtle visual distinction between active and background windows crossed a threshold to being too subtle.

                                                        1. 10

                                                          The address and search bar are distinct.

                                                          Strong disagree here. Firefox held on to the separate search bar for what felt like ages and I hated it. I love the “omni” bar because it just does the right thing 99.9% of the time, at least for my use cases. I can’t remember the last time I searched for something that could be mistaken for a URL (although, in fairness, it has definitely happened before). I can press Cmd-T to get a new tab and immediately start typing, regardless of whether I’m doing a search or typing a URL. As a bonus, my history comes along for the ride as a kind of quasi search. If I’m done with the current tab, I don’t bother reusing it, I just do Cmd-W, Cmd-T in one smooth motion.

                                                          1. 3

                                                            I love the “omni” bar because it just does the right thing 99.9% of the time, at least for my use cases.

                                                            On the other hand, the separate search and address bar does the right thing 100% of the time, for everyone’s use cases :-).

                                                              I don’t know if Safari has this but now that I think about it, I’m going to check it out – I’m trying to get used to Safari because I got one of them M1 MBPs and Firefox eats battery the way I’d eat hamburgers after having nothing but salad for a week, and the omnibar is driving me nuts. There is one browser that got the omnibar right; in fact it got pretty much everything right: Opera.

                                                              Opera had this neat feature where you could just type “g ” and it would Google for it instead of trying to figure out on its own if what you typed is an address or a search term. I’m not sure if Safari supports this but it would be cool if it did.

                                                              (Edit: FWIW, Vivaldi had this the last time I tried it)

                                                            1. 4

                                                              On the other hand, the separate search and address bar does the right thing 100% of the time, for everyone’s use cases :-).

                                                              This only worked for me about half the time, though. Sometimes I want to search, sometimes I want to type a URL, and sometimes I want to fuzzy search through my history (this is probably the most common case). I couldn’t do this with separate address and search bars. To be clear, I’m not saying my workflow is somehow “correct”. I just see a lot of hate aimed at the omni bar and wanted to point out another perspective.

                                                              1. 2

                                                                  Opera had this neat feature where you could just type “g ” and it would Google for it instead of trying to figure out on its own if what you typed is an address or a search term.

                                                                I can imagine this is perfect for tech-savvy people, but too intricate for almost anyone else.

                                                                1. 2

                                                                  When auto-detecting something is involved, the toughest point with people who aren’t tech-savvy is the recovery action – i.e. what happens when auto-detection fails – so that’s the most useful reference point for a comparison. For people who find it too intricate to type G for Google, figuring out that you need to put http:// before an intranet link that was incorrectly detected as a search term, for example, is going to be hopeless.

                                                                  Also, I really think we should upgrade our image of people who aren’t tech-savvy. 1998 was 23 years ago. Even people who aren’t tech-savvy have heard of Google, even if they’ve only heard of it from watching news on TV or seeing the commercials. They know what Google is.

                                                                  Otherwise we’ll keep building UIs that people would have found friendly 25 years ago and everyone hates today – including its historical target audience.

                                                                2. 2

                                                                  Opera had this neat feature where you could just type “g ” and it would Google for it instead of trying to figure out on its own if what you typed is an address or a search term. I’m not sure if Safari supports this but it would be cool if it did.

                                                                  I use DuckDuckGo’s bangs with the unified bar; it’s even better than that.

                                                                3. 2

                                                                  I can’t remember the last time I searched for something that could be mistaken for a URL

                                                                  Ditto, but I have often ended up in fights with the browser when it would intentionally misinterpret an address as a search string. Some fake “domains” like foo.dev just don’t seem to register as a domain. So you get a search results page, then you have to go back and manually type the http:// in front (and even then it still sometimes starts off searching).

                                                                  You might say “well then don’t use fake domains”, but for example my modem uses fritz.box as its hostname, and I’ve even had times where localhost:1234 (without the URI scheme prefix) would trigger a search. This really feels like the browser is actively being hostile to what I’m trying to do.

                                                                  I really don’t mind pressing the extra keystroke of C-k to go to the search box after pressing C-t for opening a new tab if that prevents this nonsense.

                                                                  1. 2

                                                                    I can’t remember the last time I searched for something that could be mistaken for a URL

                                                                    This got a lot worse for me recently, because the address bar autocompletes with URLs that I’ve previously visited and so will often find things in my history if I have a short search term and will jump directly to those, rather than searching.

                                                                    1. 1

                                                                      For me it happens most often with bare hostnames on my local network. If I type “foo” into the address bar, I want it to do a DNS lookup, not a web search.

                                                                  2. 5

                                                                    Snow Leopard was the only release of OS X / macOS that was monotonically better than the previous one.

                                                              Tiger and Leopard were rough. They added new features and brought Intel support to the public, but they broke things.

                                                              Snow Leopard was a very good operating system, and I’d still be using it if that were reasonable, but part of why it was so loved is just how broken things had gotten over the previous two releases. And Lion picked up that mantle again. (I’d say Leopard and Lion were neck-and-neck for my least favorite modern Mac OS releases, up until Catalina resolved the “what’s the worst Mac OS ever” question resoundingly, only to be blown out of the field by Big Sur.)

                                                                    So in my opinion, it stands out as the only release of OS X and macOS that was monotonically better than both the previous one and the subsequent one.

                                                                    I’m sure that’s not the only reason, but it certainly contributes to my love of Snow Leopard.

                                                                    1. 3

                                                                I keep hearing how terribly broken various releases of macOS are, but I haven’t been experiencing breakage myself, and I’ve not seen any details as to what’s actually breaking for people.

                                                                      Do you know if there’s a collection of problems listed somewhere? I’d like to see if I’ve been affected by any of them.

                                                                      1. 2

                                                                  Tiger had the most annoying bug I’ve ever seen. If you had FileVault (your home directory was an encrypted disk image) enabled in Panther, then everything worked fine after the upgrade until you rebooted. The first time you rebooted Tiger, your home directory became unmountable and unrecoverable in Tiger. If you managed to get it to a machine running Panther, you could extract the files. The fact that Apple QA didn’t involve upgrading to 10.4 from 10.3 with a security feature that they’d been shouting a lot about, and then rebooting, told me a lot about Apple ‘quality’ at the time. 10.6 was properly tested before release.

                                                                    2. 2

                                                                      Snow Leopard was the only release of OS X / macOS that was monotonically better than the previous one.

                                                                      At least to me, it was very incremental over Leopard but dropped PPC support. If you’re on Intel that’s a good thing because it’s so much smaller; if you’re on PPC it kind of sucked. Without this, Snow Leopard could have been a series of 10.5.x releases, because the other changes were very small.

                                                                      The Safari review misses the big one: The address and search bar are distinct. This change was pushed by Google in Chrome because they wanted everything to go to Google but it’s a terrible choice.

                                                                      I can’t agree enough with this. Chrome got a lot right but it’s sad to see the cargo-culting over this point. It was a step backwards for usability driven by Google’s business interests, which other browser makers didn’t have but they were too busy copying Chrome to pause and think. IE9 did it too, and that kept me on IE8. I still use Firefox mainly because it allows me to configure these to be separate again. And IE11 quietly added an option to split them again in some obscure update years after it shipped.

                                                                      1. 3

                                                                        At least to me, it was very incremental over Leopard but dropped PPC support.

                                                                        That’s a strong point. Leopard was good on PPC. I happily used my 12” PowerBook (with Leopard on it) for a solid 9 years after Snow Leopard shipped. The experience was very different for Leopard on Intel, AFAIR.

                                                                      2. 2

                                                                        Snow Leopard was the only release of OS X / macOS that was monotonically better than the previous one. Every other release has had some improvements and some regressions.

                                                                        What about the loss of creator codes?

                                                                      1. 5

                                                            Good advice, although I think it can be done more simply. I would keep the SPF record just as they suggest, “v=spf1 -all”, but the DMARC record I have set on all my non-mailing domains is “v=DMARC1; p=reject”. If you want to monitor whether the domain is being used as the sender domain by spammers, add a rua tag, e.g. “v=DMARC1; p=reject; rua=mailto:postmaster+dmarc@yourdomain.com”.

                                                                        All the other options they suggest are optional or defaults. Furthermore, this avoids abuse of DKIM by any receiver that checks DMARC.
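                                                            For reference, this is roughly how those records look as DNS TXT entries in zone-file form (yourdomain.com being the placeholder from the comment above; the DMARC record lives at the _dmarc label):

                                                                yourdomain.com.          IN TXT "v=spf1 -all"
                                                                _dmarc.yourdomain.com.   IN TXT "v=DMARC1; p=reject; rua=mailto:postmaster+dmarc@yourdomain.com"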

                                                                        1. 1

                                                                          rua=mailto:postmaster+dmarc@yourdomain.com

                                                              This is quite a late reply, so please don’t mind my question, as I am quite curious about this part. If we don’t intend to receive email on this mydomain.com, how would we be able to monitor an email to postmaster+dmarc@yourdomain.com?

                                                                          1. 2

                                                                            If we don’t intend to receive email on this mydomain.com

                                                                DMARC, SPF and DKIM are all services for the person you are sending mail to, i.e. your outbound flow. It doesn’t mean you won’t accept incoming mail. I always set up hostmaster@ and postmaster@ aliases on my mailserver for all domains that I’m running DNS for, including the ones with an SPF -all and DMARC reject policy.

                                                                            1. 1

                                                                              Thank you for the clarification.

                                                                              hostmaster@ and postmaster@ aliases on my mailserver

                                                                  By this, do you mean setting up email aliases in your Postfix configuration? So that I don’t have to explicitly set up any mailbox for the reporting, but instead just depend on Postfix to deliver the reports to me.

                                                                              1. 2

                                                                                Exactly! See RFC 2142 for some recommended mailboxes.
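                                                                    As a hypothetical sketch of what that can look like with Postfix virtual aliases (the map file path and target address are made up; a plain /etc/aliases entry works too for purely local domains):

                                                                        # /etc/postfix/virtual  (referenced from virtual_alias_maps; run `postmap` after editing)
                                                                        postmaster@yourdomain.com   you@your-real-mailbox.example
                                                                        hostmaster@yourdomain.com   you@your-real-mailbox.example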

                                                                                1. 2

                                                                                  That is so awesome that this is possible. Thanks for the tip!

                                                                        1. 3

                                                                          Nice job! really cool :)

                                                                          1. 1

                                                                I believe that all of the technical problems will get solved within the next 10-20 years or so. The difficult problems stem from society, and those are hard to solve because people don’t like to change their habits. Often it’s easier to have completely new people with new habits than to re-educate existing people out of their current ones. Therefore, I would not make elections digital until an average lifetime has passed after a fully working implementation of digital voting. Or, so to speak, only over my dead body.

                                                                            1. 4

                                                                  Often it’s easier to have completely new people with new habits than to re-educate existing people out of their current ones.

                                                                  In the (Brazilian) Navy, it is called “the ghost of the ship” (“espírito do navio”, if you are curious what it’s called in Portuguese).

                                                                              The idea is that there is a ghost in the ship that settles people into certain ways of doing things.

                                                                              The real explanation is the following:

                                                                  • a new ship is inaugurated and a whole new crew (let us assume 100 people) takes over the ship;
                                                                  • after some time, some people from the original crew have to leave it for one reason or another (e.g., retirement);
                                                                  • let us say there is a 10% rotation;
                                                                  • the other 90% pass their habits on to the 10% that just arrived.

                                                                              Since I heard this story, I got fascinated by this idea.

                                                                  It is not related to e-voting, but I can vouch for the sentence of yours I quoted.

                                                                              1. 2

                                                                    One of the hard problems of a national election is how to bootstrap a democracy while being able to have little to no trust in the sitting government that has to organize it. One way to counter the threat of a coordinated attack that would result in a distribution of power that does not reflect the will of the people is to empower each member of the electorate with the ability to verify independently that all ballots in a polling station are cast and counted correctly and fairly, meaning:

                                                                                • each ballot box in the polling station is empty before the election starts
                                                                                • they’re all sealed during the day when votes are being cast
                                                                    • each ballot is cast by a single individual who has put only one ballot in the box
                                                                                • no other ballots are stuffed in the box
                                                                                • once the ballot box is opened at night, all ballots are tallied correctly to the right party and exactly once

                                                                                As soon as you introduce microchips into this process it becomes opaque to most if not all of the electorate. This is not solved by introducing more complex technology.

                                                                                A unique requirement for a general election is voter privacy in order to avoid coercion. In the process outlined above this is taken care of by letting each individual be able to verify their vote is counted correctly and exactly once, without being able to prove to anyone what they have voted for.

                                                                                1. 1

                                                                                  As soon as you introduce microchips into this process it becomes opaque to most if not all of the electorate. This is not solved by introducing more complex technology.

                                                                      This is exactly what I mean when talking about societal problems. This is solvable. But it is also very hard to solve, since as of right now maybe only 1% of the population could actually look into how it works. With education and a real digital revolution, I believe that number could go up to 100%. But that is very hard to achieve. It will take entirely new people. I believe that neither you nor I will see a verifiable, fair election. But I hope our great-grandsons do.

                                                                                  1. 1

                                                                        But it is also very hard to solve, since as of right now maybe only 1% of the population could actually look into how it works. With education and a real digital revolution, I believe that number could go up to 100%.

                                                                                    Then the question becomes, this microchip that you’re inspecting, is it the one used in the actual election and has it not been tampered with before, during or after the election? It’s the complete operation from casting to tallying that you have to secure and that people have to legitimately trust.

                                                                        I believe that neither you nor I will see a verifiable, fair election.

                                                                        Here in the Netherlands we have a decentralized process of paper ballots that are readable by humans, and a manual public tallying process. This way everybody can verify storage and counting for themselves in their polling station. I’m very happy to say that I can trust what I’ve seen in previous elections, and have faith these were fair because of the small chunk I could completely and independently verify myself, plus the fact there was no news or rumors about significant trouble in any of the other polling stations that had the same public, transparent procedure.

                                                                                    1. 3

                                                                          I believe that neither you nor I will see a verifiable, fair election.

                                                                          Sorry, I meant to add a “digital” qualifier before “election”. I do believe that some of the regular elections are fair, and some of those are verifiable.

                                                                                      Then the question becomes, this microchip that you’re inspecting, is it the one used in the actual election and has it not been tampered with before, during or after the election?

                                                                          You should not trust “the microchip”. You should trust the algorithm, and the ability to change which chip it runs on. Trustworthy hardware is very hard, so if you can find ways not to trust it, you should do that. There are of course ways to verify the hardware you are running on, even ones that should be fairly accessible for everyone, but this requires many sacrifices. I’d recommend watching this talk on the topic.

                                                                              1. 2

                                                                    The problem with QUIC is that you’ll have a hard time (as with HTTP/2) getting it running between an application and your reverse proxy. So it’s user <HTTP/3> server <HTTP/1.1> application for 90% of what people tend to run?

                                                                    Got a Nextcloud behind a global nginx reverse proxy? Great, now everything in front of the nginx is streamed and QUIC, but not the hop between nginx and Nextcloud. Or where are your localhost certificates coming from?

                                                                                1. 6

                                                                      The reason NGINX gives for not implementing HTTP/2 for upstream (proxied) traffic is that it wouldn’t improve performance on the low-latency networks that are normally used in this type of setup. Not sure if this would change for HTTP/3 though.
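                                                                      For what it’s worth, a rough sketch of the setup being discussed, with HTTP/2 terminated at the edge and a plain HTTP/1.1 hop to the application (hostnames, ports and certificate paths are placeholders):

                                                                          server {
                                                                              listen 443 ssl http2;                 # modern protocol only on the client-facing side
                                                                              server_name cloud.example.org;
                                                                              ssl_certificate     /etc/ssl/cloud.example.org.pem;
                                                                              ssl_certificate_key /etc/ssl/cloud.example.org.key;

                                                                              location / {
                                                                                  proxy_http_version 1.1;           # upstream hop stays HTTP/1.1
                                                                                  proxy_pass http://127.0.0.1:8080; # e.g. the Nextcloud backend
                                                                              }
                                                                          }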

                                                                                  1. 1

                                                                        Ah thanks, didn’t know this was the case. I thought streaming / avoiding TCP would give you the same improvements locally.

                                                                                  2. 5

                                                                                    HTTP/1 turned out to be good enough for upstream communication. You lose only two things: H/2 push and detailed multiplexing/prioritization.

                                                                                    However, H/2 push seems to be dead. Browser implementations are too weird and fragile to be useful. Experiments in automating H/2 push at CDNs turned out to give “meh” results (lots of conditions must be just right for a perf benefit, but a less-than-ideal guess makes it a net negative).

                                                                        Prioritization and multiplexing can be done at the edge. The H/2 server can decide by itself how to prioritize and mix upstream H/1 connections, and this can be done well enough with heuristics.

                                                                                    So I expect this trend to continue. You can use H/1 server-side for simplicity, and H/3 for the last mile to get through slowest links.

                                                                                    1. 3

                                                                                      I tried to deploy http/2 push in a way that improves performance and it’s just hard, even though this was within the context of a server that understood and optimized HTML. Here’s how it typically goes:

                                                                                      I want to push js/css to the browser. But what file? Maybe I can configure my server to push /app.js, but my build system uses cache busters so now I need to integrate some sort of asset db into my web server config. What if the homepage, search, and product team all have different incompatible systems? Assuming that problem is solved, what happens if app-$SHA.js is already in the browser cache?

                                                                          For certain websites you start looking at heuristics. Like if you cookie a user and a request comes in for a page without a cookie, you can probably assume it’s their first visit and that they have a cold cache. But without some sort of asset db for your versioned assets, you have to examine the response to see which assets are referenced. Now you might have to add a layer of gzip/brotli decoding and buffering.

                                                                                      It’s hard.

                                                                                      1. 3

                                                                            Indeed. There has been a proposal for a “cache digest” that the browser would send to signal what it has in its cache: https://calendar.perfplanet.com/2016/cache-digests-http2-server-push/

                                                                            but it doesn’t seem to be going anywhere. It’s more complexity, potentially yet another tracking vector, and it’s still only a 1-RTT win in the best case.

                                                                                      2. 2

                                                                                        Interesting to hear push is dead. Kinda like that tbh.

                                                                                      3. 4

                                                                          HTTP/3 is really catered towards client-facing edge servers, especially ones talking to mobile clients. There might be a future where it makes sense for traffic within a datacenter or between services, but I’m skeptical. In any case, that will probably be a while because more work needs to be done to bring QUIC’s server-side CPU footprint down before you’d want to try shoving it everywhere.

                                                                          Generally I think HTTP/2 is the right choice for that sort of reverse-proxy-to-server communication. You can use it without multiplexing and basically get HTTP/1 with compressed headers.

                                                                                        1. 2

                                                                            Not familiar with Nextcloud - would nginx connect to it over the public internet? Because on reliable internal networks HTTP/2 and HTTP/3 give diminishing returns (packet loss is much less of an issue).

                                                                                        1. 4

                                                                                          If you’re concerned with POSIX compliance, use dash. It’s (unverified) the most compliant.

                                                                          If you’re concerned with getting things done on most (not embedded, not boot or recovery) systems you encounter, just use bash explicitly. I recommend using shellcheck too.
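                                                                          For example (a trivial sketch; the file name is made up):

                                                                              #!/usr/bin/env bash
                                                                              # Ask for bash explicitly instead of relying on whatever /bin/sh is,
                                                                              # then lint the script with: shellcheck deploy.sh
                                                                              set -euo pipefail
                                                                              echo "deploying..."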

                                                                                          As far as that ticket? I know that individual and I’ll leave it at that.

                                                                                          1. 2

                                                                                            If you’re concerned with POSIX compliance, use dash. It’s (unverified) the most compliant.

                                                                                            Mmm, I’m just reading: dash is actually the most “divergent” shell I’ve tested, because of stuff the spec leaves open for implementors.

                                                                                            /edit punctuation

                                                                                            1. 1

                                                                              Surprised that there is undefined behavior in a shell? What matters is whether your particular script fails or behaves oddly where you need it to run. Otherwise this is a bit of navel-gazing.

                                                                                              My opinion is that POSIX compliance isn’t what we believe/hope it is. What I usually want is portability and predictability for the systems I interact with. On Linux that’s bash in Bourne sh mode.