1. 6

    With their recent announcement of pfSense Plus I guess the claims of this article are more true than ever.

    I see where Netgate is coming from: they need something to guarantee themselves revenue, but sleazy tactics like this (do read that linked announcement for some amazing marketing spin) leave a very bad taste in my mouth.

    A bit of a shame, as I’ve been a very happy pfSense user for years now.

    1. 2

      Netgate’s douchebag moves on OPNSense made me stop even considering using their products.

      But then I started reading a bit about their TNSR (closed source except for the open source components they are using) and was a bit tempted due to reasons.

    1. 4

      That is kind of interesting, but not in the way I expected. Like the author, I can only speculate, but I wouldn’t be surprised if it wasn’t so much that Steam felt TLS wasn’t secure enough, but rather that someone thought it would be a fun project to harden login even further, and because it’s Steam, why not?

      1. 9

        It protects the password from network-level SSL interception (passive only; an active MitM could provide a fake public key or serve a compromised RSA library) of the kind employed by AV tools and ad frameworks (like that Lenovo Superfish cert a few years back).

        I have a feeling that Steam’s target audience lives in that dangerous group of people who are capable of installing whatever crap they want, who are not versed enough in security to know better, and who have a very valuable asset (the Steam account with all their games).

        Any bit of additional protection helps at that point
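
        To illustrate the general shape of such a scheme, here is a rough sketch with the openssl CLI (the key file and password are placeholders; this is not Steam’s actual login flow, just the same idea):

        # the service publishes an RSA public key; the client encrypts the password
        # with it and only ever sends the ciphertext over the (possibly intercepted)
        # TLS connection
        $ printf '%s' 'correct horse battery staple' > password.txt
        $ openssl pkeyutl -encrypt -pubin -inkey login_pubkey.pem -in password.txt -out password.enc
        $ base64 password.enc   # this blob is what gets POSTed to the login endpoint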

        1. 1

          I’m not quite grasping the threat model you’re trying to show.

          Passive SSL MITM isn’t really a thing – someone has to re-encrypt with a new cert: either your network administrator, because all your traffic goes through a MITM proxy, or software on your box. But for that to work, your machine already has to trust a non-standard CA root, which means your adversary has already done something arbitrary on your box.

          So in that world what does an extra layer of RSA get you?

          1. 1

            I imagine the point pilif was trying to make is that it’s not outlandish to consider users who’ve unwittingly allowed a malicious CA root and new cert on their machine. So, security in layers. That said, anyone who has done so is effectively vulnerable to mitm attacks on every website they visit and probably reuses their password, so their steam account might as well be considered compromised.

          2. 1

            That’s kind of a funny threat model, but why does Steam not just do CA or cert pinning like a lot of mobile apps do nowadays?

            1. 2

              They do. But they also offer a website where people can log in. And they offer OAuth “login with steam” services, all of which happen in browsers, which don’t support cert pinning.
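
              For anyone unfamiliar with what pinning buys you: outside a browser, a client can pin the server’s public key itself, e.g. with curl (the hash and URL here are placeholders):

              # fails unless the server's certificate public key hashes to the pinned value
              $ curl --pinnedpubkey 'sha256//AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=' https://example.com/login

              A plain web page gets no such control over the TLS layer, which is why any extra protection has to live in the page itself.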

              1. 1

                Ah right, the browser. I wonder if they’d have done this if Chrome had been the market leader back then.

        1. 3

          Would be interesting to see performance compared while taking into account temperatures. Both Intel and AMD chips are heavily dependent on temperature for reaching and maintaining boost clocks.

          This article compares the M1 against previous-generation MacBooks, which optimise for compactness and quietness over performance. An Intel chip in a properly cooled system would perform better. They also include Ryzen desktop CPUs in the mix, which must have been cooled with conventional desktop parts.

          These numbers are useful for people choosing between MacBooks but in terms of actual performance they could be misleading.

          1. 3

            As far as I can tell, the article does not discuss the most important component here: the secondary storage itself. Some active benchmarking is needed here before any conclusions can be drawn.

            The new M1 apparently has an SSD that is nearly twice as fast as a previous gen mac, but its numbers are typical for an NVMe drive. I rather suspect that normalizing for storage performance would render the M1 CPU itself uninteresting.

            1. 2

              The new M1 apparently has an SSD that is nearly twice as fast as a previous gen Mac

              only on the MacBook Air, which previously had an SSD that was 2x slower than what all other Macs had

              1. 1

                Fair enough. Point stands though.

          1. 2

            How does Plex do their IP-address-based certs? Is it just totally insecure over the internet?

            1. 21

              Personally, I think this is a good thing, though of course it’s so easily circumventable by apps.

              Whenever I’m in the position where I’m in an app using sign-in-with-some-other-service and it’s showing me an internal web view, I’m usually very anxious about who else is getting access to my password: The makers of the app? The analytics SDK of the makers of the App? The advertising SDK of the makers of the app?

              By forcing the sign-in to work through an external browser, I can be sure (to some extent) that the password will go only to the site itself (barring malware or bad extensions, of course)

              1. 3

                Microsoft has tried it and failed. And I bet chrome in gmail will not be affected.

                1. 6

                  And I bet chrome in gmail will not be affected.

                  Oh - I’m sure it won’t. But google can trust their own gmail app.

                  Or rather: If you don’t want to give your gmail password to the gmail app because you don’t trust your gmail app with your gmail password, then I don’t see how you could still be able to access gmail through the app.

                  Or: If google wants access to your password, it can just take the password you submit through the embedded web view of their own app. It doesn’t need to exfiltrate it using JS injected into their own web view by their own app.

                  1. 1

                    I was thinking about logging into other google services using the chrome embedded in the gmail app. But that also makes little sense. Silly me!

              1. 66

                Soon we’ll need Google-approved browsers to access Google’s version of the web. Well played.

                1. 34

                  You already do! Try visiting a site using recaptcha from a non-approved browser sometime…

                  1. 14

                      One can only hope that the US DOJ’s lawsuit against Google will result in real penalties/changes for this, but I’m not holding out too much hope.

                    1. 13

                      The Chromium-based Edge seems immune to these. I get far fewer requirements to do some mechanical Turk work to classify images for Google’s self-driving cars on Edge on Windows than I do on Safari on Mac, Firefox anywhere, or pre-Chromium Edge on Windows. As @craftyguy says, this is exactly the kind of thing that I hope the DOJ will look at: Subtly making large amounts of the web worse for users unless they use a Google browser seems like the kind of thing that antitrust laws are supposed to prevent.

                      1. 2

                        Could the browser spoof the user agent to be something Google likes?

                        1. 12

                          yes totally, but of course this will end up in the usual cat and mouse game where Google will detect the circumvention technique and the circumventers will circumvent the circumvention technique circumvention.

                          1. 5

                            Isn’t Google trying to kill the user agent?

                            1. 5

                              Yes. It’s one of the many great ironies of Google - the Chrome team (along with other major browsers) is trying to kill the User Agent, but the rest of Google UA-sniffs. Exactly the kind of behavior they’re trying to kill the User Agent over.

                              1. 2

                                that and the address bar too

                          2. 6

                            Tried to log into my Google account from an OpenBSD laptop in Chrome today. Right creds, just got redirected to “we couldn’t verify that belongs to you”.

                          1. 10

                            code review often results in me having to amend or squash my commit(s).

                            Why? What is wrong with fixing whatever needs to be fixed in new commits?

                            Sure, amend, squash, modify before you push, but after that, don’t, and you avoid a whole class of problems.

                            You might argue that the history will look “messy”, yes, perhaps, but it also reflects what actually happened, which can be a good thing.

                            1. 19

                              git history should tell a story, i don’t want to see your typos, unless it’s in the main branch, then it’s written in stone

                              1. 3

                                I don’t see why. VC history can be an almost arbitrary mess!

                                The thing which really matters is that you get your job done.

                                As long as you have a decent way to

                                1. find semantically connected commits (e.g. look at the merge of a PR, or a ticket ID in the commit messages) and
                                2. find out who to ask when you have questions about some code (some version of blame)

                                you should be good. At least, that is all I ever needed from a VCS. I would be interested in hearing about other use-cases, though.

                                In general, people are wasting too much time these days cleaning up their commit history.

                                1. 5

                                  as somebody regularly doing code archeology in a project that is now 16 years old and has gone through migrations from CVS to SVN to git, to git with people knowing how to rebase for readable histories, I can tell you that doing archeology in nice single-purpose commits is much nicer than doing archeology within messy commits.

                                  So I guess it depends. If the project you’re working on is a one-off, possibly rewritten or sunset within one or two years, sure, history doesn’t matter.

                                  But if your project sticks around for the long haul, you will thank yourself for not committing typo fixes and other cleanup commits.

                                  1. 4

                                    it CAN be, but that’s what we’re trying to avoid
                                    you can get your job done either way, and cleaning up git history doesn’t take a lot of time if you think properly from the beginning. Any additional time i do spend can easily be justified by arguing for better-documented changes.

                                    1. sure
                                    2. you should not have to ask anyone, some aggregation of context, commit messages, and comments should answer any questions you have

                                    having a mistake you introduced, as well as the fix for that mistake, in the same branch before merging into a main branch is just clutter… unnecessary cognitive load. When you use git blame, it’s yet another hoop you have to jump through to find the real reason behind a change. Now, there are exceptions. Sometimes i do introduce a new feature with a problem in a branch, and happen to discover and fix it in the same branch (usually because the branch is too long-lived, which is a bad thing). I do, sometimes, decide that this conclusion is important to the narrative, and decide to leave it in.

                                    1. 2

                                      I mean…I would agree in principle except for “cleaning up git history doesn’t take a lot of time”. I think that is only true if you have already invested a lot of time into coming up with your branching and merging and squashing model and another lot of time figuring out how to implement it with your tools.

                                      I have probably incurred more cognitive overhead from reading blog posts on to-squash-or-not-to-squash et al. than I would ever incur from just skimming past “fix typo” commits in a lifetime. ;)

                                      1. 4

                                        “cleaning up source history” is such an ingrained part of my workflow that seeing you dismiss it as too costly reads to me much like “I don’t have time to make my code comprehensible by using good naming, structure and writing good docs.” Which you absolutely could justify by simply saying, “all that matters is that you get the job done.” Maybe. But I’d push back and say: what if a cleaner history and cleaner code make it easier to continue doing your job? Or easier for others to help you do the job?

                                        FWIW, GitHub made this a lot easier with their PR workflow by adding the “squash and merge” option with the ability to edit the commit message. Otherwise, yes, I’ll check out their branch locally, do fixups and clean up commit history if necessary.

                                        1. 1

                                          I could make that argument. But I didn’t because it is not the same thing.

                                          This is exactly why I gave examples and asked for more! I haven’t found any use for a clean commit history. And - also answering @pilif here - this includes a medium-sized (a couple million lines of code), 30-year-old project that had been rewritten in different languages twice and whose code base at the time consisted of 5 different languages.

                                          (The fact that cleaning up history is such an ingrained part of your work flow doesn’t necessarily mean anything good. You might also just be used to it and wasting your time. You could argue that it is so easy that it’s worth doing even if there is no benefit. Maybe that’s true. Doesn’t seem like it to me at this point.)

                                          1. 5

                                            But I didn’t because it is not the same thing.

                                            Sure, that’s why I didn’t say they weren’t the same. I said they were similar. And I said they were similar precisely because I figured that if I said they were the same, someone would harp on that word choice and point out some way in which they aren’t the same that I didn’t think of. So I hedged and just said “similar.” Because ultimately, both things are done in the service of making interaction with the code in the future easier.

                                            This is exactly why I gave examples and asked for more! I haven’t found any use for a clean commit history.

                                            I guess it seems obvious to me. And it’s especially surprising that you literally haven’t found any use for it, despite people listing some of its advantages in this very thread! So I wonder whether you’ll even see my examples as valid. But I’ll give it a try:

                                            • I frequently make use of my clean commit history to write changelogs during releases. I try to keep the changelog up to date, but it’s never in sync 100%, so I end up needing to go through commit history to write the release notes. If the commit history has a bunch of fixup commits, then this process is much more annoying.
                                            • Commit messages often serve as an excellent place to explain why a change was made. This is good not only for me, but to be able to point others to it as well. Reading commit messages is a fairly routine part of my workflow: 1) look at code, 2) wonder why it’s written that way, 3) do git blame, 4) look at the commit that introduced it. Projects that don’t treat code history well often result in a disappointing conclusion to this process.
                                            • A culture of fixup commits means that git bisect is less likely to work well. If there are a lot of fixup commits, it’s more likely that any given commit won’t build or pass tests. This means the commit likely needs to be skipped while running git bisect (see the sketch after this list). One or two of these isn’t the end of the world, but if there are a lot of them, it gets annoying and makes using git bisect harder because it can’t narrow down where the problem is as precisely.
                                            • It helps with code review enormously, especially in a team environment. At $work, we have guidelines for commit history. Things like, “separate refactoring and new functionality into distinct commits” make it much easier to review pull requests. You could make the argument that such things should be in distinct PRs, but that creates a lot more overhead than just getting the commit history into a clean state. Especially if you orient your workflow with that in mind. (If you did all of your work in a single commit and then tried to split it up afterwards, that could indeed be quite annoying!) In general, our ultimate guideline is that the commits should tell a story. This helps reviewers contextualize why changes are being made and makes reviewing code more efficient.
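
                                            As a rough sketch of the bisect flow that fixup commits get in the way of (the tag and the test command are placeholders):

                                            $ git bisect start
                                            $ git bisect bad                # current HEAD is broken
                                            $ git bisect good v1.2.0        # last known-good release
                                            # at each step, build and test the checked-out commit, then mark it:
                                            $ git bisect good               # or: git bisect bad
                                            $ git bisect skip               # when bisect lands on a commit that doesn't even build
                                            $ git bisect reset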

                                            (The fact that cleaning up history is such an ingrained part of your work flow doesn’t necessarily mean anything good. You might also just be used to it and wasting your time. You could argue that it is so easy that it’s worth doing even if there is no benefit. Maybe that’s true. Doesn’t seem like it to me at this point.)

                                            Well, the charitable interpretation would be that I do it because I find it to be a productive use of my time. Just like I find making code comprehensible to be a good use of my time.

                                            And no, clean source history of course requires dedicated effort toward that end. Just like writing “clean” code does. Neither of these things come for free. I and others do them because there is value to be had from doing it.

                                            1. 1

                                              Thanks, this is more useful for discussing. So from my experience (in the same order):

                                              1. I could see this as being useful. I simply always used the project roadmap + issue tracker for that.
                                              2. Absolutely, I wasn’t trying to argue against good commit messages.
                                              3. I understand that fix-up commits can be a bit annoying in this respect, so if you can easily avoid them you probably should. On the other hand, I need git bisect only very rarely, and fix-up commits are often trivial to identify and ignore, either by assuming they don’t exist or by ignoring the initial faulty commit.
                                              4. I am totally in favor of having refactoring and actual work in separate commits. Refactorings are total clutter. Splitting a commit which has both is a total pain (unless I am missing something) so it’s more important to put them into separate commits from the start.

                                              I mean, maybe this is just too colored by how difficult I imagine the process to be. These arguments just seem too weak in comparison to the cognitive burden of knowing all the git voodoo to clean up the history. Of course if you already know git well enough that trade-off looks different.

                                              1. 1

                                                The git voodoo isn’t that bad. It does take some learning, but it’s not crazy. Mostly it’s a matter of mastering git rebase -i and the various squash/fixup/reword/edit options. Most people on my team at work didn’t have this mastered coming in, but since we have a culture of it, it was easy to have another team member hop in and help when someone got stuck.

                                                The only extra tooling I personally use is git absorb, which automates the process of generating fixup commits and choosing which commits to squash them back into. I generally don’t recommend using this tool unless you’ve already mastered the git rebase -i process. Like git itself, git absorb is a convenient tool but provides a leaky abstraction. So if the tool fails, you really need to know how to git rebase yourself to success.
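
                                                For reference, the manual flow looks roughly like this (abc1234 stands in for whatever commit is being fixed, and main for the base branch):

                                                # record the fix as a fixup! commit targeting an earlier commit
                                                $ git add -p
                                                $ git commit --fixup=abc1234
                                                # fold all fixup! commits back into their targets in one go
                                                $ git rebase -i --autosquash main
                                                # or let git absorb pick the target commits for the staged changes,
                                                # then run the autosquash rebase as above
                                                $ git absorb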

                                                It sounds painful, but once you have rebase mastered, it’s not. Most of my effort towards clean source history is spent on writing good commit messages, and not the drudgery of making git do what I want.

                                                It sounds like we are in some agreement on what’s valuable, so perhaps we were just thinking of different things when thinking about “clean source history.”

                                                Splitting a commit which has both is a total pain (unless I am missing something) so it’s more important to put them into separate commits from the start.

                                                Indeed, it is. Often because the code needs to be modified in a certain way to make it work. That’s why our commit history guidelines are just guidelines. If someone decided it was more convenient to just combine refactoring and semantic changes together, or maybe they just didn’t plan that well, then we don’t make them go back and fix it. If it’s easy to, sure, go ahead. But don’t kill yourself over it.

                                                The important bit is that our culture and guidelines gravitate toward clean history. But just like clean code, we don’t prioritize it to the expense of all else. I suspect few others who promote clean code/history do either.

                                                N.B. When I say “clean code,” I am not referring to Bob Martin’s “Clean Code” philosophy. But rather, just the general subjective valuation of what makes code nice to maintain and easy to read.

                                2. 5

                                  For a stacked diff flow, this is necessary https://jg.gg/2018/09/29/stacked-diffs-versus-pull-requests/

                                  1. 4

                                    If you are just going to duplicate your original commit message for the new work, why not amend the original commit? Branches are yours to do with as you please until someone else may have grabbed the commit.

                                    1. 2

                                      Sure, amend, squash, modify before you push

                                      It’s not about push. It’s about sharing. I push to many branches that aren’t being or intended to be shared with others. Thus, it’s okay to rewrite history and force push in those cases.

                                    1. 8

                                      For archeology I really love git gui blame despite its dated UI. The “blame parent commit” feature is priceless. And it does a great job at tracking files across renames too

                                      I haven’t found an equivalent tool with a more modern UI yet

                                      1. 3

                                        this proprietary piece of software the article is talking about sounds like an early-2000s game server to me, where stuff like this was pretty common and Linux support was an afterthought.

                                        What I do think is pretty cool is that systemd’s declarative configuration is good enough to offer this kind of customizability (minus the documentation issue about “socket”): a setup like the one described here could of course be accomplished with traditional shell scripts, but everybody would need to write their own script or use their own distro’s custom flavor of the feature, all incompatible with each other.

                                        systemd offers this by default, and all bugs found in this setup can be fixed once and will work with every setup, even across distros.

                                        As a sysadmin, this is a big advantage for me, all the negativity surrounding systemd aside.

                                        1. 3

                                          for instance, any node with class skyscraper is considered hidden (maybe the logic is that those elements are shrouded by clouds)

                                          Is “skyscraper” some kind of design lingo that, as a non-designer, I wouldn’t necessarily know? Like the term hero image?

                                          1. 3

                                            It’s a term used to describe a certain ad banner size back in the late 90ies, early 00s.

                                            I guess the name stuck, even now that ads are practically allowed to have any size and do whatever they want with the content they are embedded in.

                                            1. 1

                                              Ah yes, the tall sidebar ads. I remember them fondly, because they were at least out of the damn way.

                                          1. 2

                                            Don’t blame the programmers. Blame the customers and/or product managers.

                                            I lost count of how many times I was aware of these issues but my attempts at getting this right were thwarted by a product manager or a customer who didn’t have the experience and insisted on useless validation or field formatting because “clearly everybody does it that way”

                                            1. 4

                                              Isn’t the complicated code in GNU libc related to locales? See: https://github.com/gliderlabs/docker-alpine/issues/144, https://www.openwall.com/lists/musl/2014/08/01/1.

                                              Maybe someone with more knowledge can comment.

                                              1. 11

                                                The funny thing is that isalnum cannot take Unicode codepoints, according to its specification. What’s the point of using locales for unsigned char? 8-bit encodings? But I’d be surprised if it handled Cyrillic in e.g. CP1251.

                                                Update: Ok, I am surprised. After generating a uk_UA.cp1251 locale, isalnum actually handles Cyrillic:

                                                $ cat > isalnum.c
                                                #include <ctype.h>
                                                #include <stdio.h>
                                                #include <locale.h>
                                                
                                                int main() {
                                                    setlocale(LC_ALL, "uk_UA.cp1251");
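                                                    /* 0xE0 is Cyrillic "а" in CP1251 */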
                                                    printf("isalnum = %s\n", isalnum('\xE0') ? "true" : "false");
                                                }
                                                
                                                $ sudo localedef -f CP1251 -i uk_UA uk_UA.cp1251
                                                $ gcc isalnum.c && ./a.out
                                                isalnum = true
                                                

                                                On the practical side, I have never used a non-utf8 locale on Linux.

                                                1. 17

                                                  I guess you haven’t been around in the 90ies then, which is incidentally also likely when the locale-related complexity was introduced in glibc.

                                                  Correctness often requires complexity.

                                                  When you are privileged enough to only need to deal with English input to software running on servers in the US, then, yes, all the complexity of glibc might feel over the top and unnecessary. But keep in mind others might not share that privilege, and while glibc provides a complicated implementation, at least it provides a working implementation that fixes your problem.

                                                  musl does not.

                                                  1. 10

                                                    Correctness often requires complexity.

                                                    In this case, it doesn’t – it just requires a single:

                                                    _ctype_table = loaded_locale.ctypes;
                                                    

                                                    when loading up the locale, so that a lookup like this would work:

                                                    #define	isalpha(c)\
                                                            (_ctype_table[(unsigned char)(c)]&(_ISupper|_ISlower))
                                                    

                                                    Often, correctness is used as an excuse for not understanding the problem, and mashing keys until things work.

                                                    Note, this one-line version is almost the same as the GNU implementation, once you unwrap the macros and remove the complexity. The complexity isn’t buying performance, clarity, or functionality.

                                                    1. 7

                                                      Interestingly, this is exactly how OpenBSD’s libc does it, which I looked up after reading this article out of curiosity.

                                                    2. 4

                                                      Locale support in standard C stuff is always surprising to me. I just never expect anything like that at this level. My brain always assumes that for anything Unicode you need a library like ICU, or a higher-level language.

                                                      1. 4

                                                        Glibc predates ICU (of a Sun ancestry) by 12 years.

                                                        1. 2

                                                          It probably does. A background project I’ve been working on is a Pascal 1973 Compiler (because the 1973 version is simple) and I played around with using the wide character functions in C99. When I use the wide character version (under Linux) I can see it loads /usr/lib/gconv/gconv-modules.cache and /usr/lib/locale/locale-archive. The problem I see with using this for a compiler though (which I need to write about) is ensuring a UTF-8 locale.

                                                        2. 2

                                                          you haven’t been around in the 90ies then

                                                          – true

                                                          only need to deal with English input to software running on servers in the US

                                                          – I’ve been using and enjoying uk_UA.utf8 on my personal machines since 2010, that is my point

                                                          The rest of your assumptions do not apply to me in the slightest (“only English input”, “servers in the US”). I agree that I just missed the time when this functionality made sense.

                                                          Still, I think the standard library of C feels like the wrong place to put internationalization. It’s definitely a weird GNUism to try to write as much end-user software in C as possible (but again, it was not an unreasonable choice at the time).

                                                          1. 3

                                                            Locale support is part of the POSIX standard and the glibc support for locales was all there was back in the 90ies; ICU didn’t exist yet.

                                                            You can’t fault glibc for wanting to be conformant to POSIX, especially when there was no other implementation at the time

                                                            1. 4

                                                              Locale support is part of the POSIX standard

                                                              Locales are actually part of C89, though the standard says any locales other than "C" are implementation-defined. The C89 locale APIs are incredibly problematic because they are unaware of threads and a call to setlocale in one thread will alter the output of printf in another. POSIX2008 adopted Apple’s xlocale extension (and C++11 defined a very thin wrapper around POSIX2008 locales, though the libc++ implementation uses a locale_t per facet, which is definitely not the best implementation). These define two things, a per-thread locale and _l-suffixed variants of most standard-library functions that take an explicit locale (e.g. printf_l) so that you can track the locale along with some other data structure (e.g. a document) and not the thread.

                                                              1. 2

                                                                Oh. Very interesting. Thank you. I know about the very problematic per-process setlocale, but I thought this mess was part of posix, not C itself.

                                                                Still. This of course doesn’t change my argument that we shouldn’t be complaining about glibc being standards compliant, at least not when we’re comparing the complexities of two implementations when one is standard compliant and one isn’t.

                                                                Not doing work is always simpler and considerably speedier (though in this case it turns out that the simpler implementation is also much slower), but if not doing work also means skipping standards compliance, then doing the work shouldn’t be counted as a bad thing.

                                                                1. 3

                                                                  I agree. Much as I dislike glibc for various reasons (the __block fiasco, the memcpy debacle, and the refusal to support static linking, for example), it tries incredibly hard to be standards compliant and to maintain backwards ABI compatibility. A lot of other projects could learn from it. I’ve come across a number of cases in musl where it half implements things (for example, __cxa_finalize is a no-op, so if you dlclose a library with musl then C++ destructors aren’t run, nor are any __attribute__((destructor)) C functions, so your program can end up in an interesting undefined state), yet the developers are very critical of other approaches.

                                                        3. 5

                                                          What’s the point of using locales for unsigned char? 8-bit encodings?

                                                          Yes, locales were intended for use with extended forms of ASCII.

                                                          They were a reasonable solution at the time and it’s not surprising that there are some issues 30 years later. The committee added locales so that the ANSI C standard could be adopted without any changes as an ISO standard. This cost them another year of work but eliminated many of the objections to reusing the ANSI standard.

                                                          If you’re curious about this topic then I would recommend P.J. Plauger’s The Standard C Library. He discusses locales and the rest of a 1988 library implementation in detail.

                                                          1. 2

                                                            Yes, locales were intended for use with extended forms of ASCII.

                                                            Locales don’t just include character sets, they include things like sorting. French and German, for example, sort characters with accents differently. They also include number separators (e.g. ',' for thousands, '.' for decimals in English, ' ' for thousands, ',' for the decimal in French). Even if everyone is using UTF-8, 90% of the locale code is still necessary (though I believe that putting it in the core C/C++ standard libraries was a mistake).
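
                                                            A quick way to see the collation part (assuming a generated de_DE.UTF-8 locale; the exact ordering comes from the locale data):

                                                            $ printf 'a\nb\nä\n' | LC_ALL=C sort
                                                            a
                                                            b
                                                            ä
                                                            $ printf 'a\nb\nä\n' | LC_ALL=de_DE.UTF-8 sort
                                                            a
                                                            ä
                                                            b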

                                                            1. 2

                                                              Date formatting is also different:

                                                              $ LC_ALL=en_US.UTF-8 date
                                                              Tue Sep 29 12:37:13 CEST 2020
                                                              
                                                              $ LC_ALL=de_DE.UTF-8 date
                                                              Di 29. Sep 12:37:15 CEST 2020
                                                              
                                                        4. 1

                                                          My post elsewhere explains a bit about how the glibc implementation works.

                                                        1. 8

                                                          It’s interesting that AVIF already has multiple independent implementations. AFAIK WebP after 10 years has only libwebp.

                                                          There’s C libavif + libaom, and I’ve made a pure Rust encoder based on rav1e and my own AVIF serializer.

                                                          1. 4

                                                            Do you think that’s related to the standardization process vs. the VP9 code dump approach (IIRC WebP is derived from VP9)?

                                                            1. 1

                                                                WebP is derived from the older VP8. That may be partly the cause, because the world quickly moved on to VP9, but I’m not really sure.

                                                              1. 1

                                                                Wasn’t it patent encumbered?

                                                                1. 1

                                                                  In the same way as AV1 is: the inventors say no but third parties make vague threats to seed FUD and make sure companies go the safe way and just license MPEG.

                                                            1. 4

                                                                I am sorry, but if you still do not know about multi-byte characters in 2020, you should really not be writing software. The 1990ies, in which you could assume 1 byte == 1 char, have long passed.
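
                                                                A quick way to see the difference in a UTF-8 locale (“Zürich” is 6 characters but 7 bytes, since ü encodes as two bytes):

                                                                $ printf '%s' 'Zürich' | wc -c   # bytes
                                                                7
                                                                $ printf '%s' 'Zürich' | wc -m   # characters
                                                                6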

                                                              1. 28

                                                                https://jvns.ca/blog/2017/04/27/no-feigning-surprise/

                                                                  Nobody was born knowing about multi-byte characters. There are always new people just learning about it, and probably lots of programmers that never got the memo.

                                                                1. 5

                                                                    The famous Joel article on Unicode is almost old enough to vote in most countries (17 years). There is really no excuse to be oblivious to this: https://www.joelonsoftware.com/2003/10/08/the-absolute-minimum-every-software-developer-absolutely-positively-must-know-about-unicode-and-character-sets-no-excuses/

                                                                    This is especially problematic if you read the last paragraph, where the author gives encryption/decryption as an example. If somebody really is messing with low-level crypto APIs, they have to know this. There is no excuse. Really.

                                                                  1. 10

                                                                    Junior Programmers aren’t born having read a Joel Spolsky blog post. There are, forever, people just being exposed to the idea of multibyte characters for the first time. Particularly if, like a lot of juniors are, they’re steered towards something like the K&R C book as a “good” learning resource.

                                                                      Whether or not this blog post in particular is a good introduction to the topic is kind of a side point. What was being pointed out to you is that expecting everyone in 2020 to have learned this topic at some point in the past is beyond silly. There are always new learners.

                                                                    1. 4

                                                                      You are aware that there are new programmers born every day, right?

                                                                    2. 4

                                                                      Right, but this author is purporting to be able to guide others through this stuff. If they haven’t worked with it enough to see the low-hanging edge cases, they should qualify their article with “I just started working on this stuff, I’m not an expert and you shouldn’t take this as a definitive guide.” That’s a standard I apply to myself as well.

                                                                      1. 2

                                                                        We should perhaps not expect newcomers to know about encoding issues, but we should expect the tools they (and the rest of us) use to handle it with a minimum of bad surprises.

                                                                      2. 8

                                                                        That’s a little harsh, everyone has to learn sometime. I didn’t learn about character encoding on my first day of writing code, it took getting bitten in the ass by multibyte encoding a few times before I got the hang of it.

                                                                        Here is another good intro to multibyte encoding for anyone who wants to learn more: https://betterexplained.com/articles/unicode/

                                                                        1. 2

                                                                          I didn’t learn about character encoding on my first day of writing code, it took getting bitten in the ass by multibyte encoding a few times before I got the hang of it.

                                                                          Right, but you’re not the one writing and publishing an article that you intend for people to use as a reference for this type of stuff. People are responsible for what they publish, and I hold this author responsible to supply their writing with the caveat that their advice is incomplete, out-of-date, or hobby-level—based, I presume, on limited reading & experience with this stuff in the field.

                                                                        2. 8

                                                                          I’m sure that if I knew you well enough, I could find three things you didn’t know that respected developers would say mean “you should really not be writing software”.

                                                                          1. 3

                                                                            Yes it’s 2020, but also, yes, people still get this wrong. 90% of packages addressed to me mangle my city (Zürich) visibly on the delivery slip, so do many local(!) food delivery services.

                                                                            Every time I make a payment with Apple Pay, the local(!) App messes up my city name in a notification (the wallet app gets it right).

                                                                            Every week I’m handling support issues with vendors delivering data to our platform with encoding issues.

                                                                            Every week somebody in my team comes to me with questions about encoding issues (even though by now they should know better)

                                                                            This is a hard problem. This is also a surprising problem (after all „it’s just strings“).

                                                                            It’s good when people learn about this. It’s good when they communicate about this. The more people write about this, the more will get it right in the future.

                                                                            We are SO far removed from these issues being consistently solved all throughout

                                                                            1. 2

                                                                              I know all that. My first name has an accented character in it. I get broken emails all the time. That still does NOT make it okay. People that write software have to know some fundamental things and character encodings is one of them. I consider it as fundamental as understanding how floats work in a computer and that they are not precise and what problems that causes.

                                                                              The article being discussed is not good and factually wrong in a few places. It is also not written in a style that makes it sound like somebody is documenting their learnings. It is written as stating facts. The tone makes a big difference.

                                                                            2. 2

                                                                              There’s a difference between knowing there’s a difference, which I suspect is reasonably common knowledge, and knowing what the difference is.

                                                                              1. 2

                                                                                There are very few things that every software developer needs to know – fewer than most lists suggest, at least. Multi-byte encodings and Unicode are about as good a candidate as exists for being included in that list.

                                                                                However, people come to software through all different paths. There’s no credential or exam you have to pass. Some people just start scripting, or writing mathematical/statistical code, and wander into doing things. Many of them will be capable of doing useful and interesting things, but are missing this very important piece of knowledge.

                                                                                What does getting cranky in the comments do to improve that situation? Absolutely nothing.

                                                                                1. 3

                                                                                  There’s no credential or exam you have to pass.

                                                                                  I think that this is one of the problems with our industry. People with zero proof of knowledge are futzing around with things they do not understand. I am not talking about hobbyists here, but about people writing software that is used to run critical infrastructure. There is no other technical profession where that is okay.

                                                                                  1. 2

                                                                                    I think our field is big and fast enough that dealing with things we don’t yet understand has just become part of the job description.

                                                                              1. 7

                                                                                I couldn’t actually read this submission as Gitlab was too much for my current internet connection, but I didn’t like this at all. I wouldn’t have had a huge problem with it except for the fact that by default I find scalable fonts to be very unreadable on non-high-resolution screens.

                                                                                This might just be personal opinion, and since getting an HD display it hasn’t been an issue for me, but I do wonder whether it was another case of the developers having better-than-average (or even better than the lowest reasonable) devices and dismissing anyone in a different situation.

                                                                                Accessibility, ha.

                                                                                1. 9

                                                                                  IMHO this is about a small minority opting to choose an outdated technology (bitmap fonts) with serious limitations but advantages for their specific use-case, and then blaming the developers for breaking their workflow.

                                                                                  Bitmap fonts stopped being universally useful with the decline of dot-matrix printers in the late 80ies, about 40 years ago and their usefulness gradually declined over the years as we got higher resolution displays, Unicode and lately very high resolution displays meant to run at 2x.

                                                                                  People still preferring bitmap fonts do not want to print them, have 1x monitors and use nothing but English. And they want maintainers to continue to support 40-year-old technology in addition to all the new requirements around font rendering for the fonts that everybody uses, because it’s convenient for them.

                                                                                  No machine made in the last ~35 years is too weak to handle vector fonts. In this instance, the argument about developers running too powerful hardware really doesn’t count any more

                                                                                  1. 1

                                                                                    the argument about developers running too powerful hardware really doesn’t count any more

                                                                                    No one said “too powerful”; the relevant question is whether developers buy fancy new hidpi displays and forget about the many users who are happy to stay on older displays with lower resolution where bitmap fonts look better.

                                                                                    1. 8

                                                                                      that is in the eye of the beholder (ever since anti-aliasing became possible I have personally preferred vector fonts - and the vast majority of users seem to agree, considering how rare bitmap fonts are these days) and, even disregarding the HiDpi support, vector fonts still have other advantages (like being printable, or having Unicode support).

                                                                                      Besides, bitmap shapes are still supported - if they are inside of an OpenType container. So a conversion of the existing fonts should totally be possible for those cases where going to a technology invented 40 years ago and widely adopted has too many drawbacks for a specific user.

                                                                                      1. 2

                                                                                        that is in the eye of the beholder

                                                                                        In my original comment I said I wouldn’t have minded except that the new system looked ugly to me by default. As far as I can tell, I didn’t claim that my opinion was fact or the only right way.

                                                                                        You keep coming back to printability and unicode. Printing totally confuses me as when I used bitmap fonts I printed with them. I’d like to clarify with unicode - are you talking about composable glyphs? I’ve had success using bitmap fonts with CJK characters, but I can assume they wouldn’t work as well with something like t҉h߫i͛s͕.

                                                                                        1. 2

                                                                                          I’d like to clarify with unicode - are you talking about composable glyphs

                                                                                          The number of characters which would need to be created is way too big for bitmap fonts, where each size would need its glyphs to be manually created and optimized. Thus the number of available bitmap fonts with a reasonable amount of Unicode coverage is relatively small (in fact, I only know of GNU Unifont, which also only contains 5 sizes and still weighs in at 9MB)

                                                                                1. 5

                                                                                  First paragraph is about no firewall running. Not sure I even want to continue reading…

                                                                                  1. 4

                                                                                    First paragraph is about no firewall running. Not sure I even want to continue reading…

                                                                                    Further along in the article, a firewall is mentioned, and it seems the recommendation is to disable ICMP responses, which can be annoying.

                                                                                    1. 5

                                                                                      Annoying is the least of it - disabling all of ICMP breaks networking in subtle ways.

                                                                                      http://shouldiblockicmp.com/

                                                                                      1. 7

                                                                                        It’s even worse for IPv6, where ICMP is used for what ARP is used for in v4 and, more importantly, where packets are never fragmented in transit and clients rely on path MTU discovery to determine the largest packet size they can send.

                                                                                        That also relies on ICMP, and those messages absolutely should pass firewalls, or we’ll forever be stuck with the smallest guaranteed packet size (1280 bytes, which is better than it was in v4, but still).
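
                                                                                        As a rough sketch, a restrictive ip6tables policy can still let the messages IPv6 depends on through (trimmed to the rules relevant here; not a complete ruleset):

                                                                                        # path MTU discovery: "packet too big" replies must reach us
                                                                                        $ ip6tables -A INPUT -p icmpv6 --icmpv6-type packet-too-big -j ACCEPT
                                                                                        # neighbour discovery is ICMPv6 too (the v6 equivalent of ARP)
                                                                                        $ ip6tables -A INPUT -p icmpv6 --icmpv6-type neighbour-solicitation -j ACCEPT
                                                                                        $ ip6tables -A INPUT -p icmpv6 --icmpv6-type neighbour-advertisement -j ACCEPT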

                                                                                      2. 1

                                                                                        Yeah, not sure when I last heard about a real ICMP flood attack, must be 10+ years ago. And no one except AWS disables it (at least I never noticed, except in company networks)…

                                                                                        1. 2

                                                                                          It’s quite commonplace for ICMP traffic to be deprioritised below other traffic types by routers—especially with off-the-shelf equipment from many large vendors—but it is, rightly, quite rare to see it filtered altogether these days. Dropping or disabling ICMP can be harmful as it throws away important information that would allow hosts to recover from some network conditions. A prominent example is path MTU discovery.

                                                                                    1. 5

                                                                                      I am puzzled why these even exist. What is the point? To have the browser be an OS?

                                                                                      1. 9

                                                                                        Yes, the dream of a PWA revolution requires the browser to have access to everything like the underlying OS does but that will never happen because it’s too easy to make malicious PWAs when there’s no central store/authority to police them.

                                                                                        I want freedom too, but the world is full of idiots who still click on “your computer is infected” popups and voluntarily install malware.

                                                                                        1. 4

They exist to allow web pages controlled, sandboxed access to resources otherwise only available to black-box native apps (which also happen to award Apple 30% of their revenue), so personally I take that privacy argument with a grain of salt.

                                                                                          1. 11

Web apps are just as black-box as native apps. It’s not like minified JavaScript or WebAssembly is in any reasonable way comprehensible.

                                                                                            1. 8

                                                                                              I would somewhat agree if Apple was the only vendor who doesn’t support these APIs, but Mozilla agrees with Apple on this issue. That indicates that there’s some legitimacy to the privacy argument.

                                                                                              1. 2

The privacy reasoning seems not too opaque, as the standard way of identifying you is creating an identifier from your browser data. If you have some hardware attached and exposed, it makes identification more reliable, doesn’t it?

                                                                                                1. 2

                                                                                                  Apple led the way early on in adding APIs to make web apps work like native mobile apps — viewports and scrolling and gesture recognition — and allowing web apps to be added as icons to the home screen.

                                                                                                  1. 2

Originally iPhone apps were supposed to be written in HTML/JS only, but then the App Store model became a cash cow and that entire idea went down the drain in favor of letting people sharecrop on their platform.

                                                                                                    1. 9

                                                                                                      I mean, too, the iOS native ecosystem is much, much, much richer and produces much better applications than even the modern web. So, maybe it’s more complicated?

                                                                                                      1. 1

                                                                                                        Agreed. I think that native apps were the plan all along; progressive-ish web app support was just a stop-gap measure until Apple finalized developer tooling. Also, given that most popular apps (not games) are free and lack in-app purchases, odds are that the App Store isn’t quite as huge of a cash cow as it is made out to be. The current top 10 free apps are TikTok, YouTube, Instagram, Facebook, Facebook Messenger, Snapchat, Cash App, Zoom, Netflix, and Google Maps. The first six make money through advertisements. Cash App (Square) uses transaction fees and charges merchants to accept money with it. Zoom makes money from paid customers. Netflix used to give Apple money but has since required that subscriptions be started from the web (if I remember correctly). Google Maps is free and ad-supported.

                                                                                                2. 1

                                                                                                  The browser already is an OS. The point of these is to have it be a more capable and competitive OS. Just so happens that at present there’s only one player who really wants that… but they’re a big one, and can throw their weight around.

                                                                                                1. 15

                                                                                                  IPv6 is just as far away from universal adoption…as it was three years ago.

                                                                                                  That seems…pretty easily demonstrably untrue? While it’s of course not a definitive, be-all-end-all adoption metric, this graph has been marching pretty steadily upward for quite a while, and is significantly higher now (~33%) than it was in 2017 (~20%).

                                                                                                  (And as an aside, it’s sort of interesting to note the obvious effect of the pandemic pushing the weekday troughs in that graph upward as so many people work from home.)

                                                                                                  1. 7

I wouldn’t count it as “adoption” if it’s basically hit or miss whether your provider does it or not. So they do the NATting for you?

                                                                                                    Still haven’t worked at any company (as an employee or being sent to the customer) where there was any meaningful adoption.

My stuff is available via v4 and v6 (unless I forget), but I don’t have IPv6 at home because I simply don’t need it. When I tried it, I had problems.

                                                                                                    Yes, I’m 100% pessimistic about this.

                                                                                                    1. 13

                                                                                                      I adopted IPv6 around 2006 and finally removed it from all my servers this year.

The “increase” in “adoption” is likely just more mobile traffic, and some providers have native v6 and NAT64 and… shocker… it sucks.

IPv4 will never go away and Geoff Huston is right: the future is NAT, always has been, always will be. The additional address space really isn’t needed, and every device doesn’t need its own dedicated IP for direct connections anyway. Your IP is not a telephone number; it’s not going to be permanent, and it’s not even permanent for servers because of GeoDNS anyway (or many servers behind load balancers, etc.). IPs and ports are session identifiers, no more, no less.

                                                                                                      You’ll never get rid of the broken middle boxes on the Internet, so stop believing you will.

                                                                                                      The future is name-based addressing – separate from our archaic DNS which is too easily subverted by corporations and governments, and we will definitely be moving to a decentralized layer that runs on top of IP. We just don’t know which implementation yet. But it’s the only logical path forward.

                                                                                                      DNSSEC and IPv6 are failures. 20+ years and still not enough adoption. Put it in the bin and let’s move on and focus our efforts on better things that solve tomorrow’s problems.

                                                                                                      1. 21

                                                                                                        What I find so annoying about NAT is that it makes hard or impossible to send data from one machine to another, which was pretty much the point of the internet. Now you can only send data to servers. IPv6 was supposed to fix this.

                                                                                                        1. 8

                                                                                                          Now you can only send data to servers

                                                                                                          It’s almost as if everyone that “counts” has a server, so there’s no need for everyone to have one. This is coherent with the growing centralisation of the Internet.

                                                                                                          1. 19

                                                                                                            It just bothers me that in 2020 the easiest way to share a file is to upload to a server and send the link to someone. It’s a bit like “I have a message for you, please go to the billboard at sunshine avenue to read it.”.

                                                                                                            1. 4

                                                                                                              There are pragmatic reasons for this. If the two machines are nearby, WiFi Direct is a better solution (though Apple’s AirDrop is the only reliable implementation I’ve seen and doesn’t work with non-Apple things). If the two machines are not near each other, they need to be both on and connected at the same time for the transfer to work. Sending to a mobile device, the receiver may prefer not to grab the file until they’re on WiFi. There are lots of reasons either endpoint may remove things. Having a server handle the delivery is more reliable. It’s more analogous to sending someone a package in a big truck that will wait outside their house until they’re home and then deliver it.

                                                                                                              1. 3

                                                                                                                Bittorrent and TCP are pretty reliable. You’re right about the ‘need to be connected at the same time’ though.

                                                                                                                1. 2

                                                                                                                  Apple’s AirDrop is the only reliable implementation I’ve seen and doesn’t work with non-Apple things

                                                                                                                  Have you seen opendrop?

                                                                                                                  Seems to work fine for me, although it’s finicky to set up.

                                                                                                                  https://github.com/seemoo-lab/opendrop

                                                                                                                2. 2

                                                                                                                  I think magic wormhole is easier for the tech crowd, but still requires both systems to be on at the same time.

                                                                                                                  1. 1

                                                                                                                    https://webwormhole.io/ works really well!

                                                                                                                3. 7

                                                                                                                  This is coherent with the growing centralisation of the Internet.

                                                                                                                  My instinct tells me this might not be so good.

                                                                                                                  1. 4

                                                                                                                    So does mine. So does mine.

                                                                                                                  2. 2

Plus ça change…

                                                                                                                    On the other hand, servers have never been more affordable or generally accessible: all you need is like $5 a month and the time and effort to self-educate. You can choose from a vast range of VPS providers, free software, and knowledge sources. You can run all kinds of things in premade docker containers without having much of a clue as to how they work. No, it’s not the theoretical ideal by any means, but I don’t see any occasion for hand-wringing.

                                                                                                                    1. 1

                                                                                                                      I’ve always assumed the main thing holding v6 back is the middle-men of the internet not wanting to lose their power as gatekeepers.

                                                                                                                    2. 6

Nobody in their right mind is going to use client machines without a firewall protecting them, and no firewall is going to accept unsolicited traffic from the wider internet by default.

                                                                                                                      Which means you need some UPnP like mechanism on the gateway anyways. Not to map a port, but to open a port to a client address.

Btw: I’m a huge IPv6 proponent for other reasons (mainly to not give centralized control to very few very wealthy parties due to address starvation), but the not-possible-to-open-connections argument I don’t get at all.

                                                                                                                      1. 8

                                                                                                                        Nobody in their right mind would let a gazillion services they don’t even know about run on their machines and let those services be contacted from the outside.

Why do (non-technical) people need a firewall to begin with? Mainly because they don’t trust the services that run on their machines to be secure. The correct solution is to remove those services, not to add a firewall or NAT that requires traversing.

                                                                                                                        Though you were talking about UPnP, so the audience there is clearly the average non-technical Windows user, who doesn’t know how to configure their router. I have no good solution for them.

                                                                                                                        1. 8

                                                                                                                          Why do (non-technical) people need a firewall to begin with? Mainly because they don’t trust the services that run on their machines to be secure

Many OSes these days run services listening on all interfaces. Yes, most of them could be rebound to localhost or the local network interface, but many don’t provide easy configurability.

                                                                                                                          Think stuff like portmap which is still required for NFS in many cases. Or your print spooler. Or indeed your printer’s print spooler.

                                                                                                                          This stuff should absolutely not be on the internet and a firewall blanket-prevents these from being exposed. You configure one firewall instead of n devices running m services.
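As a minimal sketch of the difference (the port number is an arbitrary example): a service bound to the wildcard address answers on every interface and has to be blocked again at the firewall, while the same service rebound to loopback is simply unreachable from other hosts.

    import socket

    # Bound to the wildcard address: reachable from every network the host
    # is attached to, so a perimeter firewall has to block it again.
    wide_open = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    wide_open.bind(("0.0.0.0", 6566))  # arbitrary example port
    wide_open.close()

    # Rebound to loopback: only reachable from the machine itself,
    # no firewall rule needed.
    local_only = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    local_only.bind(("127.0.0.1", 6566))
    local_only.close()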

                                                                                                                          1. 3

                                                                                                                            Crap, good point, I forgot about stuff on your local network you literally cannot configure effectively. Well, we’re back to configuring the router, then.

                                                                                                                        2. 1

                                                                                                                          If the firewall is in the gateway at home, then you can control it, and you can decide to forward ports and allow incoming connections to whatever machine behind it. If your home NAT is behind a CGNAT you don’t control, you are pretty much out of options for incoming connections.

                                                                                                                          IPv6 removes the need for CGNAT, fixing this issue.

                                                                                                                          1. 2

Of course, but I felt like my parent poster was talking from an application perspective. And for those, not much changes. An application you make and deploy on somebody’s machine still won’t be able to talk to another instance of your application on another machine by default. Stuff like STUN will remain required to trick firewalls into forwarding packets.

                                                                                                                        3. 3

                                                                                                                          Yeah but this is not a fair statement. If we had no NAT this same complaint would exist and it would be “What I find so annoying about FIREWALLS is they make it hard or impossible to send data from one machine to another…”

                                                                                                                          But do you really believe having IPv6 would allow arbitrary direct connections between any two devices on the internet? There will still have to be some mechanism for securely negotiating the session. NAT doesn’t really add that much more of a burden. The problem is when people have terribly designed networks with double NAT. These same people likely would end up with double firewalls…

                                                                                                                          1. 2

                                                                                                                            Of course, NAT has been invented for a reason, and I’d prefer having NAT over not having NAT. But for those of us that want to play around with networks, it’s a shame that we can’t do it without paying for a server anymore.

                                                                                                                            1. 1

                                                                                                                              I really do find it easier to make direct connections between IPv6 devices!

Most of the devices I want to talk to each other are both behind an IPv4 NAT, so IPv6 allows them to contact each other directly, without needing STUN servers.

                                                                                                                              Even so, Tailscale from the post linked is even easier to setup and use than IPv6, I’m a fan.

                                                                                                                          2. 17

                                                                                                                            The “increase” in “adoption” is likely just more mobile traffic

                                                                                                                            Even if so, why the scare quotes? They’re network hosts speaking Internet Protocol…do they not “count” for some reason?

                                                                                                                            You’ll never get rid of the broken middle boxes on the Internet, so stop believing you will.

                                                                                                                            Equipment gets phased out over time and replaced with newer units. Devices in widespread deployment, say, 10 years ago probably wouldn’t have supported IPv6 gracefully (if at all), but guess what? A lot of that stuff’s been replaced by things that do. Sure, there will continue to be shitty middleboxes needlessly breaking things on the internet, but that happens with IPv4 already (hard to think of a better example than NAT itself, actually).

It’s an uncharacteristic bet for me, because I’m generally a pessimistic person (and certainly so when it comes to tech stuff), but I’d wager that we’ll eventually see IPv6 become the dominant protocol and v4 fade into “legacy” status.

                                                                                                                            1. 4

                                                                                                                              I participated in the first World IPv6 Day back in 2011. We begged our datacenter customers to take IPv6. Only one did. Here’s how the conversation went with every customer:

                                                                                                                              “What is IPv6?”

                                                                                                                              It’s a new internet protocol

                                                                                                                              “Why do I need it?”

                                                                                                                              It’s the future!

                                                                                                                              “Does anyone in our state have IPv6?”

                                                                                                                              No, none of the residential ISPs support it or have an official rollout plan. (9 years later – still nobody in my state offers IPv6)

                                                                                                                              “So why do I need it?”

                                                                                                                              Some people on the internet have IPv6 and you would give them access to connect to you with IPv6 natively.

                                                                                                                              “Don’t they have IPv4 access too?”

                                                                                                                              Yes

                                                                                                                              “So why do I need it?”

                                                                                                                              edit: let’s also not forget that the BCP for addressing has changed multiple times. First, customers should get assigned a /80 for a single subnet. Then we should use /64s. Then they should get a /48 so they can have their own subnets. Then they should get a /56 because maybe /48 is too big?

                                                                                                                              Remember when we couldn’t use /127 for ptp links?

                                                                                                                              As discussed in [RFC7421], "the notion of a /64 boundary in the
                                                                                                                              address was introduced after the initial design of IPv6, following a
                                                                                                                              period when it was expected to be at /80".  This evolution of the
                                                                                                                              IPv6 addressing architecture, resulting in [RFC4291], and followed
                                                                                                                              with the addition of /127 prefixes for point-to-point links, clearly
                                                                                                                              demonstrates the intent for future IPv6 developments to have the
                                                                                                                              flexibility to change this part of the architecture when justified.
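For what it’s worth, the practical difference between the more recent recommendations is just how many /64 subnets the customer can carve out of their prefix. A quick sketch with Python’s standard ipaddress module and the documentation prefix:

    import ipaddress

    # Number of /64 subnets (one per LAN segment, under common practice)
    # available inside each of the newer customer prefix sizes mentioned above.
    for size in (48, 56, 64):
        prefix = ipaddress.ip_network(f"2001:db8::/{size}")
        print(f"{prefix}: {2 ** (64 - size)} possible /64 subnets")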
                                                                                                                              
                                                                                                                            2. 10

                                                                                                                              I adopted IPv6 around 2006 and finally removed it from all my servers this year.

Wait, you had support for IPv6 and you removed it? Did leaving it working cost you?

                                                                                                                              1. 3

Yes, it was a constant source of failures. Dual stack is bad, and people using v6 tunnels get a terrible experience. SixXS, HE, etc. should never have offered tunneling services.

                                                                                                                                1. 8

                                                                                                                                  I’m running dual stack on the edge of our production network, in the office and at my home. I have never seen any interference of one stack with another.

                                                                                                                                  The only problem I have seen was that some end-users had broken v6 routing and couldn’t reach our production v6 addresses, but that was quickly resolved. The reverse has also been true in the past (broken v4, working v6), so I wouldn’t count that against v6 in itself, though I do agree that it probably takes longer for the counter party to notice v6 issues than they would v4 ones.

                                                                                                                                  But I absolutely cannot confirm v6 to be a “constant source of failures”

                                                                                                                                  1. 3

                                                                                                                                    The only problem I have seen was that some end-users had broken v6 routing and couldn’t reach our production v6 addresses, but that was quickly resolved.

                                                                                                                                    This is the problem we constantly experienced in the early 2010s. Broken OSes, broken transit, broken ISPs. The customer doesn’t care what the reason is, they just want it to work reliably 100% of the time. It’s also not fun when due to Happy Eyeballs and latency changes the client can switch between v4 and v6 at random.

                                                                                                                                  2. 1

                                                                                                                                    Is there any data on what the tunnelling services are used for though? Just asking because some friends were just using them for easier access to VMs that weren’t public per se, or devices/services in a network (with the appropriate firewall rules to only allow trusted sources)

                                                                                                                                2. 2

                                                                                                                                  This is the first time I downvoted a post so I figure I’d explain why.

For one, you point to a future of more of the status quo: more NAT, more IPv4. But at the same time you claim the world is going to drop one of the biggest pieces of the status quo, DNS, for a wholly new name resolution service? Also, how would a decentralized networking layer be able to STUN/TURN through the 20+ layers of NAT we’re potentially looking at in our near future?

                                                                                                                                  1. 1

                                                                                                                                    Oh no, we aren’t going to drop DNS, we will just not use it for the new things. Think Tor hidden services, think IPFS (both have problems in UX and design, but are good analogues). These things are not directly tied to legacy DNS; they can exist without it. Legacy DNS will exist for a very long time, but it won’t always be an important part of new tech.

                                                                                                                                  2. 2

                                                                                                                                    The future is name-based addressing – separate from our archaic DNS which is too easily subverted by corporations and governments, and we will definitely be moving to a decentralized layer that runs on top of IP. We just don’t know which implementation yet. But it’s the only logical path forward.

So this would solve the IPv4 addressing problem? While I certainly agree that “every device doesn’t need its own dedicated IP”, the number of usable IPv4 addresses is about 3.3 billion (excluding multicast, class E, RFC 1918, localhost, /8s assigned to Ford, etc.), which really isn’t all that much if you want to connect the entire world. It’ll be a tight fit at best.

                                                                                                                                    I wonder how hard it would be to start a new ISP, VPS provider, or something like that today. I would imagine it’s harder than 10 years ago; who do you ask for IP addresses?
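A rough back-of-the-envelope check on that figure (this only subtracts the big, well-known reserved ranges; the 3.3 billion number presumably also excludes legacy corporate /8s and smaller reservations):

    # All IPv4 addresses minus the largest reserved blocks.
    total     = 2 ** 32                      # 4,294,967,296
    multicast = 2 ** 28                      # 224.0.0.0/4
    class_e   = 2 ** 28                      # 240.0.0.0/4
    rfc1918   = 2 ** 24 + 2 ** 20 + 2 ** 16  # 10/8, 172.16/12, 192.168/16
    loopback  = 2 ** 24                      # 127.0.0.0/8

    print(f"{total - multicast - class_e - rfc1918 - loopback:,}")
    # ~3.7 billion before excluding legacy /8 assignments and smaller reservations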

                                                                                                                                    1. 1

Some of the pressure on IPv4 addresses went away with SRV records. For newer protocols that baked in SRV from the start, you can run multiple (virtual) machines in a data center behind a single public IPv4 address and have the service instances run on different ports. For things like HTTP, you need a proxy because most (all?) browsers don’t look for SRV records. If you consider IP address + port to be the thing a service needs, we have a 48-bit address space, which is a bit cramped for IoT things, but ample for most server-style things.
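As a sketch of what that looks like from the client side (this assumes the third-party dnspython package and uses a made-up record name purely for illustration), the SRV lookup hands back both a target host and a port, so many instances can share one public address:

    import dns.resolver  # third-party package: dnspython

    # The SRV record carries priority, weight, target host and, crucially,
    # a port, so the service does not need a dedicated IP address of its own.
    answers = dns.resolver.resolve("_imaps._tcp.example.com", "SRV")  # made-up name
    for rr in sorted(answers, key=lambda r: (r.priority, -r.weight)):
        print(rr.target, rr.port)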

                                                                                                                                  3. 5

                                                                                                                                    That graph scares me tbh. It looks consistent with an S-curve which flattens out well before 50%. I hope that’s wrong, and it’s just entering a linear phase, but you’d hope the exponential-ish growth phase would at least have lasted a lot longer.

                                                                                                                                    1. 3

                                                                                                                                      Perhaps there’s some poetic licence there, but 13% in 3 years isn’t exactly a blazing pace, and especially if we assume that the adoption curve is S-shaped, it’s going to take at least another couple of decades for truly universal adoption.

                                                                                                                                      1. 7

It’s not 13%, it’s 65%. (13 percentage points on a ~20% base is a 65% relative increase.)
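Or, spelled out with the ~20% and ~33% figures quoted above:

    old, new = 0.20, 0.33
    print(f"{(new - old) * 100:.0f} percentage points")        # 13
    print(f"{(new - old) / old * 100:.0f}% relative increase")  # 65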

                                                                                                                                        1. 1

                                                                                                                                          Yup, right about two decades to get to 90% with S-curve growth. I mean, it’s not exponential growth, but it’s steady and two decades is about 2 corporate IT replacement lifecycles.

                                                                                                                                        2. 2

                                                                                                                                          That seems…pretty easily demonstrably untrue? While it’s of course not a definitive, be-all-end-all adoption metric, this graph has been marching pretty steadily upward for quite a while, and is significantly higher now (~33%) than it was in 2017 (~20%).

                                                                                                                                          I think that’s too simplistic of an interpretation of that chart; if you look at the “Per-Country IPv6 adoption” you see there are vast differences between countries. Some countries like India, Germany, Vietnam, United States, and some others have a fairly significant adoption of IPv6, whereas many others have essentially no adoption.

                                                                                                                                          It’s a really tricky situation, because it requires the entire world to cooperate. How do you convince Indonesia, Mongolia, Nigeria, and many others to use IPv6?

                                                                                                                                          So I’d argue that “IPv6 is just as far away from universal adoption” seems pretty accurate; once you start the adoption process it seems to take at least 10-15 years, and many countries haven’t even started yet.

                                                                                                                                          1. 1

                                                                                                                                            How do you convince Indonesia, Mongolia, Nigeria, and many others to use IPv6?

                                                                                                                                            By giving them too few IPv4 blocks to begin with? Unless they’re already hooked on carrier grade NAT, the scarcity of addresses could be a pretty big incentive to switch.

                                                                                                                                            1. 1

I’m not sure if denying an economic resource to those kinds of countries is really fair; certainly in a bunch of cases it’s probably just a lack of resources/money (or more pressing problems, like in Syria, Afghanistan, etc.)

I mean, we (the Western “rich” world) shoved the problem ahead of us for over 20 years, and now suddenly the often less developed countries actually using the fewest addresses need to spend a lot of resources to quickly implement IPv6? Meh.

                                                                                                                                              1. 2

                                                                                                                                                My comment wasn’t normative, but descriptive. Many countries already starve for IPv4 addresses.

                                                                                                                                                now suddenly the often lesser developed countries actually using the least amount of addresses need to spend a lot of resources to quickly implement IPv6?

If “suddenly” means they knew it would happen like two decades ago, and “quickly” means they’d have over 10 years to get to it… In any case, IPv6 has already been implemented in pretty much every platform out there. It’s more a matter of deployment now. The endpoints are already capable. We may have some routers that still aren’t IPv6 capable, but there can’t be that many by now, even in poorer countries. I don’t see anyone spending “a lot” of resources.

                                                                                                                                          2. 1

                                                                                                                                            perhaps the author is going by the absolute number of hosts rather than percentage

                                                                                                                                          1. 3

                                                                                                                                            The part about Apple helping with the ARM laptops made me laugh.

                                                                                                                                            They won’t even support otheros, apparently. They’ll boot into nothing but Apple code.

                                                                                                                                            1. 5

That’s not true. Secure Boot can be disabled. They made that point during this year’s WWDC, in the Platform State of the Union.

                                                                                                                                              1. 4

By disabling they mean allowing non-latest versions of macOS. Federighi said in an interview that they will not allow non-Apple OSes on Apple Silicon, and that virtualization should be enough.

                                                                                                                                                1. 3

                                                                                                                                                  Understood. I’m not much of an Apple fan, so I did of course skip WWDC.

                                                                                                                                                  I do hope you’re right and that they do not go back on their word. Else these machines would be dead weights once Apple decides not to support them anymore.

                                                                                                                                                2. 4

                                                                                                                                                  otheros

Wasn’t that something for the PlayStation 3?

                                                                                                                                                  1. 2

                                                                                                                                                    I read somewhere that they would support a Chromebook-like “unsigned boot” option that would allow alternative OSes.

                                                                                                                                                    1. 2

                                                                                                                                                      What I had read is:

                                                                                                                                                      https://www.theverge.com/2020/6/24/21302213/apple-silicon-mac-arm-windows-support-boot-camp

                                                                                                                                                      From this article:

                                                                                                                                                      Update, June 25th 7:45AM ET: Article updated with comment from an Apple VP confirming Boot Camp will not be available on ARM-based Macs.

                                                                                                                                                      I thus suspect these will be very locked down.

                                                                                                                                                      1. 10

Boot Camp is for running Windows and providing Windows drivers. Microsoft only licenses ARM Windows to its hardware partners, which Apple isn’t one of. So there is no point in providing Windows drivers if you can’t get Windows.

                                                                                                                                                        Secure Boot can be disabled by booting into recovery mode. Then the Mac does and will continue to boot whatever you want

                                                                                                                                                        1. 2

                                                                                                                                                          Hopefully Apple will indeed allow booting non-signed systems and Microsoft will re-evaluate their policies regarding non-x86 platforms.

                                                                                                                                                  1. 6

                                                                                                                                                    Note that this “vacuuming” only deals with internal fragmentation, not the external filesystem fragmentation

From the SQLite docs this does not seem accurate, as VACUUM recreates the database file. https://sqlite.org/lang_vacuum.html

                                                                                                                                                    Surely removing the old file does address filesystem fragmentation…
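For reference, running it from Python is just this (example.db being a hypothetical database file):

    import sqlite3

    # VACUUM rebuilds the database into a new file and swaps it into place,
    # eliminating internal free pages and scattered rows. Whether that new
    # file ends up contiguous on disk is up to the filesystem, not SQLite.
    conn = sqlite3.connect("example.db")
    conn.execute("VACUUM")
    conn.close()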

                                                                                                                                                    1. 1

But the new file might be re-fragmented by the OS. Applications have very little control over whether the OS chooses to allocate a contiguous chunk for their data.

                                                                                                                                                      1. 2

But why is that relevant? If the OS is fragmenting fresh files, why is that being blamed on SQLite?