1. 4

    Isn’t the complicated code in GNU libc related to locales? See: https://github.com/gliderlabs/docker-alpine/issues/144 and https://www.openwall.com/lists/musl/2014/08/01/1.

    Maybe someone with more knowledge can comment.

    1. 11

      The funny thing is that isalnum cannot take Unicode code points, according to its specification. What’s the point of using locales for unsigned char? 8-bit encodings? But I’d be surprised if it handled Cyrillic in e.g. CP1251.

      Update: Ok, I am surprised. After generating a uk_UA.cp1251 locale, isalnum actually handles Cyrillic:

      $ cat > isalnum.c
      #include <ctype.h>
      #include <stdio.h>
      #include <locale.h>
      
      int main() {
          setlocale(LC_ALL, "uk_UA.cp1251");
          printf("isalnum = %s\n", isalnum('\xE0') ? "true" : "false");
      }
      
      $ sudo localedef -f CP1251 -i uk_UA uk_UA.cp1251
      $ gcc isalnum.c && ./a.out
      isalnum = true
      

      On the practical side, I have never used a non-utf8 locale on Linux.

      1. 17

        I guess you haven’t been around in the ’90s then, which incidentally is also likely when the locale-related complexity was introduced in glibc.

        Correctness often requires complexity.

        When you are privileged enough to only need to deal with English input to software running on servers in the US, then, yes, all the complexity of glibc might feel over the top and unnecessary. But keep in mind that others might not share that privilege, and while glibc provides a complicated implementation, at least it provides a working implementation that fixes your problem.

        musl does not.

        1. 10

          Correctness often requires complexity.

          In this case, it doesn’t – it just requires a single:

          _ctype_table = loaded_locale.ctypes;
          

          when loading up the locale, so that a lookup like this would work:

          #define	isalpha(c)\
                  (_ctype_table[(unsigned char)(c)]&(_ISupper|_ISlower))
          

          Often, correctness is used as an excuse for not understanding the problem, and mashing keys until things work.

          Note, this one-line version is almost the same as the GNU implementation, once you unwrap the macros and remove the complexity. The complexity isn’t buying performance, clarity, or functionality.
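
          To make the table-driven idea concrete, here is a toy, self-contained sketch; the flag names mimic glibc’s _IS* constants, but none of this is glibc’s actual code:

          ```c
          #include <stdio.h>

          enum { IS_UPPER = 1 << 0, IS_LOWER = 1 << 1, IS_DIGIT = 1 << 2 };

          /* One 256-entry table per locale; switching locale is one array swap. */
          static unsigned char ctype_table[256];

          /* "Loading a locale" just fills in the classification bits. */
          static void load_c_locale(void) {
              for (int c = 'A'; c <= 'Z'; c++) ctype_table[c] |= IS_UPPER;
              for (int c = 'a'; c <= 'z'; c++) ctype_table[c] |= IS_LOWER;
              for (int c = '0'; c <= '9'; c++) ctype_table[c] |= IS_DIGIT;
          }

          #define my_isalpha(c) (ctype_table[(unsigned char)(c)] & (IS_UPPER | IS_LOWER))

          int main(void) {
              load_c_locale();
              printf("%d %d\n", !!my_isalpha('a'), !!my_isalpha('9'));  /* prints "1 0" */
              return 0;
          }
          ```

          A non-"C" locale would simply fill the table differently (e.g. marking 0xE0–0xFF as letters for CP1251); the lookup itself never changes.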

          1. 7

            Interestingly, this is exactly how OpenBSD’s libc does it, which I looked up after reading this article out of curiosity.

          2. 4

            Locale support in standard C stuff is always surprising to me. I just never expect anything like that at this level. My brain always assumes that for unicode anything you need a library like ICU, or a higher level language.

            1. 4

              Glibc predates ICU (which has Sun ancestry) by 12 years.

              1. 2

                It probably does. A background project I’ve been working on is a Pascal 1973 compiler (because the 1973 version is simple), and I played around with using the wide-character functions in C99. When I use the wide-character version (under Linux) I can see it loads /usr/lib/gconv/gconv-modules.cache and /usr/lib/locale/locale-archive. The problem I see with using this for a compiler, though (which I need to write about), is ensuring a UTF-8 locale.
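
                Checking for a UTF-8 locale at startup can be done with POSIX nl_langinfo; a small sketch (it assumes the user’s environment actually has a locale configured, and that the libc reports the codeset as "UTF-8" as glibc does):

                ```c
                #include <langinfo.h>
                #include <locale.h>
                #include <stdio.h>
                #include <string.h>

                int main(void) {
                    setlocale(LC_ALL, "");            /* adopt the environment's locale */
                    const char *cs = nl_langinfo(CODESET);
                    printf("codeset = %s\n", cs);
                    if (strcmp(cs, "UTF-8") != 0)
                        fprintf(stderr, "warning: not a UTF-8 locale\n");
                    return 0;
                }
                ```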

              2. 2

                you haven’t been around in the 90ies then

                – true

                only needing to deal with English input to software running on servers in the US

                – I’ve been using and enjoying uk_UA.utf8 on my personal machines since 2010, that is my point

                The rest of your assumptions do not apply to me in the slightest (“only English input”, “servers in the US”). I agree that I just missed the time when this functionality made sense.

                Still, I think the C standard library feels like the wrong place to put internationalization. It’s definitely a weird GNUism to try to write as much end-user software in C as possible (but again, it was not an unreasonable choice at the time).

                1. 3

                  Locale support is part of the POSIX standard, and glibc’s locale support was all there was back in the ’90s; ICU didn’t exist yet.

                  You can’t fault glibc for wanting to conform to POSIX, especially when there was no other implementation at the time.

                  1. 4

                    Locale support is part of the POSIX standard

                    Locales are actually part of C89, though the standard says any locales other than "C" are implementation-defined. The C89 locale APIs are incredibly problematic because they are unaware of threads: a call to setlocale in one thread will alter the output of printf in another.

                    POSIX 2008 adopted Apple’s xlocale extension (and C++11 defined a very thin wrapper around POSIX 2008 locales, though the libc++ implementation uses a locale_t per facet, which is definitely not the best implementation). These define two things: a per-thread locale, and _l-suffixed variants of most standard-library functions that take an explicit locale (e.g. printf_l), so that you can track the locale along with some other data structure (e.g. a document) rather than the thread.
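
                    A minimal sketch of what that looks like in code (assuming a POSIX-2008 libc; the "C" locale is used here because it is always available):

                    ```c
                    #define _POSIX_C_SOURCE 200809L
                    #include <locale.h>
                    #include <ctype.h>
                    #include <stdio.h>

                    int main(void) {
                        locale_t loc = newlocale(LC_ALL_MASK, "C", (locale_t)0);
                        if (loc == (locale_t)0)
                            return 1;

                        /* Affects only the calling thread, unlike setlocale(). */
                        locale_t old = uselocale(loc);

                        /* _l variants take the locale explicitly instead of using
                           the thread's or process's current one: */
                        printf("isalnum_l('a') = %d\n", isalnum_l('a', loc) != 0);  /* 1 */

                        uselocale(old);
                        freelocale(loc);
                        return 0;
                    }
                    ```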

                    1. 2

                      Oh. Very interesting. Thank you. I knew about the very problematic per-process setlocale, but I thought this mess was part of POSIX, not C itself.

                      Still, this of course doesn’t change my argument that we shouldn’t be complaining about glibc being standards-compliant, at least not when we’re comparing the complexities of two implementations where one is standards-compliant and one isn’t.

                      Not doing work is always simpler and considerably speedier (though in this case it turns out that the simpler implementation is also much slower), but if not doing the work also means skipping standards compliance, then doing the work shouldn’t be counted as a bad thing.

                      1. 3

                        I agree. Much as I dislike glibc for various reasons (the __block fiasco, the memcpy debacle, and the refusal to support static linking, for example), it tries incredibly hard to be standards compliant and to maintain backwards ABI compatibility. A lot of other projects could learn from it. I’ve come across a number of cases in musl where it half implements things (for example, __cxa_finalize is a no-op, so if you dlclose a library with musl then C++ destructors aren’t run, nor are any __attribute__((destructor)) C functions, so your program can end up in an interesting undefined state), yet the developers are very critical of other approaches.

              3. 5

                What’s the point of using locales for unsigned char? 8-bit encodings?

                Yes, locales were intended for use with extended forms of ASCII.

                They were a reasonable solution at the time and it’s not surprising that there are some issues 30 years later. The committee added locales so that the ANSI C standard could be adopted without any changes as an ISO standard. This cost them another year of work but eliminated many of the objections to reusing the ANSI standard.

                If you’re curious about this topic then I would recommend P.J. Plauger’s The Standard C Library.
                He discusses locales and the rest of a 1988 library implementation in detail.

                1. 2

                  Yes, locales were intended for use with extended forms of ASCII.

                  Locales don’t just cover character sets; they include things like sorting. French and German, for example, sort accented characters differently. They also include number separators (e.g. ',' for thousands and '.' for the decimal in English; ' ' for thousands and ',' for the decimal in French). Even if everyone is using UTF-8, 90% of the locale code is still necessary (though I believe that putting it in the core C/C++ standard libraries was a mistake).
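
                  Both points are easy to see from C; a sketch (it assumes a de_DE.UTF-8 locale is installed, and falls back to "C", where strcoll behaves like strcmp, if it is not):

                  ```c
                  #include <locale.h>
                  #include <stdio.h>
                  #include <string.h>

                  int main(void) {
                      if (setlocale(LC_ALL, "de_DE.UTF-8") == NULL)
                          setlocale(LC_ALL, "C");

                      /* strcmp() compares raw bytes, so "ä" (0xC3 0xA4 in UTF-8) comes
                         after "z"; strcoll() applies the locale's collation rules, which
                         in German place "ä" next to "a". */
                      printf("strcmp  says z < a-umlaut: %d\n", strcmp("z", "\xC3\xA4") < 0);
                      printf("strcoll says z < a-umlaut: %d\n", strcoll("z", "\xC3\xA4") < 0);

                      /* localeconv() exposes the separators ("," as decimal point in de_DE). */
                      printf("decimal_point = \"%s\"\n", localeconv()->decimal_point);
                      return 0;
                  }
                  ```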

                  1. 2

                    Date formatting is also different:

                    $ LC_ALL=en_US.UTF-8 date
                    Tue Sep 29 12:37:13 CEST 2020
                    
                    $ LC_ALL=de_DE.UTF-8 date
                    Di 29. Sep 12:37:15 CEST 2020
                    
              4. 1

                My post elsewhere explains a bit about how the glibc implementation works.

              1. 8

                It’s interesting that AVIF already has multiple independent implementations. AFAIK WebP after 10 years has only libwebp.

                There’s C libavif + libaom, and I’ve made a pure Rust encoder based on rav1e and my own AVIF serializer.

                1. 4

                  Do you think that’s related to the standardization process vs. the VP9 code dump approach (IIRC WebP is derived from VP9)?

                  1. 1

                    WebP is derived from the older VP8. That may be partly the cause, because the world quickly moved on to VP9, but I’m not really sure.

                    1. 1

                      Wasn’t it patent encumbered?

                      1. 1

                        In the same way as AV1 is: the inventors say no, but third parties make vague threats to sow FUD and make sure companies go the safe way and just license MPEG.

                  1. 4

                    I am sorry, but if you still do not know about multi-byte characters in 2020, you should really not be writing software. The 1990s, when you could assume 1 byte == 1 char, have long since passed.

                    1. 28

                      https://jvns.ca/blog/2017/04/27/no-feigning-surprise/

                      Nobody was born knowing about multi-byte characters. There’s always new people just learning about it, and probably lots of programmers that never got the memo.

                      1. 5

                        The famous Joel article on Unicode is almost old enough to vote in most countries (17 years). There is really no excuse for being oblivious to this: https://www.joelonsoftware.com/2003/10/08/the-absolute-minimum-every-software-developer-absolutely-positively-must-know-about-unicode-and-character-sets-no-excuses/

                        This is especially problematic if you read the last paragraph, where the author gives encryption/decryption as an example. If somebody really is messing with low-level crypto APIs, they have to know this. There is no excuse. Really.

                        1. 10

                          Junior programmers aren’t born having read a Joel Spolsky blog post. There are, forever, people just being exposed to the idea of multi-byte characters for the first time. Particularly if, like a lot of juniors, they’re steered towards something like the K&R C book as a “good” learning resource.

                          Whether or not this blog post in particular is a good introduction to the topic is kind of a side point. What was being pointed out to you was that expecting everyone in 2020 to have learned this topic at some point in the past is beyond silly. There are always new learners.

                          1. 4

                            You are aware that there are new programmers born every day, right?

                          2. 4

                            Right, but this author is purporting to be able to guide others through this stuff. If they haven’t worked with it enough to see the low-hanging edge cases, they should qualify their article with “I just started working on this stuff, I’m not an expert and you shouldn’t take this as a definitive guide.” That’s a standard I apply to myself as well.

                            1. 2

                              We should perhaps not expect newcomers to know about encoding issues, but we should expect the tools they (and the rest of us) use to handle it with a minimum of bad surprises.

                            2. 8

                              That’s a little harsh, everyone has to learn sometime. I didn’t learn about character encoding on my first day of writing code, it took getting bitten in the ass by multibyte encoding a few times before I got the hang of it.

                              Here is another good intro to multibyte encoding for anyone who wants to learn more: https://betterexplained.com/articles/unicode/
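
                              For anyone coming to this fresh, the core point fits in a few lines of C: in UTF-8 the number of bytes and the number of characters (code points) diverge. A minimal, locale-independent illustration (utf8_codepoints is a hypothetical helper written for this sketch, not a standard function):

                              ```c
                              #include <stdio.h>
                              #include <string.h>

                              /* Count Unicode code points in a UTF-8 string by skipping
                                 continuation bytes (those of the form 0b10xxxxxx). */
                              static size_t utf8_codepoints(const char *s) {
                                  size_t n = 0;
                                  for (; *s; s++)
                                      if (((unsigned char)*s & 0xC0) != 0x80)
                                          n++;
                                  return n;
                              }

                              int main(void) {
                                  const char *s = "Z\xC3\xBCrich";          /* "Zürich" in UTF-8 */
                                  printf("bytes = %zu\n", strlen(s));       /* 7 */
                                  printf("code points = %zu\n", utf8_codepoints(s));  /* 6 */
                                  return 0;
                              }
                              ```

                              Code that assumes 1 byte == 1 char truncates or mangles strings exactly where those two numbers differ.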

                              1. 2

                                I didn’t learn about character encoding on my first day of writing code, it took getting bitten in the ass by multibyte encoding a few times before I got the hang of it.

                                Right, but you’re not the one writing and publishing an article that you intend for people to use as a reference for this type of stuff. People are responsible for what they publish, and I hold this author responsible to supply their writing with the caveat that their advice is incomplete, out-of-date, or hobby-level—based, I presume, on limited reading & experience with this stuff in the field.

                              2. 8

                                I’m sure that if I knew you well enough, I could find three things you didn’t know that respected developers would say means “you should really not be writing software”.

                                1. 3

                                  Yes it’s 2020, but also, yes, people still get this wrong. 90% of packages addressed to me mangle my city (Zürich) visibly on the delivery slip, so do many local(!) food delivery services.

                                  Every time I make a payment with Apple Pay, the local(!) App messes up my city name in a notification (the wallet app gets it right).

                                  Every week I’m handling support issues with vendors delivering data to our platform with encoding issues.

                                  Every week somebody on my team comes to me with questions about encoding issues (even though by now they should know better).

                                  This is a hard problem. This is also a surprising problem (after all „it’s just strings“).

                                  It’s good when people learn about this. It’s good when they communicate about this. The more people write about this, the more will get it right in the future.

                                  We are SO far removed from these issues being consistently solved all throughout.

                                  1. 2

                                    I know all that. My first name has an accented character in it. I get broken emails all the time. That still does NOT make it okay. People who write software have to know some fundamental things, and character encoding is one of them. I consider it as fundamental as understanding how floats work in a computer, that they are not precise, and what problems that causes.

                                    The article being discussed is not good and factually wrong in a few places. It is also not written in a style that makes it sound like somebody is documenting their learnings. It is written as stating facts. The tone makes a big difference.

                                  2. 2

                                    There’s a difference between knowing there’s a difference, which I suspect is reasonably common knowledge, and knowing what the difference is.

                                    1. 2

                                      There are very few things that every software developer needs to know – fewer than most lists suggest, at least. Multi-byte encodings and Unicode are about as good a candidate as exists for inclusion in that list.

                                      However, people come to software through all different paths. There’s no credential or exam you have to pass. Some people just start scripting, or writing mathematical/statistical code, and wander into doing things. Many of them will be capable of doing useful and interesting things, but are missing this very important piece of knowledge.

                                      What does getting cranky in the comments do to improve that situation? Absolutely nothing.

                                      1. 3

                                        There’s no credential or exam you have to pass.

                                        I think that this is one of the problems with our industry. People with zero proof of knowledge are futzing around with things they do not understand. I am not talking about hobbyists here, but about people writing software that is used to run critical infrastructure. There is no other technical profession where that is okay.

                                        1. 2

                                          I think our field is big and fast-moving enough that dealing with things we don’t yet understand has just become part of the job description.

                                    1. 7

                                      I couldn’t actually read this submission, as GitLab was too much for my current internet connection, but I didn’t like this at all. I wouldn’t have had a huge problem with it, except for the fact that by default I find scalable fonts to be very unreadable on non-high-resolution screens.

                                      This might just be personal opinion, and since getting an HD display it hasn’t been an issue for me, but I do wonder whether this was another case of developers having better-than-average (or even better than the lowest reasonable) devices and dismissing anyone in a different situation.

                                      Accessibility, ha.

                                      1. 9

                                        IMHO this is about a small minority opting into an outdated technology (bitmap fonts) with serious limitations but advantages for their specific use case, and then blaming the developers for breaking their workflow.

                                        Bitmap fonts stopped being universally useful with the decline of dot-matrix printers in the late ’80s, some 30 years ago, and their usefulness has gradually declined since, as we got higher-resolution displays, Unicode, and lately very-high-resolution displays meant to run at 2x.

                                        People who still prefer bitmap fonts do not want to print them, have 1x monitors, and use nothing but English. And they want maintainers to continue supporting 30-year-old technology, in addition to all the new requirements around font rendering for the fonts everybody else uses, because it’s convenient for them.

                                        No machine made in the last ~35 years is too weak to handle vector fonts. In this instance, the argument about developers running too-powerful hardware really doesn’t count any more.

                                        1. 1

                                          the argument about developers running too powerful hardware really doesn’t count any more

                                          No one said “too powerful”; the relevant question is whether developers buy fancy new HiDPI displays and forget about the many users who are happy to stay on older, lower-resolution displays where bitmap fonts look better.

                                          1. 8

                                            That is in the eye of the beholder (ever since anti-aliasing became possible I have personally preferred vector fonts, and the vast majority of users seem to agree, considering how rare bitmap fonts are these days), and even disregarding HiDPI support, vector fonts still have other advantages (like being printable, or having Unicode support).

                                            Besides, bitmap shapes are still supported, as long as they are inside an OpenType container. So a conversion of the existing fonts should totally be possible for those cases where moving to the widely adopted technology has too many drawbacks for a specific user.

                                            1. 2

                                              that is in the eye of the beholder

                                              In my original comment I said I wouldn’t have minded except the new system looked ugly to me by default. As far as I can tell I didn’t say anything that my opinion was fact or the only right way.

                                              You keep coming back to printability and unicode. Printing totally confuses me as when I used bitmap fonts I printed with them. I’d like to clarify with unicode - are you talking about composable glyphs? I’ve had success using bitmap fonts with CJK characters, but I can assume they wouldn’t work as well with something like t҉h߫i͛s͕.

                                              1. 2

                                                I’d like to clarify with unicode - are you talking about composable glyphs

                                                The number of characters that would need to be created is way too big for bitmap fonts, where each size needs its glyphs manually created and optimized. Thus the number of available bitmap fonts with reasonable Unicode coverage is relatively small (in fact, I only know of GNU Unifont, which contains only 5 sizes and still weighs in at 9 MB).

                                      1. 5

                                        First paragraph is about no firewall running. Not sure I even want to continue reading…

                                        1. 4

                                          First paragraph is about no firewall running. Not sure I even want to continue reading…

                                          Further along in the article, a firewall is mentioned, and it seems the recommendation is to disable ICMP responses, which can be annoying.

                                          1. 5

                                            Annoying is the least of it – disabling all of ICMP breaks networking in subtle ways.

                                            http://shouldiblockicmp.com/

                                            1. 7

                                              It’s even worse for IPv6, where ICMP is used for what ARP is used for in v4 and, more importantly, where packets are never fragmented in transit and clients rely on path MTU discovery to determine the largest packet they can send.

                                              That also relies on ICMP, and those messages absolutely should pass firewalls, or we’ll forever be stuck with the smallest guaranteed packet size (1280 bytes, which is better than it was in v4, but still).
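
                                              As a config sketch, the usual exceptions look something like this in nftables syntax (the "inet filter input" table/chain names are placeholders; adjust to your own ruleset):

                                              ```shell
                                              # Allow the ICMPv6 messages that PMTUD and error signaling depend on.
                                              nft add rule inet filter input icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem } accept

                                              # Neighbour discovery (the v6 replacement for ARP) also rides on ICMPv6:
                                              nft add rule inet filter input icmpv6 type { nd-neighbor-solicit, nd-neighbor-advert, nd-router-advert } accept
                                              ```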

                                            2. 1

                                              Yeah, I’m not sure when I last heard about a real ICMP flood attack; it must be 10+ years ago. And no one except AWS disables it (at least I never noticed, except in company networks)…

                                              1. 2

                                                It’s quite commonplace for ICMP traffic to be deprioritised below other traffic types by routers—especially with off-the-shelf equipment from many large vendors—but it is, rightly, quite rare to see it filtered altogether these days. Dropping or disabling ICMP can be harmful as it throws away important information that would allow hosts to recover from some network conditions. A prominent example is path MTU discovery.

                                          1. 5

                                            I am puzzled why these even exist. What is the point? To have the browser be an OS?

                                            1. 9

                                              Yes, the dream of a PWA revolution requires the browser to have access to everything the underlying OS does, but that will never happen, because it’s too easy to make malicious PWAs when there’s no central store/authority to police them.

                                              I want freedom too, but the world is full of idiots who still click on “your computer is infected” popups and voluntarily install malware.

                                              1. 4

                                                They exist to allow web pages controlled, sandboxed access to resources otherwise only available to black-box native apps, which also happen to award Apple 30% of their revenue, so personally I’m taking that privacy argument with a grain of salt.

                                                1. 11

                                                  Web apps are just as black-box as native apps. It’s not like minified JavaScript or WebAssembly is in any reasonable way comprehensible.

                                                  1. 8

                                                    I would somewhat agree if Apple was the only vendor who doesn’t support these APIs, but Mozilla agrees with Apple on this issue. That indicates that there’s some legitimacy to the privacy argument.

                                                    1. 2

                                                    The privacy reason seems not too opaque: the standard way of identifying you is creating an identifier from your browser data. If you have some hardware attached and exposed, it makes identification more reliable, doesn’t it?

                                                      1. 2

                                                        Apple led the way early on in adding APIs to make web apps work like native mobile apps — viewports and scrolling and gesture recognition — and allowing web apps to be added as icons to the home screen.

                                                        1. 2

                                                        Originally iPhone apps were supposed to be written in HTML/JS only, but then the App Store model became a cash cow and that entire idea went down the drain in favor of letting people sharecrop on their platform.

                                                          1. 9

                                                            I mean, too, the iOS native ecosystem is much, much, much richer and produces much better applications than even the modern web. So, maybe it’s more complicated?

                                                            1. 1

                                                              Agreed. I think that native apps were the plan all along; progressive-ish web app support was just a stop-gap measure until Apple finalized developer tooling. Also, given that most popular apps (not games) are free and lack in-app purchases, odds are that the App Store isn’t quite as huge of a cash cow as it is made out to be. The current top 10 free apps are TikTok, YouTube, Instagram, Facebook, Facebook Messenger, Snapchat, Cash App, Zoom, Netflix, and Google Maps. The first six make money through advertisements. Cash App (Square) uses transaction fees and charges merchants to accept money with it. Zoom makes money from paid customers. Netflix used to give Apple money but has since required that subscriptions be started from the web (if I remember correctly). Google Maps is free and ad-supported.

                                                      2. 1

                                                        The browser already is an OS. The point of these is to have it be a more capable and competitive OS. Just so happens that at present there’s only one player who really wants that… but they’re a big one, and can throw their weight around.

                                                      1. 15

                                                        IPv6 is just as far away from universal adoption…as it was three years ago.

                                                        That seems…pretty easily demonstrably untrue? While it’s of course not a definitive, be-all-end-all adoption metric, this graph has been marching pretty steadily upward for quite a while, and is significantly higher now (~33%) than it was in 2017 (~20%).

                                                        (And as an aside, it’s sort of interesting to note the obvious effect of the pandemic pushing the weekday troughs in that graph upward as so many people work from home.)

                                                        1. 7

                                                          I wouldn’t count it as “adoption” if it’s basically hit-or-miss whether your provider does it or not. So they do the NATting for you?

                                                          Still haven’t worked at any company (as an employee or being sent to the customer) where there was any meaningful adoption.

                                                          My stuff is available via v4 and v6, unless I forget, because I don’t have ipv6 at home, because I simply don’t need it. When I tried it, I had problems.

                                                          Yes, I’m 100% pessimistic about this.

                                                          1. 13

                                                            I adopted IPv6 around 2006 and finally removed it from all my servers this year.

                                                            The “increase” in “adoption” is likely just more mobile traffic; some providers have native v6 and NAT64 and… shocker… it sucks.

                                                            IPv4 will never go away, and Geoff Huston is right: the future is NAT; always has been, always will be. The additional address space really isn’t needed, and every device doesn’t need its own dedicated IP for direct connections anyway. Your IP is not a telephone number; it’s not going to be permanent, and it’s not even permanent for servers, because of GeoDNS (or many servers behind load balancers, etc.). IPs and ports are session identifiers, no more, no less.

                                                            You’ll never get rid of the broken middle boxes on the Internet, so stop believing you will.

                                                            The future is name-based addressing – separate from our archaic DNS which is too easily subverted by corporations and governments, and we will definitely be moving to a decentralized layer that runs on top of IP. We just don’t know which implementation yet. But it’s the only logical path forward.

                                                            DNSSEC and IPv6 are failures. 20+ years and still not enough adoption. Put it in the bin and let’s move on and focus our efforts on better things that solve tomorrow’s problems.

                                                            1. 21

                                                              What I find so annoying about NAT is that it makes hard or impossible to send data from one machine to another, which was pretty much the point of the internet. Now you can only send data to servers. IPv6 was supposed to fix this.

                                                              1. 8

                                                                Now you can only send data to servers

                                                                It’s almost as if everyone who “counts” has a server, so there’s no need for everyone to have one. This is consistent with the growing centralisation of the Internet.

                                                                1. 18

                                                                  It just bothers me that in 2020 the easiest way to share a file is to upload to a server and send the link to someone. It’s a bit like “I have a message for you, please go to the billboard at sunshine avenue to read it.”.

                                                                  1. 4

                                                                    There are pragmatic reasons for this. If the two machines are nearby, WiFi Direct is a better solution (though Apple’s AirDrop is the only reliable implementation I’ve seen, and it doesn’t work with non-Apple things). If the two machines are not near each other, they both need to be on and connected at the same time for the transfer to work. Sending to a mobile device, the receiver may prefer not to grab the file until they’re on WiFi. There are lots of reasons either endpoint may be offline at any given moment. Having a server handle the delivery is more reliable. It’s more analogous to sending someone a package in a big truck that will wait outside their house until they’re home and then deliver it.

                                                                    1. 3

                                                                      Bittorrent and TCP are pretty reliable. You’re right about the ‘need to be connected at the same time’ though.

                                                                      1. 2

                                                                        Apple’s AirDrop is the only reliable implementation I’ve seen and doesn’t work with non-Apple things

                                                                        Have you seen opendrop?

                                                                        Seems to work fine for me, although it’s finicky to set up.

                                                                        https://github.com/seemoo-lab/opendrop

                                                                      2. 2

                                                                        I think magic wormhole is easier for the tech crowd, but still requires both systems to be on at the same time.

                                                                        1. 1

                                                                          https://webwormhole.io/ works really well!

                                                                      3. 7

                                                                        This is coherent with the growing centralisation of the Internet.

                                                                        My instinct tells me this might not be so good.

                                                                        1. 4

                                                                          So does mine. So does mine.

                                                                        2. 2

                                                                          Plus ça change…

                                                                          On the other hand, servers have never been more affordable or generally accessible: all you need is like $5 a month and the time and effort to self-educate. You can choose from a vast range of VPS providers, free software, and knowledge sources. You can run all kinds of things in premade docker containers without having much of a clue as to how they work. No, it’s not the theoretical ideal by any means, but I don’t see any occasion for hand-wringing.

                                                                          1. 1

                                                                            I’ve always assumed the main thing holding v6 back is the middle-men of the internet not wanting to lose their power as gatekeepers.

                                                                          2. 6

                                                                            Nobody in their right mind is going to use client machines without a firewall protecting them, and no firewall is going to accept unsolicited traffic from the wider internet by default.

                                                                            Which means you need some UPnP-like mechanism on the gateway anyways. Not to map a port, but to open a port to a client address.

                                                                            Btw: I’m a huge IPv6 proponent for other reasons (mainly to not give centralized control to very few very wealthy parties due to address starvation), but the not-possible-to-open-connections argument I don’t get at all.

                                                                            1. 8

                                                                              Nobody in their right mind would let a gazillion services they don’t even know about run on their machines and let those services be contacted from the outside.

                                                                              Why do (non-technical) people need a firewall to begin with? Mainly because they don’t trust the services that run on their machines to be secure. The correct solution is to remove those services, not to add a firewall or NAT that requires traversing.

                                                                              Though you were talking about UPnP, so the audience there is clearly the average non-technical Windows user, who doesn’t know how to configure their router. I have no good solution for them.

                                                                              1. 8

                                                                                Why do (non-technical) people need a firewall to begin with? Mainly because they don’t trust the services that run on their machines to be secure

                                                                                Many OSes these days run services listening on all interfaces. Yes, most of them could be rebound to localhost or the local network interface, but many don’t provide easy configurability.

                                                                                Think stuff like portmap which is still required for NFS in many cases. Or your print spooler. Or indeed your printer’s print spooler.

                                                                                This stuff should absolutely not be on the internet and a firewall blanket-prevents these from being exposed. You configure one firewall instead of n devices running m services.
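                                                                                A tiny illustration of the exposure a firewall blankets over (a Python sketch): a service bound to loopback is reachable only locally, while the all-interfaces wildcard binding, the default in much software, faces every network the machine is on:

                                                                                ```python
                                                                                import socket

                                                                                # Bound to loopback: only processes on this machine can connect.
                                                                                local_only = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                                                                                local_only.bind(("127.0.0.1", 0))

                                                                                # Bound to the wildcard address: reachable on every interface,
                                                                                # including internet-facing ones, unless a firewall intervenes.
                                                                                everywhere = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                                                                                everywhere.bind(("0.0.0.0", 0))

                                                                                local_addr = local_only.getsockname()[0]
                                                                                any_addr = everywhere.getsockname()[0]
                                                                                print(local_addr)  # 127.0.0.1
                                                                                print(any_addr)    # 0.0.0.0
                                                                                local_only.close()
                                                                                everywhere.close()
                                                                                ```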

                                                                                1. 3

                                                                                  Crap, good point, I forgot about stuff on your local network you literally cannot configure effectively. Well, we’re back to configuring the router, then.

                                                                              2. 1

                                                                                If the firewall is in the gateway at home, then you can control it, and you can decide to forward ports and allow incoming connections to whatever machine behind it. If your home NAT is behind a CGNAT you don’t control, you are pretty much out of options for incoming connections.

                                                                                IPv6 removes the need for CGNAT, fixing this issue.

                                                                                1. 2

                                                                                  Of course, but I felt like my parent poster was talking from an application perspective. And for applications, not much changes. An application you make and deploy on somebody’s machine still won’t be able to talk to another instance of your application on another machine by default. Stuff like STUN will remain required to trick firewalls into forwarding packets.

                                                                              3. 3

                                                                                Yeah but this is not a fair statement. If we had no NAT this same complaint would exist and it would be “What I find so annoying about FIREWALLS is they make it hard or impossible to send data from one machine to another…”

                                                                                But do you really believe having IPv6 would allow arbitrary direct connections between any two devices on the internet? There will still have to be some mechanism for securely negotiating the session. NAT doesn’t really add that much more of a burden. The problem is when people have terribly designed networks with double NAT. These same people likely would end up with double firewalls…

                                                                                1. 2

                                                                                  Of course, NAT has been invented for a reason, and I’d prefer having NAT over not having NAT. But for those of us that want to play around with networks, it’s a shame that we can’t do it without paying for a server anymore.

                                                                                  1. 1

                                                                                    I really do find it easier to make direct connections between IPv6 devices!

                                                                                    Most of the devices I want to talk to each other are both behind an IPv4 NAT, so IPv6 allows them to contact each other directly with STUN servers.

                                                                                    Even so, Tailscale from the post linked is even easier to set up and use than IPv6; I’m a fan.
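                                                                                    Whether a direct connection is even plausible mostly comes down to whether both endpoints hold globally routable addresses, which the standard ipaddress module can classify (a sketch using made-up example addresses):

                                                                                    ```python
                                                                                    import ipaddress

                                                                                    # Example addresses only: the first two need NAT traversal (STUN or
                                                                                    # a relay) to be reached; the global IPv6 one can accept connections
                                                                                    # directly, firewall permitting.
                                                                                    examples = [
                                                                                        "192.168.1.5",        # RFC 1918 private space
                                                                                        "100.64.0.1",         # CGNAT shared space
                                                                                        "2a00:1450:4001::1",  # global IPv6 unicast
                                                                                    ]
                                                                                    for text in examples:
                                                                                        ip = ipaddress.ip_address(text)
                                                                                        print(text, "global" if ip.is_global else "not directly reachable")
                                                                                    ```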

                                                                                2. 17

                                                                                  The “increase” in “adoption” is likely just more mobile traffic

                                                                                  Even if so, why the scare quotes? They’re network hosts speaking Internet Protocol…do they not “count” for some reason?

                                                                                  You’ll never get rid of the broken middle boxes on the Internet, so stop believing you will.

                                                                                  Equipment gets phased out over time and replaced with newer units. Devices in widespread deployment, say, 10 years ago probably wouldn’t have supported IPv6 gracefully (if at all), but guess what? A lot of that stuff’s been replaced by things that do. Sure, there will continue to be shitty middleboxes needlessly breaking things on the internet, but that happens with IPv4 already (hard to think of a better example than NAT itself, actually).

                                                                                  It’s uncharacteristic because I’m generally a pessimistic person (and certainly so when it comes to tech stuff), but I’d bet that we’ll eventually see IPv6 become the dominant protocol and v4 fade into “legacy” status.

                                                                                  1. 4

                                                                                    I participated in the first World IPv6 Day back in 2011. We begged our datacenter customers to take IPv6. Only one did. Here’s how the conversation went with every customer:

                                                                                    “What is IPv6?”

                                                                                    It’s a new internet protocol

                                                                                    “Why do I need it?”

                                                                                    It’s the future!

                                                                                    “Does anyone in our state have IPv6?”

                                                                                    No, none of the residential ISPs support it or have an official rollout plan. (9 years later – still nobody in my state offers IPv6)

                                                                                    “So why do I need it?”

                                                                                    Some people on the internet have IPv6 and you would give them access to connect to you with IPv6 natively.

                                                                                    “Don’t they have IPv4 access too?”

                                                                                    Yes

                                                                                    “So why do I need it?”

                                                                                    edit: let’s also not forget that the BCP for addressing has changed multiple times. First, customers should get assigned a /80 for a single subnet. Then we should use /64s. Then they should get a /48 so they can have their own subnets. Then they should get a /56 because maybe /48 is too big?

                                                                                    Remember when we couldn’t use /127 for ptp links?

                                                                                    As discussed in [RFC7421], "the notion of a /64 boundary in the
                                                                                    address was introduced after the initial design of IPv6, following a
                                                                                    period when it was expected to be at /80".  This evolution of the
                                                                                    IPv6 addressing architecture, resulting in [RFC4291], and followed
                                                                                    with the addition of /127 prefixes for point-to-point links, clearly
                                                                                    demonstrates the intent for future IPv6 developments to have the
                                                                                    flexibility to change this part of the architecture when justified.
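                                                                                    The arithmetic behind those shifting recommendations is easy to check with Python’s standard ipaddress module; each 8-bit step between /48, /56, and /64 changes the subnet count by a factor of 256 (using the 2001:db8:: documentation prefix as a stand-in):

                                                                                    ```python
                                                                                    import ipaddress

                                                                                    # A /48 site allocation split into /56 customer blocks.
                                                                                    site = ipaddress.ip_network("2001:db8::/48")
                                                                                    blocks_per_site = sum(1 for _ in site.subnets(new_prefix=56))

                                                                                    # A /56 block split into /64 subnets.
                                                                                    block = ipaddress.ip_network("2001:db8::/56")
                                                                                    subnets_per_block = sum(1 for _ in block.subnets(new_prefix=64))

                                                                                    print(blocks_per_site)    # 256
                                                                                    print(subnets_per_block)  # 256
                                                                                    ```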
                                                                                    
                                                                                  2. 10

                                                                                    I adopted IPv6 around 2006 and finally removed it from all my servers this year.

                                                                                    Wait, you had support for IPv6 and you removed it? Did leaving it working cost you?

                                                                                    1. 3

                                                                                      Yes, it was a constant source of failures. Dual stack is bad, and people using v6 tunnels get a terrible experience. SixXS, HE, etc. should have never offered tunneling services.

                                                                                      1. 8

                                                                                        I’m running dual stack on the edge of our production network, in the office and at my home. I have never seen any interference of one stack with another.

                                                                                        The only problem I have seen was that some end-users had broken v6 routing and couldn’t reach our production v6 addresses, but that was quickly resolved. The reverse has also been true in the past (broken v4, working v6), so I wouldn’t count that against v6 in itself, though I do agree that it probably takes longer for the counter party to notice v6 issues than they would v4 ones.

                                                                                        But I absolutely cannot confirm v6 to be a “constant source of failures”

                                                                                        1. 3

                                                                                          The only problem I have seen was that some end-users had broken v6 routing and couldn’t reach our production v6 addresses, but that was quickly resolved.

                                                                                          This is the problem we constantly experienced in the early 2010s. Broken OSes, broken transit, broken ISPs. The customer doesn’t care what the reason is; they just want it to work reliably 100% of the time. It’s also not fun when, due to Happy Eyeballs and latency changes, the client can switch between v4 and v6 at random.
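                                                                                          For reference, the first-attempt ordering Happy Eyeballs (RFC 8305) prescribes is roughly “interleave address families, IPv6 first”, so one broken family costs only one connection attempt. A toy, hypothetical version of just that ordering step:

                                                                                          ```python
                                                                                          import socket

                                                                                          def interleave_by_family(addrinfos):
                                                                                              """First-pass Happy Eyeballs ordering: prefer IPv6, then alternate
                                                                                              families so a broken family costs at most one timeout in a row."""
                                                                                              v6 = [a for a in addrinfos if a[0] == socket.AF_INET6]
                                                                                              v4 = [a for a in addrinfos if a[0] == socket.AF_INET]
                                                                                              out = []
                                                                                              for pair in zip(v6, v4):
                                                                                                  out.extend(pair)
                                                                                              longer = v6 if len(v6) > len(v4) else v4
                                                                                              out.extend(longer[min(len(v6), len(v4)):])
                                                                                              return out

                                                                                          # Synthetic resolver results: (family, address) pairs, for illustration.
                                                                                          infos = [(socket.AF_INET, "192.0.2.1"),
                                                                                                   (socket.AF_INET6, "2001:db8::1"),
                                                                                                   (socket.AF_INET, "192.0.2.2")]
                                                                                          ordered = interleave_by_family(infos)
                                                                                          print([a[1] for a in ordered])  # ['2001:db8::1', '192.0.2.1', '192.0.2.2']
                                                                                          ```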

                                                                                        2. 1

                                                                                          Is there any data on what the tunnelling services are used for though? Just asking because some friends were just using them for easier access to VMs that weren’t public per se, or devices/services in a network (with the appropriate firewall rules to only allow trusted sources)

                                                                                      2. 2

                                                                                        This is the first time I downvoted a post so I figure I’d explain why.

                                                                                        For one, you point to a future of more of the status quo: more NAT and IPv4. But at the same time you also claim the world is going to drop one of the biggest status quos, DNS, for a wholly new name-resolution service? Also, how would a decentralized networking layer be able to STUN/TURN through the 20+ layers of NAT we’re potentially looking at in our near future?

                                                                                        1. 1

                                                                                          Oh no, we aren’t going to drop DNS, we will just not use it for the new things. Think Tor hidden services, think IPFS (both have problems in UX and design, but are good analogues). These things are not directly tied to legacy DNS; they can exist without it. Legacy DNS will exist for a very long time, but it won’t always be an important part of new tech.

                                                                                        2. 2

                                                                                          The future is name-based addressing – separate from our archaic DNS which is too easily subverted by corporations and governments, and we will definitely be moving to a decentralized layer that runs on top of IP. We just don’t know which implementation yet. But it’s the only logical path forward.

                                                                                          So this would solve the IPv4 addressing problem? While I certainly agree that “every device doesn’t need its own dedicated IP”, the number of usable IPv4 addresses is about 3.3 billion (excluding multicast, class E, RFC 1918, localhost, /8s assigned to Ford, etc.), which really isn’t all that much if you want to connect the entire world. It’ll be a tight fit at best.

                                                                                          I wonder how hard it would be to start a new ISP, VPS provider, or something like that today. I would imagine it’s harder than 10 years ago; who do you ask for IP addresses?
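                                                                                          The ~3.3 billion figure is easy to sanity-check: subtracting the major reserved ranges from 2^32 with the standard ipaddress module lands around 3.7 billion before discounting legacy /8 assignments like Ford’s (a rough sketch, not an exhaustive walk of the IANA registry):

                                                                                          ```python
                                                                                          import ipaddress

                                                                                          reserved = [
                                                                                              "0.0.0.0/8",      # "this network"
                                                                                              "10.0.0.0/8",     # RFC 1918
                                                                                              "100.64.0.0/10",  # CGNAT shared space
                                                                                              "127.0.0.0/8",    # loopback
                                                                                              "169.254.0.0/16", # link-local
                                                                                              "172.16.0.0/12",  # RFC 1918
                                                                                              "192.168.0.0/16", # RFC 1918
                                                                                              "224.0.0.0/4",    # multicast
                                                                                              "240.0.0.0/4",    # class E
                                                                                          ]
                                                                                          total = 2 ** 32
                                                                                          usable = total - sum(ipaddress.ip_network(n).num_addresses for n in reserved)
                                                                                          print(usable)  # 3702390784, about 3.7 billion
                                                                                          ```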

                                                                                          1. 1

                                                                                            Some of the pressure on IPv4 addresses went away with SRV records. For newer protocols that baked in SRV from the start, you can run multiple (virtual) machines in a data center behind a single public IPv4 address and have the service instances run on different ports. For things like HTTP, you need a proxy because most (all?) browsers don’t look for SRV records. If you consider IP address + port to be the thing a service needs, we have a 48-bit address space, which is a bit cramped for IoT things but ample for most server-style things.
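                                                                                            The 48-bit figure is just the product of the two spaces:

                                                                                            ```python
                                                                                            # An IPv4 address (32 bits) combined with a TCP/UDP port (16 bits)
                                                                                            # gives a 48-bit space of distinct (address, port) service endpoints.
                                                                                            endpoints = (2 ** 32) * (2 ** 16)
                                                                                            print(endpoints == 2 ** 48)  # True
                                                                                            print(endpoints)             # 281474976710656
                                                                                            ```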

                                                                                        3. 5

                                                                                          That graph scares me tbh. It looks consistent with an S-curve which flattens out well before 50%. I hope that’s wrong, and it’s just entering a linear phase, but you’d hope the exponential-ish growth phase would at least have lasted a lot longer.

                                                                                          1. 3

                                                                                            Perhaps there’s some poetic licence there, but 13% in 3 years isn’t exactly a blazing pace, and especially if we assume that the adoption curve is S-shaped, it’s going to take at least another couple of decades for truly universal adoption.

                                                                                            1. 7

                                                                                              It’s not 13%, it’s 65%. (13 percentage points.)

                                                                                              1. 1

                                                                                                Yup, right about two decades to get to 90% with S-curve growth. I mean, it’s not exponential growth, but it’s steady and two decades is about 2 corporate IT replacement lifecycles.

                                                                                              2. 2

                                                                                                That seems…pretty easily demonstrably untrue? While it’s of course not a definitive, be-all-end-all adoption metric, this graph has been marching pretty steadily upward for quite a while, and is significantly higher now (~33%) than it was in 2017 (~20%).

                                                                                                I think that’s too simplistic of an interpretation of that chart; if you look at the “Per-Country IPv6 adoption” you see there are vast differences between countries. Some countries like India, Germany, Vietnam, United States, and some others have a fairly significant adoption of IPv6, whereas many others have essentially no adoption.

                                                                                                It’s a really tricky situation, because it requires the entire world to cooperate. How do you convince Indonesia, Mongolia, Nigeria, and many others to use IPv6?

                                                                                                So I’d argue that “IPv6 is just as far away from universal adoption” seems pretty accurate; once you start the adoption process it seems to take at least 10-15 years, and many countries haven’t even started yet.

                                                                                                1. 1

                                                                                                  How do you convince Indonesia, Mongolia, Nigeria, and many others to use IPv6?

                                                                                                  By giving them too few IPv4 blocks to begin with? Unless they’re already hooked on carrier grade NAT, the scarcity of addresses could be a pretty big incentive to switch.

                                                                                                  1. 1

                                                                                                    I’m not sure if denying an economic resource to those kinds of countries is really fair; certainly in a bunch of cases it’s probably just a lack of resources/money (or more pressing problems, like in Syria, Afghanistan, etc.)

                                                                                                    I mean, we (the Western “rich” world) shoved the problem ahead of us for over 20 years, and now suddenly the often lesser-developed countries actually using the fewest addresses need to spend a lot of resources to quickly implement IPv6? Meh.

                                                                                                    1. 2

                                                                                                      My comment wasn’t normative, but descriptive. Many countries already starve for IPv4 addresses.

                                                                                                      now suddenly the often lesser developed countries actually using the least amount of addresses need to spend a lot of resources to quickly implement IPv6?

                                                                                                      If “suddenly” means they knew it would happen two decades ago, and “quickly” means they’d have had over 10 years to get to it… In any case, IPv6 has already been implemented on pretty much every platform out there. It’s more a matter of deployment now. The endpoints are already capable. We may have some routers that still aren’t IPv6-capable, but there can’t be that many by now, even in poorer countries. I don’t see anyone spending “a lot” of resources.

                                                                                                2. 1

                                                                                                  perhaps the author is going by the absolute number of hosts rather than percentage

                                                                                                1. 3

                                                                                                  The part about Apple helping with the ARM laptops made me laugh.

                                                                                                  They won’t even support otheros, apparently. They’ll boot into nothing but Apple code.

                                                                                                  1. 5

                                                                                                    That’s not true. Secure Boot can be disabled. They made that point during this year’s WWDC, in the Platform State of the Union.

                                                                                                    1. 4

                                                                                                      By “disabling” they mean allowing non-latest versions of macOS. Federighi said in an interview that they will not allow non-Apple OSes on Apple Silicon, and that virtualization should be enough.

                                                                                                      1. 3

                                                                                                        Understood. I’m not much of an Apple fan, so I did of course skip WWDC.

                                                                                                        I do hope you’re right and that they do not go back on their word. Else these machines would be dead weights once Apple decides not to support them anymore.

                                                                                                      2. 4

                                                                                                        otheros

                                                                                                          Wasn’t that something for the PlayStation 3?

                                                                                                        1. 2

                                                                                                          I read somewhere that they would support a Chromebook-like “unsigned boot” option that would allow alternative OSes.

                                                                                                          1. 2

                                                                                                            What I had read is:

                                                                                                            https://www.theverge.com/2020/6/24/21302213/apple-silicon-mac-arm-windows-support-boot-camp

                                                                                                            From this article:

                                                                                                            Update, June 25th 7:45AM ET: Article updated with comment from an Apple VP confirming Boot Camp will not be available on ARM-based Macs.

                                                                                                            I thus suspect these will be very locked down.

                                                                                                            1. 10

                                                                                                              Boot Camp is for running Windows and providing Windows drivers. Microsoft only licenses ARM Windows to its hardware partners, which Apple isn’t one of. So there is no point in providing Windows drivers if you can’t get Windows.

                                                                                                              Secure Boot can be disabled by booting into recovery mode. Then the Mac does and will continue to boot whatever you want

                                                                                                              1. 2

                                                                                                                Hopefully Apple will indeed allow booting non-signed systems and Microsoft will re-evaluate their policies regarding non-x86 platforms.

                                                                                                        1. 6

                                                                                                          Note that this “vacuuming” only deals with internal fragmentation, not the external filesystem fragmentation

                                                                                                          From the sqlite docs this does not seem accurate, as it recreates the database file. https://sqlite.org/lang_vacuum.html

                                                                                                          Surely removing the old file does address filesystem fragmentation…

                                                                                                          1. 1

                                                                                            But the new file might be re-fragmented by the OS. Applications have very little control over whether the OS chooses to allocate a contiguous chunk for their data.
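                                                                                            Applications do get one coarse lever, though: preallocating the final file size in a single request gives the filesystem a chance to pick a contiguous extent. A hypothetical sketch with os.posix_fallocate (POSIX/Linux only, and still just a hint; the OS decides the actual layout):

                                                                                            ```python
                                                                                            import os
                                                                                            import tempfile

                                                                                            size = 1024 * 1024  # preallocate 1 MiB in one request

                                                                                            fd, path = tempfile.mkstemp()
                                                                                            try:
                                                                                                # Ask the filesystem for all the blocks at once, instead of growing
                                                                                                # the file write-by-write (which invites interleaved allocations).
                                                                                                os.posix_fallocate(fd, 0, size)
                                                                                                reported = os.path.getsize(path)
                                                                                                print(reported)  # 1048576
                                                                                            finally:
                                                                                                os.close(fd)
                                                                                                os.unlink(path)
                                                                                            ```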

                                                                                                            1. 2

                                                                                                              But why is that relevant? If the OS is fragmenting fresh files, why is that being blamed on SQLite?

                                                                                                          1. 19

                                                                                                            It seems that when you linked to the Stack Overflow question, the title got transmogrified to

                                                                                                            What is the “–>” operator in C++?

                                                                                                            The two hyphens were replaced with an en dash to result in something even more nonsensical than the original code ;-)

                                                                                                            1. 4

                                                                                                              The easiest way to see that someone is using Apple products is that their operating system breaks anything code-related like that, by replacing -- with —, replacing "" with “”, etc.

                                                                                                              Luckily, you can disable that (look for “smart punctuation”), but it seems pretty arrogant for Apple to assume that nobody cares about what characters their key presses actually result in. Two dashes and an en-dash carry completely different meanings.

                                                                                                              1. 3

                                                                                                                The people who do care about -- are much more likely to know how to write -- than the people who want an em-dash are able to write one without the -- shortcut.

                                                                                                                From that perspective, this is a sensible default.

                                                                                                                1. 2

                                                                                                                  But the consequences of automatically replacing characters like that are much bigger than the consequences of not doing it. There are so many pages on the web with shell commands which don’t work because Apple’s software automatically broke them.

                                                                                                                  1. 1

                                                                                                                    What software from Apple are you referring to?

                                                                                                                    My blog posts use a filter that converts straight quotes to curly quotes, and two dashes to an em-dash, but it of course does not affect content within code blocks. But I create all content on a Linux VPS.

                                                                                                                    1. 1

                                                                                                                      Both iOS and macOS will automatically convert ASCII quotes to Unicode “smart quotes” and two dashes to Unicode en-dashes (or em-dashes, I don’t know) in all native text fields. That means that when you use Safari on a Mac to write a blog post, the system will mess up your content before your CMS’s backend even sees the text.

                                                                                                                      1. 3

                                                                                                                        This is something that, at least on iOS, is hidden behind a Smart punctuation setting that must be opt-in—I don’t remember turning it off, and I’ve just enabled it to test it. It doesn’t ring a bell on macOS, but it might be possible.

                                                                                                                        (Later edit: Huh, seems that macOS Safari has its own text substitutions which are not synced to the OS-level settings, so it does seem that by default it substitutes smart quotes and dashes for their plain counterparts when typing)

                                                                                                                        Otherwise, the bulk of the problem is smart punctuation plugins in CMSes, when they’re not context-aware. Just yesterday I opened up an O’Reilly book on sed and awk and it had all its snippets in smart quotes…

                                                                                                                        1. 2

                                                                                                                          Thanks for the clarification, I was not aware of this behavior!

                                                                                                                  2. 2

                                                                                                                    I wouldn’t put all of the blame on Apple—blogs and CMSes do this kind of thing too. One giveaway is when someone follows a digit with a double quote, like

                                                                                                                    Now the screen should say, "The number you entered was 5".
                                                                                                                    

                                                                                                                    and the closing quote is (incorrectly) turned into a double-prime (″). The Apple OSes don’t do this, and yet it’s rampant across the web. You see it everywhere once you start looking for it.
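                                                                                                                    For illustration, here is a hypothetical context-free “smartening” filter of the kind such blogs and CMSes use (the rules and names are invented, not any particular CMS’s code). It special-cases a quote after a digit as an inches mark, which is exactly what mangles the closing quote after “5”:

```python
import re

# Invented example of a context-free smart-punctuation filter: a double
# quote after a digit is treated as an inches mark (double prime), so a
# closing quote that follows a number gets mangled.
def naive_smarten(text):
    text = re.sub(r'(?<=\d)"', "\u2033", text)  # 5" -> 5 + double prime
    text = re.sub(r'(?<=\S)"', "\u201d", text)  # quote after non-space: closing
    text = text.replace('"', "\u201c")          # remaining quotes: opening
    return text

before = 'Now the screen should say, "The number you entered was 5".'
print(naive_smarten(before))
# the closing quote comes out as a double prime instead of a curly quote
```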

                                                                                                                1. 4

                                                                                                                  The point about ‘disturbing activities’ becoming impossible to monitor is very valid. Remote access software companies like TeamViewer removed the ability to turn the user’s screen black from the free edition after scammers abused it. Encryption is a slightly different matter, however.

                                                                                                                  Equally, encrypted free video conferencing tools exist, take Jitsi Meet for example. By disabling encryption for free users, they will force those with wrong intent on to other platforms, only delegating the problem.

                                                                                                                  1. 2

                                                                                                                    By disabling encryption for free users, they will force those with wrong intent on to other platforms, only delegating the problem.

                                                                                                                    Which is an understandable stance for them to take. They had enough bad press at the beginning of the COVID lockdowns with regards to lax security. They don’t want the future bad press about whatever abuse is happening with them not doing anything (because they can’t).

                                                                                                                    Let other providers deal with that kind of bad press.

                                                                                                                    It’s up to Zoom to decide what type of customers they want and what type they don’t want.

                                                                                                                    1. 2

                                                                                                                      What was the TeamViewer thing about turning the user screen black?

                                                                                                                      1. 1

                                                                                                                        Tech support scammers used to turn their victims’ screens black to ‘fix their computers’ when in fact they would be trying to steal bank details etc. The black screen was also used by refund scammers, who used inspect element to make it seem as if the victim’s bank balance had increased. This was only possible because the ‘black screen’ function came with the free version. After TeamViewer was alerted to this, they moved the feature to the paid version. Sadly, not all companies followed suit; Zoho Assist still has this feature in the free version.

                                                                                                                        By removing desired features from the free version, you push ‘bad actors’ onto other platforms. In Zoom’s case this will be Jitsi Meet.

                                                                                                                    1. 15

                                                                                                                      No, source code these days isn’t “fundamentally wider”. Keeping things simple is still an advantage. Think about the amount of nesting which can go on beyond 80 columns. We don’t need more spaghetti in the kernel.

                                                                                                                      1. 5

                                                                                                                        I really feel current best practice leads to wider code: we use longer variable names, longer parameter names and longer function names.

                                                                                                                        We mostly stopped using goto and rely on blocks for flow control and often we pass closures to other functions, causing further indentation.

                                                                                                                      1. 1

                                                                                                                        Also interesting: JEP 369: Migrate to GitHub. Nice that one of the goals is “ensure that OpenJDK Community can always move to a different source-code hosting provider”.

                                                                                                                        1. 5

                                                                                                          Oh, that’s even more interesting. Too bad the heavy use of the GitHub API can quickly result in vendor lock-in where you cannot migrate easily. I don’t have any complaints about the API itself, but it’s sad that even though we have a distributed version control system, it cannot be used without a collection of proprietary services.

                                                                                                                          1. 4

                                                                                                            Lock-in is a risk no matter what, though. The moment you go further than plain git hosting, you take on dependencies which will cause you trouble.

                                                                                                                            Even with a self-hosted solution you will eventually get into trouble keeping up to date as OSes go out of support and/or security patches dry up.

                                                                                                                            As long as you have a valid migration plan up your sleeve, github is as good or bad as any other third-party solution and arguably so much more convenient and time-saving compared to a first-party solution that the trade-offs are still worth it.

                                                                                                            You’d rather have a PR review system that works well, is available now and is known to many users than spend multiple months of development to end up with an inferior solution that lacks user engagement and binds resources not available elsewhere - a solution you might have to throw away anyway a few years down the road because the platform you chose went out of support or doesn’t run on supported OSes.

                                                                                                            Yes, you don’t have control over GitHub’s roadmap. But compare the self-hosted landscape from 2007, when GitHub launched, to what is best practice and available now, 13 years later. And then compare the effort you would have gone through to keep up with those dependencies to what it would have taken you to keep up with GitHub.

                                                                                                            GitHub is the more stable platform. And they have a lot of incentive to keep it that way.

                                                                                                            Yes, I would love it if free and open platforms could be as feature-ful, accepted by users and easy to maintain, but they aren’t, and thus, at least for now, the positives for the project and thus for its users and developers do outweigh the drawbacks. And with every passing year, the trust put in GitHub by its users only increases.

                                                                                                                            1. 2

                                                                                                                              Thank you for your insight Glenn!

                                                                                                                            2. 4

                                                                                                                              If only it were legal to reimplement an API…

                                                                                                                              1. 3

                                                                                                                                @ianloic I chuckled loudly when I read this. 😂

                                                                                                                          1. 2

                                                                                                                            I made something like this (though with a bookmarklet rather than an extension or add-on) as a fun project in 2010 when node.js was still very, very new and I wanted to try my hands on that new-fangled server-side JS.

                                                                                                                            I even got to talk about it on jsconf.eu (it was my first ever public speaking engagement, so be very patient). And I did write a series of blog posts about its development (remember: in 2010, node was very new, so this was interesting).

                                                                                                                            It was all fun and games until two somewhat mainstream-y articles were posted about it. That’s when the spammers found out that this was practically a free relay because they would request one alias for each spam mail they wanted to send out, abusing my infrastructure’s good mail sending rep.

                                                                                                                            I tried battling them for a few months but then I came to the conclusion that I must be mad to sink so much time into a fun project and pulled the plug.

                                                                                                            I wish Mozilla the best of luck with their endeavour.

                                                                                                                            1. 4

                                                                                                                              I am not surprised. Several people have been nagging me to run this. I’ve always told them that I already run unbound(8), and that pi-hole is PHP, so it’s a non-starter.

                                                                                                              If they knew what they were doing they’d use something other than PHP.

                                                                                                                              1. 8

                                                                                                                                Sure, and the blog post itself is on Wordpress… so also PHP.

                                                                                                                                I did appreciate the effort put into the post to see how exploitable it might be, including the limitations in this particular case. I’m also not a massive fan of PHP, but it’s part of a few things I use, and I’m aware that it’s there. In my case, it’s Pi-Hole, pfSense and TTRSS.

                                                                                                                The thing that ties into both aspects here is that this is an authenticated issue. In other words, you would have to be currently logged in to perform the RCE. For my uses, I’m the only person logged into any of these PHP services, though everyone on my home network gets the benefit of the first two. The vulnerability is present in this case, but in the theme of, “authenticated admin user has elevated access.” Of course they do, and that same admin probably installed the service in the first place.

                                                                                                                                1. 2

                                                                                                                                  And there appears to be a CSRF check, which is pretty important - otherwise even if it’s on a private LAN and authenticated-only, visiting any site that carries the payload is enough to cause this to fire.

                                                                                                                                  1. 1

                                                                                                                                    Excellent point. That’s the sort of thing that makes home routers such a target too, along with default admin credentials and default internal IP addresses.

                                                                                                                                    I haven’t checked, but Pi-Hole also has a fairly short session time-out, meaning that the window that someone might exploit this issue (if there was no CSRF check) should be short.

                                                                                                                                2. 4

                                                                                                                                  unbound has had its fair share of CVEs too, so what’s your point? (and for the record, I also run unbound)

                                                                                                                                  1. 4

                                                                                                                                    To be fair, the bug was missing regex anchoring followed by missing quoting of shell arguments, both of which unfortunately are very common in programming in general, independent of the chosen language.

                                                                                                                    I would blame PHP for its various command-execution APIs that just take a string to feed into the shell, and the very few that take an arguments array, which are also inconvenient one way or another - but that’s also true for many other languages.

                                                                                                                                    I think it’s bad that stuff like this still happens at all, but even after years of experience and even though I’m somewhat sensitized to such issues, they do still happen even to myself. Rarely, but they happen.

                                                                                                                                    Security is hard. You only need to screw up once to have the same security as if you screwed up everywhere :-(
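                                                                                                                    The two mistakes described above can be sketched in a few lines (Python for brevity, not the actual Pi-hole PHP; the function names are invented):

```python
import re

# Sketch of the bug class the parent describes: an unanchored regex
# "validation" followed by building a shell command via string
# concatenation, versus the anchored, argv-list alternative.
def build_cmd_unsafe(domain):
    # re.search matches anywhere in the string, so "example.com; id"
    # passes validation despite containing a shell metacharacter.
    if not re.search(r"[a-z0-9.-]+", domain):
        raise ValueError("bad domain")
    return "host " + domain  # if this string reaches a shell: injection

def build_cmd_safe(domain):
    # fullmatch anchors the pattern to the whole input, and an argv
    # list passed to e.g. subprocess.run() never touches a shell.
    if not re.fullmatch(r"[a-z0-9.-]+", domain):
        raise ValueError("bad domain")
    return ["host", domain]

print(build_cmd_unsafe("example.com; id"))  # injected command survives
print(build_cmd_safe("example.com"))
```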

                                                                                                                                    1. 4

                                                                                                                                      You can write bad code in any language, good code in most. Whinging about this or that language not being up to snuff is boring. Don’t like PHP? Don’t use it then. Same goes for C, Java, Javascript and COBOL for all I care. Reading about some person’s dislike of language A is almost as boring as reading about the same person extolling the virtues of fashionable language B.

                                                                                                                                      1. 2

                                                                                                                                        And yet not all languages are equal. For different problems, some languages can be said to be objectively better than other languages.

                                                                                                                                        1. 1

                                                                                                                                          Sure, but that is not what this is about. PHP is objectively better for writing webby things than, say, assembly or COBOL or APL. For writing an operating system I’d rather use assembly and other low-level languages, etc. What this is about is the generic whinging which some are wont to produce when confronted with popular-unpopular languages.

                                                                                                                                          1. 2

                                                                                                                                            So why is a complaint annoying that could, if one is being gracious, be read as a criticism of PHP in the network stack and DNS space? Wouldn’t that be within the bounds of acceptable discourse, into which your comment comes just to remind everybody how such comments are to be read?

                                                                                                                                            1. 1

                                                                                                                                              Generic comments referring to opinionated refutations of languages are always annoying since they do not add anything of value to the discussion, they’re the language version of ‘Windoze sucks’, ‘leenucks sucks even more’, ‘Macs are drool-proof because they are made for babies’ and such.

                                                                                                                              It would be different if the criticism was pointed and accurate, i.e. if the CVE was clearly caused by some deficiency in PHP which would have been avoided had the developers only used $language_a or $language_b.

                                                                                                                                    1. 8

                                                                                                                                      Using plain PHP templates is a bad idea because

                                                                                                                                      • it relies on the fact that PHP treats accesses to undefined variables as something relatively normal
                                                                                                                                      • it doesn’t do any kind of string quoting/escaping by default and there’s no way to add default processing. Yes, you could be using htmlspecialchars everywhere, but forget it once and you have an XSS at your hand. Proper template engines escape by default. Forget to mark something as HTML and you have visible markup on the page which is way better than XSS.
                                                                                                                      • PHP templates allow unmitigated access to global state, and because PHP keeps request state in mutable global dictionaries, this means that PHP templates can even mutate request state at will.
                                                                                                                      • include() puts the template file into the current scope, so a template gets access to all of the variables in scope inside of the rendering function (which, as the article explained, also includes $this).
                                                                                                                      • Because the templates are plain PHP, there’s nothing a template can’t do, including accessing external resources, reading the file system, etc. Yes, you shouldn’t put business logic in your templates, but it happens, and then you’re screwed a few years down the line.

                                                                                                                                      People have invented template engines for reasons. Most of them were and still are valid reasons.
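                                                                                                                      The escape-by-default point is the crucial one, and it fits in a few lines. A sketch in Python for brevity (the Safe wrapper and {name} placeholder syntax are invented, not any real engine’s API):

```python
import re
from html import escape

# Escape-by-default: every substituted value is HTML-escaped unless it
# is explicitly wrapped as trusted markup. Forgetting the wrapper costs
# you visible markup on the page, not an XSS.
class Safe(str):
    """Marks a string as trusted HTML that must not be escaped again."""

def render(template, **values):
    def substitute(match):
        value = values[match.group(1)]
        return str(value) if isinstance(value, Safe) else escape(str(value))
    return re.sub(r"\{(\w+)\}", substitute, template)

print(render("<p>Hello {name}</p>", name="<script>alert(1)</script>"))
# the payload is escaped without the template author doing anything
```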

                                                                                                                                      1. 2

                                                                                                                        Author here. You are mostly right, but in most cases I’d consider those features and not necessarily problems. Those features help keep things simple. Of course you can abuse those features, but you shouldn’t.

                                                                                                                                        As for the escaping to avoid XSS: you are very right. This is the weakest point of this approach to doing templates and requires a certain amount of developer discipline when designing the templates…

                                                                                                                                        1. 2

                                                                                                                                          I’d consider those a feature and not necessarily a problem

                                                                                                                                          so did I nearly 20 years ago and now I wish I hadn’t.

                                                                                                                                          Of course you can abuse those features, but you shouldn’t.

                                                                                                                          people always think that, and poof, 10 years later (if you’re lucky; probably sooner) they drown in technical debt and the rewriting effort starts to get going.

                                                                                                                                          requires a certain amount of developer discipline when designing the templates

                                                                                                                          discipline doesn’t work. Never has. You only need to forget a single htmlspecialchars() to get the equivalent security of having none. A solution that requires the developer to take care of all instances of something, when all an attacker needs is a single instance, can’t scale.

                                                                                                                                          Simplicity is nice, but not at this price.

                                                                                                                                          1. 2

                                                                                                                                            I appreciate where you are coming from. I’ve worked in “enterprise” software development where my approach would make people lose their shit. I’m trying to get that way of working out of my system and build neat simple solutions that are not perfect, but are worth considering, probably for many (smaller) projects. If nothing else, it introduces you to some nice long forgotten PHP features :-)

                                                                                                                                          2. 1

                                                                                                                                            It would be possible to write a template validator that inspects template PHP files and has a whitelist of acceptable PHP features. Things like <?=$var?> could be flagged, and the Tpl class could have an extra function so that you can do <?=$this->unescaped($var)?> if you really mean it. You only need to run the validator when you ship the template, in the same way you already run the rest of your code through Phan and/or Psalm.

                                                                                                                                            About $this being in the scope, I do think that’s a feature, but for shorter templates, would it be possible to make in-scope functions, so that you can write <?=e($var)?> instead of <?=$this->e($var)?>?

                                                                                                                                          3. 2

                                                                                                                                            This is a bit contradictory as all PHP template engines suffer from these problems.

                                                                                                                                            1. 2

That doesn’t mean they’re good. There are template engines that can give you at least default escaping. I used to maintain PHPTAL, which has context-sensitive escaping and even ensures HTML is well-formed.

                                                                                                                                              Some template engines claim to be “universal” or “format-independent”, so that you can use the same syntax for HTML, e-mails, and even SQL if you want. But in practice it means they’re not good for anything: you get XSS, messed up e-mail encodings, and SQL injections.

The “just don’t write bugs” approach doesn’t work, so you really need a template engine where a security vulnerability isn’t the default behavior.

                                                                                                                                              1. 2

                                                                                                                                                There is no default escaping. There is only code. If you rely on the template engine you’re relying on someone else to do the escaping for you.

                                                                                                                                                All output is done using ‘echo’, ‘print’, etc. Make a custom “always escaping” function or method, and you have just as good “default escaping” as any template engine can provide.
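A minimal sketch of that “always escaping” idea (the names out and out_raw are invented for illustration): make the safe call the short, convenient one, and make the raw one loud and greppable.

```php
<?php
// Sketch of a custom always-escaping output function, as described above.
// out() escapes unconditionally; out_raw() is the deliberate opt-out.
function out($value): void
{
    echo htmlspecialchars((string)$value, ENT_QUOTES, 'UTF-8');
}

function out_raw(string $trustedHtml): void
{
    // Deliberately distinct name: trivial to grep for in code review.
    echo $trustedHtml;
}
```

Whether this is “just as good” as an engine’s default escaping is exactly what the rest of the thread argues about: the helper exists, but nothing forces a template to call it.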

                                                                                                                                                1. 2

                                                                                                                                                  “There is only code” entirely misses the point of secure defaults. If you have to remember to use an escaping function, you will eventually forget it, and create an XSS vuln.

                                                                                                                                                  PHP templates == XSS, and this is a people problem, not a code problem.

                                                                                                                                                  PHP makes it particularly messy:

                                                                                                                                                  • Humans are bad at noticing absence of things, so a code review is more likely to miss <?=$foo than it would ${foo|unsafe} (both equally risky, but one looks more innocent).

                                                                                                                                                  • Escaping is technically required pretty much everywhere in HTML for syntax correctness, but there’s a commonly held belief that escaping is only for “untrusted” data or strings that “contain unsafe characters”. Or that strings can be “sanitized” by stripping tags. This creates disagreements about what even has to be escaped.

                                                                                                                                                  1. 2

You’re missing the point. The people who make the engine also have to remember to escape and so on, and they will eventually forget it too. It doesn’t matter.

Besides, when you add a huge monster of a template engine, the chance of errors, mistakes, and security issues increases exponentially.

                                                                                                                                                    Adding some engine doesn’t automatically solve these problems. Good coding solves these problems.

                                                                                                                                                    1. 1

There are far fewer boundaries/inputs in the template engine than in your own code, and the effort spent securing it is amortized across everyone who uses it.

                                                                                                                                                      “Good coding” doesn’t solve the massive safety issues we have with programming the same way that “good driving” makes seatbelts redundant.

                                                                                                                                            2. 1

                                                                                                                                              I think it’s fine for smaller projects; e.g. the type where everything is just in one or a few pages. Not having an external dependency is a pretty good advantage in those cases.

                                                                                                                                            1. 15

I find it funny that most of these big tech companies claim they’re all in on going environmentally friendly, yet you’re forced to throw a perfectly good piece of hardware into the trash after a few years. E-waste recycling doesn’t count, because I feel most people don’t do it, and it’s arguably a joke anyway. Most of your e-wasted stuff ends up on a boat to a developing nation, where it’s ripped apart by hand and the toxic chemicals get into the local water supply.

                                                                                                                                              1. 3

                                                                                                                                                Consider a plausible alternative:

Assume that a big vendor builds its hardware thick/strong enough to last for ten years physically. Say a phone with thicker, stronger glass in front, a battery designed for longevity, a strong case made of thick metal, and a stock of spare parts large enough that, even if the average customer uses the device for ten years, the demand for repairs is 95% likely to be met. Assume it sells millions, like the real devices in the real world. Assume further that most buyers replace it after two or three years anyway, perhaps because some new apps ask for newer, faster CPUs or more RAM/storage. How much glass, metal, and chemicals did the vendor waste on building longevity?

                                                                                                                                                It’s not obvious to me that catering to the people who want longevity is a net win over today’s state.

                                                                                                                                                1. 4

What you are describing for a substantial part of this post is a rugged device, which is an entirely different class. You don’t need a strong metal case and thick glass to keep safe a device that sits on the couch most of the time.

No one disagrees that whether it breaks when someone sits on it is a tradeoff to make.

                                                                                                                                                  The post is talking about software longevity, which is much easier to create.

                                                                                                                                                  1. 2

                                                                                                                                                    Actually, what I had in mind was a phone rugged enough to survive in a pocket for years without being bent out of shape or destroyed by a keyring, and with a battery designed to be charged n thousand times over many years instead of being designed for quick charging and maximum initial capacity.

Are phones carried around in pockets a particularly rugged class of device? I think not.

                                                                                                                                                    EDIT: Wait, are you suggesting that manufacturers should provide software updates well after the end of a device’s expected physical lifetime?

                                                                                                                                                    1. 4

                                                                                                                                                      No, I’m suggesting that manufacturers should not block people from providing their own software updates.

                                                                                                                                                      1. 1

                                                                                                                                                        I think you mean “should not bootlock their devices”, right?

                                                                                                                                                        As I understand it (from idling in the LineageOS irc channel) the big problem of devices like the one in the blog post is that the hardware is buggy and there’s no documentation, not even a sensible git commit log for the kernel source. MediaTek in particular has a rotten reputation among LineageOS contributors. Some/many devices are also bootlocked and that could be a big problem, but in a way it isn’t. The comparable devices that aren’t bootlocked, or for which security bugs are well known, don’t have a lot of LineageOS ports, see?

                                                                                                                                                        Whether you’re killed by two bullets or three doesn’t make much difference, you’re just as dead.

You could of course demand documentation, sensible driver code, and an open bootloader. That at least makes it simple to spot unsuitable vendors.

                                                                                                                                                      2. 1

                                                                                                                                                        My old dumbphone (a Sony Ericsson w710i) is still alive and kicking after almost 15 years. I’ve dropped it countless times, I carry it in the same pocket as my keys. Battery life is still pretty good (I charge it once a week or so). I would definitely consider it “rugged”.

                                                                                                                                                        1. 1

                                                                                                                                                          Phone hardware tech took a giant step back in reliability with the advent of glass fronted touchscreens (don’t @ me). Older Nokias and Ericsson phones were really durable.

                                                                                                                                                    2. 3

                                                                                                                                                      Assume further that most buyers replace it after two or three years anyway, perhaps because some new apps ask for newer, faster CPUs or more RAM/storage.

                                                                                                                                                      I don’t think that’s a given. Games consoles come to mind - developers are able to wring more and more out of the same hardware over its lifespan. There’s lots of room for creativity given that kind of constraint, and I’d love to see it happen in mobile development.

                                                                                                                                                      1. 3

                                                                                                                                                        No doubt a global recession will facilitate this…

Consoles aren’t really analogous. The console makers often make a decent amount of money from each game sold - the hardware is a loss leader - so it makes sense to provide games for as long as possible for each console generation.

                                                                                                                                                        Phone/tablet hardware makers (apart from Apple) don’t make much money after the initial sale.

                                                                                                                                                        1. 4

Consoles aren’t really analogous. The console makers often make a decent amount of money from each game sold - the hardware is a loss leader - so it makes sense to provide games for as long as possible for each console generation.

They are - kind of. And they are a good example of how long-term stability yields improvements, by giving people the chance to gain experience.

The price of the platform[1] is roughly stable, and the sturdiness of the platform is a major part of its marketing and success. The makers simply chose a negative per-unit margin. Still, the consoles are constantly being changed internally.

Apple is a good example on the high end: their margins after the sale aren’t that high, and they struggle to sell monthly services. They are actively moving to improve that. Google has also made the model work for them: they get fees from vendors for certification and have vendors build the hardware platform for them.

One could say that consoles are a prime example of someone looking at the problem and making their business work early!

                                                                                                                                                          [1]: For those interested, there’s a good interview on the background of the XBox 360 with their German marketing manager, sadly in German: https://www.stayforever.de/2019/12/xbox-360/. Just as an example: he makes the interesting point that consoles built out of standard components can’t drop in price over their lifespan as components need to be constantly changed - they will just not be fabricated anymore. That’s also the reason why HD sizes get bigger over the lifecycles: the vendors just don’t sell smaller ones anymore.

                                                                                                                                                          1. 1

Thanks a lot for expanding on this. Very interesting.

                                                                                                                                                            FWIW I’m a satisfied Xbox user and I really appreciate how easy it is to acquire older games at good prices without having to hunt around for used items[1]

                                                                                                                                                            [1] and yes I know this is probably bad for resale and for people owning physical media but it’s so damn convenient!

                                                                                                                                                        2. 2

                                                                                                                                                          Of course it’s not a given. The question is how often it would happen, and whether the percentage would be such that the resource waste would be larger than the waste due to the actual short lifecycles.

                                                                                                                                                          1. 1

                                                                                                                                                            Games consoles come to mind - developers are able to wring more and more out of the same hardware over its lifespan.

                                                                                                                                                            The problem is that developers aren’t targeting a ten year old device, they’re targeting at most two year old devices. As long as only a tiny portion of their target market is using old devices, this won’t change. In the console case, the developer could motivate the extra optimization work with the knowledge that all their users would see the benefit since they’re all using known hardware.

It’d help if OS vendors would support devices for longer periods, of course. I think we’ll need to give them some slack for the initial period of mobile device growth, since device capabilities grew enormously year to year, but today I don’t think it’s unreasonable to require OS updates for ten years.

                                                                                                                                                            1. 1

The problem is even worse than this: you don’t even have to look for old devices. Lower-end devices are already in the range where you are targeting a spec from five years ago.

Apple got this a little in check by reminding developers that the devices get beefier for more multitasking, not for one application taking more space. I still run around with an iPhone SE and have absolutely no performance issues - I just have to use fewer background services than on an X. So developers in that ecosystem are already used to supporting at least five-year-old devices.

But this problem is really hard if you want to enter the lower-end market: Firefox OS partly tanked because it was mainly targeting cheap devices, a device class that well-paid developers would never use. For that reason, the reference devices were also much beefier than the deploy target, which led to a very unusable ecosystem.

                                                                                                                                                              There’s a huge underserved market untapped because serving the top end pays well enough.

                                                                                                                                                              1. 1

                                                                                                                                                                Yeah, I was imagining a world where mobile devices have a 10-year lifespan and 10-year release frequency to match. I’m guessing batteries are the main thing holding that back at the moment (edit: or they could just be replaceable).

                                                                                                                                                          2. 3

                                                                                                                                                            People can simply not use the device.

                                                                                                                                                            The correct way to vote with your wallet when nobody makes an item that has the attribute you want (longevity) is to not buy any more. Instead, what most people do is buy a new item, and complain on the internet.

                                                                                                                                                            Many times I have read and heard “watch what people do, not what they say”. Corporations do.

                                                                                                                                                            As long as consoomers continue to consoom, companies will continue to produce.

                                                                                                                                                            1. 1

                                                                                                                                                              but yet you’re forced to throw a perfectly good piece of hardware into the trash after a few years

At what point is it the ethical thing to force “perfectly good pieces” of hardware into the trash, if not doing so compromises the security of however many “even better” devices? Is there a threshold? Is it based on the number of securable-but-kept-insecure devices? Is it based on the difficulty of the exploit? How do you measure that?

                                                                                                                                                              I would argue that if supporting a device causes other devices to be less protected against attacks, then that initial device is not “perfectly good”, but is in-fact broken and needs to be fixed or replaced.

While some attacks on TLS 1.1 are still very theoretical, we know that nation states probably have the means to break it at this point, and if you keep supporting TLS < 1.2, then by the nature of downgrade attacks you’re putting every device at risk, even ones that support later TLS versions.
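To make the downgrade point concrete: a client can simply refuse to offer anything below TLS 1.2, so there is no older version for an attacker to negotiate down to. A hedged PHP sketch (the helper name is invented; the crypto_method stream context option and the TLS 1.2 constant have existed since PHP 5.6):

```php
<?php
// Sketch: build a stream context that only offers TLS 1.2, so a
// man-in-the-middle cannot downgrade the handshake to TLS 1.0/1.1.
function tls12OnlyContext()
{
    return stream_context_create([
        'ssl' => [
            'crypto_method' => STREAM_CRYPTO_METHOD_TLSv1_2_CLIENT,
        ],
    ]);
}

// Usage would be (performs a network request, shown commented out):
// $body = file_get_contents('https://example.com/', false, tls12OnlyContext());
```

The server-side equivalent is the same idea: stop advertising the old protocol versions entirely rather than trusting version negotiation.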

                                                                                                                                                              1. 23

                                                                                                                                                                at what point is it the ethical thing to do to force “perfectly good piece”s of hardware into the trash if not doing so compromises the security of how many “even better” devices. Is there a threshold?

The tablet in question is capable of running a present-day TLS stack. It doesn’t even really need to be ported; we’re talking about ARMv6 at worst, and Raspbian already has a TLS stack working on that architecture. It just needs a release to be zipped up, signed, and distributed.

This really has little to do with TLS, and nothing to do with the website operators. They’re following best practice to ensure that rogue ISPs don’t modify their pages in transit. I blame the OEM, and partly blame Google for tying TLS stack updates to the OEM.

                                                                                                                                                                1. 1

                                                                                                                                                                  And you blame the customer, right? I assume the customer bought a tablet without even trying to choose the vendor with the longest support lifecycle. The blog post doesn’t say “I picked the vendor with the longest support lifecycle.”

                                                                                                                                                                  EDIT: If that sounds unfriendly, that’s because my phone is from the vendor with the longest support lifecycle, and it’s not long enough for my taste, and I think that if more customers would care about the support lifecycle at the time of purchase then more vendors would promise two years and mine would promise four. Complaining years later is futile, just empty words. The time of purchase is when you can vote with your wallet.

                                                                                                                                                                  1. 3

                                                                                                                                                                    my phone is from the vendor with the longest support lifecycle, and it’s not long enough for my taste

                                                                                                                                                                    Jolla still provides SailfishOS updates for its original Jolla phone which was released about 6 years and a half ago, so you probably didn’t choose the vendor with the longest support lifecycle :).

                                                                                                                                                                    1. 1

                                                                                                                                                                      I don’t recall seeing any Jolla phones at the time (around September 2018).

                                                                                                                                                                    2. 2

                                                                                                                                                                      The post addresses this and points out that even if they wanted, they couldn’t modernise/service the device themselves. That is independent of the support lifecycle.

                                                                                                                                                                      1. 2

                                                                                                                                                                        In that case the question to ask at purchase time is whether they could modernise or service the device themselves, and the posting doesn’t say “I was careful to buy a tablet with an unlocked bootloader and no MediaTek SoC” either.

                                                                                                                                                                        1. 2

As suppliers rarely document their intended service period, this decision is hard to make. And this is essentially blaming the problem on the customer.

                                                                                                                                                                          1. 1

                                                                                                                                                                            I didn’t have much problem finding a phone vendor with a documented period, and reports from third parties that “my” vendor’s period was the longest I could expect to find.

                                                                                                                                                                            I suspect OP didn’t even bother to try, then complained about a lack of success.

                                                                                                                                                                  2. 3

That’s a bad argument. Maybe, some years ago, it could have been decided that, looking forward, TLS stacks should be easily upgradable. Maybe our whole OS design is flawed if we have to throw away heaps of just 2-3 year old devices because they can’t be reused. Oh wait, does anyone throw away a PC because it won’t get any more OS updates? No, no one does.

                                                                                                                                                                    Maybe Android was a bad idea.

                                                                                                                                                                1. 9

Securing MTAs must be a cursed job.

Back in the old days we had near-weekly RCEs in sendmail and exim, and these days it’s OpenSMTPD, with strong ties to the f’ing OpenBSD project. That’s the one project I expect an RCE from the least; much less two in as many months.

                                                                                                                                                                  Email is hard.

                                                                                                                                                                  1. 5

It’s actually 3 — this one has two separate CVEs in a single release, including a full local escalation to root on Fedora, where Fedora-specific bugs add an extra twist (CVE-2020-8793).

                                                                                                                                                                    The other bug here (CVE-2020-8794) is a remote one in the default install; although the local user still has to initiate an action to trigger an outgoing connection to an external mail server of the attacker, so, I guess OpenBSD might not count it towards the remote-default count of just two bugs since years ago.

                                                                                                                                                                    1. 2

                                                                                                                                                                      I guess OpenBSD might not count it towards the remote-default count of just two bugs since years ago.

I feel like that would be disingenuous. I realize it’s not enabled by default in a way that’s exploitable, but in the default install there’s literally nothing running that’s even listening (you can enable OpenSSH in a default install, I suppose); this is of course the correct way to configure things by default. However, the statement then degenerates to “no remotely exploitable bugs in our TCP/IP stack and OpenSSH”… which is awesome, but…

                                                                                                                                                                      (Also, it’s easy to criticize: I’ve never written enterprise grade software used by millions.)

                                                                                                                                                                      1. 1

                                                                                                                                                                        Can you explain more about why you think that’s disingenuous? OpenBSD making this claim doesn’t seem different to me than folks saying that this new bug is remotely exploitable. It’s very specific and if something doesn’t meet the specific criteria then it doesn’t apply. Does that make sense?

                                                                                                                                                                        It is my opinion that the statement should be removed – not because it’s not accurate but because I just think it’s tacky.

                                                                                                                                                                        1. 4

                                                                                                                                                                          IMHO it’s disingenuous because it implies that there are only two remote holes in a heck of a long time on a working server. It’s like saying “this car has a 100% safety record in its default state,” that is, turned off.

                                                                                                                                                                          (I’m reminded of Microsoft bragging about Windows NT’s C2 security rating, while neglecting to mention that it got that rating only on a system that didn’t have a network card installed and its floppy drive glued shut.)

I’m not sure if they include OpenSSH in their “default state” (I think it is enabled by default), but other than OpenSSH there’s nothing else running that’s remotely reachable. Most people want to use OpenBSD for things other than just an OpenSSH server (databases, mail servers, web servers, etc.), and they might get an inflated sense of security from statements like that.

                                                                                                                                                                          (Note that OpenBSD is remarkably secure and their httpd and other projects are excellent and more secure than most alternatives, but that’s not quite the point. Again, it’s easy for me to criticize, sitting here having not written software that has been used by millions.)

                                                                                                                                                                          1. 2

                                                                                                                                                                            I appreciate you taking the time to elaborate. I think the claim is tacky as it seems to be more provocative than anything else – whether true or not. I don’t think it’s needed because I think what OpenBSD stands for speaks for itself. I think I understand why the claim was used in the past but this conversation about it comes up every time there’s a bug – whether remote or not. The whole thing is played out.

                                                                                                                                                                            1. 2

AFAIK OpenSMTPD is enabled by default, but does local mail delivery only with the default config. This makes the claim about “only 2 remote holes” still stand, though I agree with your analysis of the bullshit-o-meter of this slogan. But hey, company slogans are usually even more bullshit-ridden, so I don’t care.

                                                                                                                                                                        2. 1

                                                                                                                                                                          You’re saying a local user has to do something to make it remote? Can you explain how that makes it remote?

                                                                                                                                                                          1. 2

One of the exploitation paths is parsing responses from remote SMTP servers, so you need to get OpenSMTPD to connect out to an attacker-controlled server (e.g. by sending email to it).

                                                                                                                                                                            It looks like on some older versions there’s a remote root without local user action needed…

                                                                                                                                                                            1. 1

I reckon I’ll go back and read the details again. However, if something requires that a local user do a very specific thing under very specific circumstances (attacker-controlled server, etc.) in order to exploit it – that does not jibe with my definition of remote.

                                                                                                                                                                              1. 3

                                                                                                                                                                                Apparently you can remotely exploit the server by triggering a bounce message.

                                                                                                                                                                        3. 2

                                                                                                                                                                          Step zero is don’t run as root and don’t have world writable directories.

                                                                                                                                                                          .

                                                                                                                                                                          .

                                                                                                                                                                          .

                                                                                                                                                                          Sorry, was I yelling?

                                                                                                                                                                          1. 4

Mail is hard that way: the daemon needs to listen on privileged ports, and the delivery agent needs to write into directories only readable and writable by a specific user.

                                                                                                                                                                            Both of these parts require root rights.

                                                                                                                                                                            So your step zero is impossible to accomplish for an MTA. You can use multiple different processes and only run some privileged, but you cannot get away with running none of them as root if you want to work within the framework of traditional Unix mail.

Using port redirection and virtual users, exposing just IMAP, you can work around those issues, but then you’re leaving the traditional Unix setup and adding more moving parts to the mix (like a separate IMAP daemon), which may or may not bring additional security concerns.

                                                                                                                                                                            1. 2

At least on Linux there’s a capability (CAP_NET_BIND_SERVICE) for binding privileged ports that is not equivalent to full root.

                                                                                                                                                                              1. 3

                                                                                                                                                                                yes. or you redirect the port. but that still leaves mail delivery.

                                                                                                                                                                                As I said in my original comment: email is hard and that’s ok. I take issue with people reducing these vulnerabilities (or any issue they don’t fully understand) to “just do X - it’s so easy” (which is a strong pointer they don’t understand the issue)

Which is why I sit on my rant about still using C for (relatively) new projects when safer languages exist. Oh boy, is it tempting to drop a quick “buffer overflows are entirely preventable in as-performant but more modern languages like rust; why did you have to write OpenSMTPD in C”, but I’m sure there were good reasons - especially for people as experienced and security-focused as the OpenBSD folks.

                                                                                                                                                                                1. 3

It’s hard if you impose the constraint that you need to support the classical UNIX model of email that was prevalent from the late 70s to the mid 90s. I was once very attached to this model, but it’s based on UNIX file-system permissions that are hard to reason about and implement safely and successfully. The OpenSMTPD developers didn’t make these mistakes because they’re stupid; it’s really, really hard. But it’s an unfortunate choice for a security-focused system to implement a hard model for email rather than making POP/IMAP work well, or some other approach to getting email under the control of the recipient without requiring privileges.

                                                                                                                                                                              2. 1

Not sure any of these are true; they seem more like self-imposed traditional limitations.

Lower ports being bindable by root only could easily be removed (on recent Linux kernels the threshold is even tunable via the net.ipv4.ip_unprivileged_port_start sysctl); given that linux has better security mechanisms to restrict lower-port binding, like selinux, I’m not even sure why the kernel still imposes this moronic concept on people by default. Mail delivery (maildir, mbox, whatever zany construct) can also be done by giving limited rw access to the specific user and the MDA. Hell, MAIL on my system just points to /var/spool/mail, which is owned by root anyhow.

                                                                                                                                                                                1. 1

                                                                                                                                                                                  selinux isn’t everywhere.

                                                                                                                                                                          1. 3

I see 8 rules I need to keep in mind when using stringify() to map objects like this. Also, if multiple if/else statements are “not scalable at all”, how are the mapObj and the duplicated key strings in the regex any better?

                                                                                                                                                                            If I needed to do this I think I would reach for Map() instead of JSON.stringify(), but perhaps I’m missing something?
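For what it’s worth, a plain key-mapping sketch avoids both the regex and the stringify round-trip, since only keys are ever touched (the names keyMap and renameKeys are mine, not the article’s):

```javascript
// Explicit key map applied per entry: values can never be corrupted.
const keyMap = { _id: 'id', created_at: 'createdAt', updated_at: 'updatedAt' };

const renameKeys = (obj) =>
  Object.fromEntries(
    Object.entries(obj).map(([key, value]) => [keyMap[key] ?? key, value])
  );

const renamed = renameKeys({ _id: 7, created_at: 'today', body: 'text' });
console.log(renamed); // → { id: 7, createdAt: 'today', body: 'text' }
```

Adding a new mapping is one line in keyMap, which seems as “scalable” as the regex approach without its hazards.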

                                                                                                                                                                            1. 18

                                                                                                                                                                              Also, the regex replacement is dangerous as the keys might appear in the values of a different object with the same shape. This really isn’t a good solution.
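A quick sketch of the failure mode (the object and names here are mine, not the article’s): any value that happens to contain a key’s text gets rewritten along with the key.

```javascript
// Naive stringify + regex key renaming, as in the article's approach:
const obj = { _id: 1, note: 'copy the _id value' };

const out = JSON.parse(JSON.stringify(obj).replace(/_id/g, 'id'));

console.log(out);
// → { id: 1, note: 'copy the id value' }
// The key was renamed as intended, but the value was silently
// corrupted too: 'copy the _id value' became 'copy the id value'.
```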

                                                                                                                                                                              1. 2

                                                                                                                                                                                How does Map help?

                                                                                                                                                                                1. 1

                                                                                                                                                                                  It doesn’t really; my kneejerk way of doing this would look like:

const output = new Map();
for (let [key, value] of new Map(Object.entries(todayILearn))) {
    if (key === '_id') output.set('id', value);
    else if (key === 'created_at') output.set('createdAt', value);
    else if (key === 'updated_at') output.set('updatedAt', value);
    else output.set(key, value);
}

                                                                                                                                                                                  Now I’ve written it out it’s no different from the author’s first “inelegant” solution :-)

                                                                                                                                                                                  1. 1

It may seem a bit “ineleganter” because of these ifs and all, but it looks significantly safer and significantly more readable to me than the stringify version there. I haven’t tested it, but I assume it’s also much more performant.

I think there is value in the article in that it shows some “rules” for how things get stringified (I just wish the naming was better), but the example is a totally wrong one in my opinion - both in proposing the dangerous regex and in the use case not being a good one for stringify in the first place.

                                                                                                                                                                                    Maybe with some work the article can get improved?

                                                                                                                                                                              1. 22

For me, Catalina has worked fine since I installed Beta 2 last summer.

                                                                                                                                                                                Even the usual suspects, VMWare, Vagrant/VirtualBox and Homebrew survived the update just fine.

                                                                                                                                                                                I see no crashes (at least not more than the usual once-every-two-months need to reboot), nor other weirdness. Compared to Mojave, even the random Bluetooth disconnects I had with my Magic Trackpad stopped happening.

Of course this is total non-news, and I’m not going to publish a blog post saying that Catalina is fine for me; nor would such a post reach, much less survive on, the front page of any news aggregator if I did write it.

                                                                                                                                                                                Unfortunately, we only read about people having issues and we conclude that everybody must have problems.

                                                                                                                                                                                1. 6

                                                                                                                                                                                  Hear hear! I think we should share our positive experiences more often, since it’s in our nature to latch on and to spread the negative ones.

                                                                                                                                                                                  1. 3

                                                                                                                                                                                    I think we should share our positive experiences more often, since it’s in our nature to latch on and to spread the negative ones.

                                                                                                                                                                                    From the article:

                                                                                                                                                                                    It’s interesting to me how — apart from the usual fanboys — I still haven’t seen any unequivocally positive feedback about Mac OS Catalina.

                                                                                                                                                                                    Is your experience positive (it got better) or is your experience neutral (it didn’t get worse)? What is the best thing about Catalina, and what would be the elevator pitch for why somebody should install it?

                                                                                                                                                                                    1. 3

                                                                                                                                                                                      apart from the usual fanboys

                                                                                                                                                                                      I certainly wouldn’t consider myself an Apple fanboy, but when I look at the current state of all the other platforms, nothing comes quite close enough for what I need and like.

                                                                                                                                                                                      Is your experience positive (it got better) or is your experience neutral (it didn’t get worse)?

                                                                                                                                                                                      I take issue with the implication that things chugging along just fine is somehow not a positive thing. I was doing well yesterday, and I am doing about the same today. Not better and not worse. Am I somehow worse off today because of that? If I am not worse off, then is my experience not positive?

                                                                                                                                                                                      What is the best thing about Catalina, and what would be the elevator pitch for why somebody should install it?

                                                                                                                                                                                      Two things sprung to mind immediately:

                                                                                                                                                                                      • I was pleasantly surprised both this week and the week before that the OS notified me of my daily average use of the computer over the prior days. The numbers were, unsurprisingly to me, extremely high. I knew I was spending too much time on the computer recently, but the computer itself giving me hard numbers is what finally convinced me to take steps toward spending less time on it.

                                                                                                                                                                                      • I really appreciate the increased security that notarization brings. Hell, I’ve even written a native app and gotten it sandboxed, notarized and deployed on the App Store so I’ve experienced more “pain” than the average Catalina user in this regard, yet I still think it’s a fantastic improvement.

                                                                                                                                                                                      1. 1

                                                                                                                                                                                        That looks super interesting.

I really like separating any UI application into a client-server layer, to avoid accidentally letting business logic, heavy computation, and blocking IO creep into and block the UI threads (something I’ve seen happen with many Qt-based GUI applications). The risk of this is heavily reduced by making the barrier more concrete through separate processes with RPC, while also adding opportunity for more resilience, since the client and server processes can be restarted separately if an issue occurs.

It would be interesting to see some more details of the GUI/Swift part of the application for someone not familiar with that toolchain. Are you planning to write more articles in this series?

                                                                                                                                                                                        1. 1

                                                                                                                                                                                          Thanks! Yes, I do plan to write more about the technical bits of the app in the future. I think I’ll do another post after I’ve got the Windows port ready. I’m thinking it would be fun to compare Windows Forms and SwiftUI in a post. It might be a while before I get to it, though, because I’m juggling a ton of stuff at once at the moment.

In the meantime, the application is source-available, so you can take a look at the GUI code here.

                                                                                                                                                                                          1. 1

Interesting, looking forward to it.

Are you planning to showcase how you work with Xcode? It would be nice to get an overview of how the workflow looks for these kinds of apps.

                                                                                                                                                                                  2. 3

Same experience here. I have encountered some crashes on my work laptop, but since I’ve had zero issues on my home laptop, I’d much sooner attribute that to my employer’s custom management software than to the OS itself.