1. 4

    I drive a 2006 WRX. It does have a fancy Pioneer nav unit that can connect to my phone for music/podcasts and (on very rare occasions) Android Auto. Everything else on the car is manual toggle switches and physical buttons. Nothing is network connected. If I need details on something in the car, I need to connect a scanner to the OBD2 port. It uses petrol and has a manual transmission.

    I cannot imagine ever wanting a Wi-Fi enabled car that connects to a network or whose primary controls are a touch screen. I’ve driven a friend’s Cherokee where even the A/C controls were on the touch screen and I hated it.

    If I or someone ever totals it, I will be very very sad. For as long as it runs or I can fix it, you’ll have to pry it from my cold dead hands.

    Self driving cars are a ridiculous pipe dream. Can we please stop adding Internet to cars, washers and dryers, toasters and musical instruments?

    1. 2

      But we have IPv6 now, so we should be able to add connectivity to just about everything! /s

      My ride is a 2019, but the base model, so not only was it relatively cheap, it’s also mostly buttons, knobs, or switches, except for the radio system (which has modern features, although I don’t recall it having WiFi or SiriusXM). I am afraid this might be the last new car I ever buy that is this simple.

    1. 14

      Would have been nice to mention the type builtin, at least for bash, that helps newcomers distinguish between different kinds of commands:

      $ type cat
      cat is /usr/bin/cat
      $ type cd
      cd is a shell builtin
      $ type ls
      ls is aliased to `ls -Fh'
      
      1. 5

        Wow, I’ve been using Unix for most of my computing life (30 years?) and I didn’t know about type.

        1. 1

          It is great for finding duplicates in your PATH: type -a <name> shows you all the places where it exists.
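
          A quick sketch of that in bash (exact paths vary per system, so none are shown here):

```shell
# Show every definition of a name: aliases, functions, builtins,
# and each matching executable on PATH (bash).
type -a cat
# Adding -P restricts the listing to PATH hits, one per line; more
# than one line of output means the name is duplicated on your PATH.
type -aP cat
```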

        2. 2

          I use which as opposed to type and it seems to do the exact same thing.

          1. 9

            You should use type instead. More than you ever wanted to know on why:

            https://unix.stackexchange.com/questions/85249/why-not-use-which-what-to-use-then

            1. 1

              Interesting. As a long time DOS user, I expected type to behave like cat. I typically use which as if it is just returning the first result from whereis, e.g. xxd $(which foo) | vim -R -. I didn’t know about the csh aliases, because the last time I used csh was in the nineties when I thought that since I use C, surely csh is a better fit for me than something whose name starts with a B, which clearly must be related to BCPL.

              1. 1

                I did not know about type and after knowing about it for 15 seconds now I almost completely agree with you. The only reason you could want to use which is to avoid complicating the readlink $(which <someprogram>) invocation on guix or nixos systems. That is, which is still useful in scripts that intend to use the path; type has output of the form <someprogram> is <path to someprogram>.

                Edit: OK I followed a link from the article to some stackoverflow that goes through the whole bonanza of these scripts and I think whereis <someprogram> is probably better than readlink $(which <someprogram>).

                1. 3

                  @ilmu type -p will return just the path.
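
                  For illustration, in bash (output paths differ per system):

```shell
# Prose form vs. bare path (bash):
type grep     # something like: grep is /usr/bin/grep
type -p grep  # just the path, convenient inside $(...)
# e.g. a drop-in for `xxd $(which grep)`-style invocations:
file "$(type -p grep)"
```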

                  1. 2

                    Two problems with whereis: 1) it’s not available everywhere, and 2) it can return more than one result, so you have to parse its output. So for that use case I’ll probably stick with which until someone points me at a simple program that does the same thing without the csh aliases.

              2. 1

                Interesting. In fish shell, type gives you the full definition of the function for built-ins that are written in fish, and builtin -n lists all the builtins. There’s a surprising amount of fish code around the cd builtin.

              1. 4

                it may replace GNU Coreutils

                Is this going to result in a 4MB ls command? I still don’t totally understand how Rust publishes shared libraries and integrates with shared libraries without shims via crates. It still seems like Go: packaging every dependency together like a system-tool version of Java.

                The other major point of concern should be licensing. clang + llvm + rust base tools means getting away from the GPL. Will we see commercial Linux distributions in the future with no real free equivalents; where only the kernel is released and none of the underlying tooling? The Darwin/BSD of Linux distros?

                1. 3

                  Don’t we see the latter already? Oracle Linux et al? And isn’t Darwin an example of why Linux itself would remain free? Also I don’t think using clang, llvm, etc. will change much about a project’s license. The reverse, where GCC is used, also didn’t seem to have a huge effect on BSD-licensed code.

                  Also I don’t think the Go / Java comparison is fair, because with Java you will still need to install Java itself, which itself pulls in a huge amount of third-party software as dependencies.

                  Also static linking is possible in C as well and dynamic linking I think is possible in Go by now and I think in Rust? If that’s what you meant.

                  I still agree on file sizes though. And then your Docker base images will be gigabytes. ;)

                  1. 3

                    I’m not sure this is much of an issue. Open/FreeBSD are licensed with similar licenses to most of the rust ecosystem, but there aren’t really fully commercial versions of these. Besides, nobody’s stopping you from writing GPL code in Rust, it’s just that MIT is a more common license.

                    1. 1

                      There are certainly commercial systems based off FreeBSD; IIRC Sony have been using it as the base for PlayStations for years. There are also several storage and firewall vendors who’ve built their commercial systems on FreeBSD, and not to forget Darwin/macOS itself using much of the FreeBSD userland.

                    2. 2

                      Will we see commercial Linux distributions in the future with no real free equivalents; where only the kernel is released and none of the underlying tooling? The Darwin/BSD of Linux distros?

                      It’s called Android and it’s the most widely deployed Linux distro. Okay, AOSP exists, and you can build it, but most Android software depends on various Google proprietary services.

                      1. 1

                        By “commercial” I assume you mean non-free or proprietary? There are already many commercial distros and used to be many more :)

                      1. 9

                        The whole blog is dedicated to bashing Proctorio, how weird

                        1. 15

                          Eh. It claims to be about “exam spyware analysis” and bashes the similarly named ProctorTrack too. I don’t find it too weird; if I were forced to use such software, I would probably inspect it and might go so far as to write a blog complaining about it, if reporting the problems I saw in other ways brought no joy.

                          1. 8

                            I am forced to use Moodle when I teach and I have definitely considered launching a novelty Twitter account just to bash the software. Not so much because I think it would matter, but because doing so might be cathartic… :-)

                            1. 4

                              And Moodle, clumsy as it is, is one of the less bad of the bunch in my small experience.

                            2. 1

                              How’s that weird?

                              1. 4

                                He (or she) created a blog just for bashing a single company. Even the domain is “proctor.ninja”. Maybe they are an employee, maybe they think Proctorio is the worst evil, and maybe it is, but they definitely have a grudge.

                                1. 9

                                  Proctorio has a history of suing experts who critique their shady practices. I would probably attempt to remain incognito if I were making these claims as well.

                              2. 1

                                There was an entire blog dedicated to how xkcd sucks. It seems like there are … several now. The original (with the hyphen) was hilarious.

                                There’s also an entire mastodon instance dedicated to SalesForce fandom .. or at least it seems at first. It’s difficult to tell if they’re really fans, making fun of it ironically or a little of both.

                                1. 1

                                  Wait until you find out about https://twitter.com/memenetes. They even sell merch about bashing Kubernetes! XD

                                1. 72

                                  Honestly, for a general-purpose laptop recommendation, it’s hard to recommend anything but the new ARM MacBooks. […] I just hate the cult of personality built around a ThinkPad that only exists as a shadow in a cave.

                                  Do you want to tell him or shall I?

                                  1. 17

                                    Tell me about what?

                                    My recommendations are tempered by things like Mac OS (every OS sucks in its own unique ways), but they’re the fastest laptops you can get, get actual all-day battery life without ceremony, are lightweight, and have good build quality. This is based around actually using one as my everyday laptop - Apple really has made significant improvements. Unless someone has other requirements (e.g. pen, x86 virt, etc.), they’re good all-around.

                                    1. 49

                                      The quote is just kind of funny to read since Apple products have been almost synonymous with fanboyism and cultish followings for decades, while the thinkpad crowd has levied that exact same criticism.

                                      I mean personally I don’t actually disagree with you, I think Apple makes good hardware and “thinkpad people” have gotten just as bad as “apple people” in terms of misguided brand loyalty. It’s just funny because what was quoted feels like very much a role reversal in a very long standing trend.

                                      1. 27

                                        Maybe it’s just my circles but I don’t see Apple fanboyism as much as I see “anti-Apple” fanboyism.

                                        1. 44

                                          That’s because you hang out on sites like Lobsters.

                                          1. 3

                                            Honestly, the “Apple fanboys” are nowadays mostly one of those things that “everybody knows” despite not really being true. Sure, you can find the occasional example, but you’re more likely to find a handful of mildly positive comments about Apple and then a hundred-comment subthread shitting on both Apple and “all these fanboys posting in here”. And basically any thread about laptops will have multiple subthreads of people loudly proclaiming, and getting upvotes and lots of supportive replies for saying, that Apple is evil, Apple’s hardware and software are shit, and everybody should run out and switch to Thinkpads.

                                            Which is just kind of amusing, really.

                                        2. 16

                                          The quote is just kind of funny to read since Apple products have been almost synonymous with fanboyism and cultish followings for decades

                                          Yes, and I think the M1 is a prime example of the hype, further boosted by Apple’s following. The M1 is a very impressive chip. But if you only read the orange site and some threads here, you’d think it is many generations ahead of the competition, while in reality the gap between recent AMD APUs and the M1 is not very large. And a substantial amount of the efficiency and performance gap would be closed if AMD could actually use 5nm production capacity.

                                          From the article:

                                          Honestly, for a general-purpose laptop recommendation, it’s hard to recommend anything but the new ARM MacBooks.

                                          Let’s take a more balanced view. The M1 Macs are great if you want to run macOS. ThinkPads (and some other models) are great if you want to run Windows or Linux.

                                          1. 12

                                            Do the competitors run fanless?

                                            I’m happy with my desktop so I don’t have a stake in this game, but what would appeal to me about the M1 non-Pro Macbook is the fanless heat dissipation with comparable performance.

                                            1. 6

                                              I mean, are there actually laptops that run as long as the M1? Even back with the Air, Macs having reliably long battery life was a huge selling point for me compared to every other laptop (I know they throttle like crazy to do this, but at least the battery works better than on other laptops I have owned). I think Apple deserves loads of praise for shipping laptops that don’t require you to carry your charger around (for decent time frames relative to the competition, until maybe super recently).

                                              Full disclaimer: despite really wanting an M1’s hardware I’m an Ubuntu user so…

                                            2. 5

                                              I don’t have any brand loyalty towards thinkpads per se but rather the active community of modifications and upgrades. There are things like the nitropad (from nitrokey) that is preinstalled with HEADs and has some minor modifications or refurbishing as well as many other companies are selling second hand thinkpads in this way, but I think nothing beats xyte.ch (where I got my most recent laptop).

                                              The guy is an actual expert and will help you choose the modifications you want (for me I wanted to remove bluetooth and the microphone, put in an atheros wifi card so I can use linux-libre, change the CPU to be more powerful, and change the monitor to 4k; there were other options too, like putting an FPGA like fomu in the internal USB of the bluetooth, or choices around the hard drives and ports you want). After choosing my mods and sending him $700, he spent a month doing all my requested changes, flashing libreboot/HEADs, and then fedexed it to me with priority.

                                              This was my best online shopping experience in my life and I think this kind of stuff will never exist for apple laptops.

                                              1. 3

                                                Hmm fanboyism. Must fight… urge to explain… why PCs are better than laptops. :-p

                                                1. 1

                                                  Oh, I know all about the dumb fanboy shit. I’ve at least outlined my reasoning as pragmatic instead of dogmatic, I hope.

                                              2. 12

                                                I just really like running Linux. Natively, not in a VM. I have a recent P14s running Void Linux with sway/wayland and all the hardware works. I know there’s been some effort to get Linux working on the new M1 chips/hardware, but I know it’s going to be mostly out-of-the-box for modern Dell/Thinkpad/HP laptops.

                                                With Microsoft likely jumping ship over to ARM, I’m really hoping Linux doesn’t get completely left behind as a desktop (laptop) OS.

                                                1. 7

                                                  It seems like some people mistake the appreciation of quality Apple hardware for a cult.

                                                  1. 18

                                                    It may seem like that, but isn’t. Of the two Macs I currently own, one is in for repair (T2 crashed, now won’t boot at all) and one has been (CPU would detect under voltage and switch off with only one USB device plugged in). Of the ~80 Macs I’ve ever deployed (all since 2004), five have failed within two years and a further three have had user-replaceable parts DOA. This doesn’t seem like a great strike rate.

                                                    BTW I’ve been lucky and never had any of the recall-issue problems nor a butterfly keyboard failure.

                                                    1. 2

                                                      While I strongly prefer my Dell Precision (5520), I haven’t really had the same experiences as you.

                                                      I have a work laptop which is a MacBook and gets a bit toasty - but I use it every day and have not had any issues so far.

                                                      My own laptop was a 2011 MacBook Pro and it took spilling a glass of wine on it to kill it; prior to that there were no problems. Once I did break the keyboard by trying to clean it and had to get it repaired. Maybe it was getting slow and there was some pitting on the aluminium where my hands lay (since I used it every day for 6 years). It died in 2017.

                                                      Those are the two MacBooks I owned

                                                    2. 8

                                                      There might be some selection bias at work, but I have been following Louis Rossmann’s youtube channel and I absolutely do not associate Apple with good quality.

                                                      1. 5

                                                        Louis Rossman has a vested interest in repairable laptops as he runs a repair shop and Apple is actively hostile to third-party repairs.

                                                        Not saying what Apple does is good for the consumer (though it’s often why the resale value of their laptops is high) - but I would assume that Louis is the epitome of a biased source.

                                                      2. 5

                                                        I have used MacBooks from 2007-2020. I had two MacBooks with failing memory, one immediately after purchase, one 1-2 months after purchase. I also had a MacBook Air (pre-butterfly) with a failing key. I had a butterfly MacBook Pro with keys that would often get stuck.

                                                        The quality is very average. I think the number of problems I had with MacBooks is about average for laptops. However, Apple offers really great service (at least here in Western European countries), which made these hardware issues relatively painless to deal with.

                                                        1. 5

                                                          Apple doesn’t merely make good hardware, it makes Apple hardware, in that its hardware is often different from the mainstream. Butterfly keyboards, for example, or some of their odder mouse designs. It’s possible to appreciate good hardware without thinking Apple’s specific choices are worth buying, even if you concede they’re good implementations of choices you dislike.

                                                      1. 2

                                                        I always wondered why a flag emoji sometimes just shows up as two letters on some systems. Now I know.

                                                        Also I think it’s amazing that emoji is essentially turning into its own hieroglyphic language represented entirely as unicode characters. We are literally creating a new picture language.
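
                                                        For the curious, the two-letter fallback follows directly from the encoding: a flag is just a pair of Regional Indicator characters, so a renderer without the combined glyph shows the two letters. A small sketch:

```shell
# A flag emoji is a pair of Regional Indicator code points; for the US
# flag that's U+1F1FA U+1F1F8 ("U" + "S"). Fonts without a combined flag
# glyph fall back to rendering the two indicator letters separately.
# Printed here as raw UTF-8 bytes so any POSIX printf can emit them:
printf '\360\237\207\272\360\237\207\270\n'   # U+1F1FA U+1F1F8
```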

                                                        1. 8

                                                          I have mixed feelings about this one. It’s a good trick if you don’t have good enough skills to be effective at recovery without it. But once you go past one or two servers that you’re running in production long term I think you should be able to solve this in a better way. Or at least be really aware you’re using weird tricks rather than proper solutions.

                                                            “A better way” being: separate users for separate purposes, separate partitions for system / runtime data / logs, block reservations for root where needed, log rotation, monitoring / alerting, and knowing how to quickly find the real cause using du and deal with it.

                                                          1. 8

                                                            “A better way” should be alerting. You have something that sends you a text or e-mail or something you check daily as you approach 90% or whatever threshold you want to reach.
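
                                                              As a minimal sketch of that kind of check (the 90% threshold is just the number from this comment; the mail command is a hypothetical placeholder, left commented out):

```shell
#!/bin/sh
# Print a warning for every filesystem at or above THRESHOLD percent full.
THRESHOLD=90
df -P | awk 'NR > 1 { sub(/%/, "", $5); print $5, $6 }' |
while read -r pct mount; do
    # skip pseudo-filesystems that report no usage figure
    case $pct in *[!0-9]*) continue ;; esac
    if [ "$pct" -ge "$THRESHOLD" ]; then
        echo "WARNING: $mount is at ${pct}% capacity"
        # mail -s "disk alert: $mount" ops@example.com  # hypothetical notifier
    fi
done
```

Run from cron (or a systemd timer) and route the output to whatever channel you actually read.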

                                                            1. 5

                                                              That’s only one piece of the puzzle though. Alerting is great, but if something is spamming content, by the time you log in, you may be already 100% full and failing. Containing that problem to one service and keeping the system running needs to happen as well.

                                                              1. 1

                                                                I thought it was a solved problem – make different users for services and put disk quotas on their directories
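
                                                                  On Linux that might look something like this (a sketch only: the user name, mount point, and limits are made up, and the filesystem must be mounted with usrquota for any of it to work):

```shell
# One system user per service, confined to its own tree:
useradd --system --home-dir /srv/app1 app1
# Cap app1 at 5 GiB soft / 6 GiB hard (block limits are in 1 KiB units;
# the trailing zeros leave the inode limits unset):
setquota -u app1 5242880 6291456 0 0 /srv
# Inspect the result:
repquota /srv
```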

                                                          1. 2

                                                            Any examples of finished art?

                                                            I think the interesting thing about emoji pixel art is that you could represent the art simply as unicode strings, monospaced and preformatted. It could potentially take a lot less space than a traditional pixel art PNG, but then the art itself would be highly dependent on the rendering font. If you didn’t attach the text to a specific font, it would look totally different on varying platforms.

                                                            1. 3

                                                              Example art: https://twitter.com/s_han_non_lin/status/1372937642946535428

                                                              you could represent the art simply as unicode strings

                                                              The “copy” link in the tool copies the unicode string to your clipboard with new lines. It only works well if you’ve used emoji in every square, as that preserves the spacing.

                                                            1. 2

                                                               I’m currently in the process of using CSS Grid to replace an old layout and it’s really damn simple once you wrap your head around it. I’ve got an older site where I use Foundation 5. I barely have to touch it and it works well enough, but I wouldn’t recommend using something like Foundation today.

                                                              It all really comes down to tradeoffs. The opening page of Tailwind looks neat, but I feel like a lot of stuff could be accomplished without a framework as well and you’ll be dealing with the same level of headache.

                                                              1. 4

                                                                I wonder what happened to their fire suppression system. My old university used halon gas suppressors to prevent water/sprinkler system damage in the event of a fire. I wonder if they had something in place and it didn’t go off, or if it wasn’t enough to control the fire.

                                                                1. 7

                                                                  From this forum post[1] about OVH data centers and extensive use of Google Translate (five years of French in school? can barely remember a lick :))

                                                                  • Their fire suppression system was a traditional sprinkler system, not a water mist system suitable for electronics rooms or an inert gas system
                                                                    • Their sprinklers had to be manually activated by datacenter operators, rather than responding automatically to a fire
                                                                  • There was a significant amount of wood used in the construction, and it didn’t look like much thought was given to firewalls (insert data center joke here)

                                                                  If all those are true, I’d suspect that the fire was either already out of control when the sprinkler system was activated, or that the data center operators didn’t activate it in time out of fear of collateral damage.

                                                                  1. 1

                                                                      Dans le cas de Roubaix 4, le Datacenter est fait avec beaucoup de bois (“In the case of Roubaix 4, the datacenter is built with a lot of wood”)

                                                                      That quote is about RBX4 (another DC, over 250km away in Roubaix); the fire was in SBG2 (in Strasbourg).

                                                                    1. 3

                                                                      Correct. However, Le Monde did say that the datacenter had wooden floor:

                                                                      « Le feu s’est rapidement propagé dans le bâtiment. On a mis en place un important dispositif hydraulique, à l’aide d’un bateau-pompe de grande puissance [qui a prélevé l’eau du Rhin], pour éviter la propagation aux bâtiments attenants », a déclaré à l’Agence France-Presse Damien Harroué, commandant des opérations de secours. « Les planchers sont en bois, et le matériel informatique, bien chauffé ; ça va brûler. Ce sont des matières plastiques, ça génère des fumées importantes et des flammes », a-t-il ajouté, pour expliquer l’important dégagement de fumée et la rapidité de propagation de l’incendie.

                                                                      My loose translation as a native french speaker:

                                                                        “The fire spread rapidly through the building. We set up a major water-pumping operation, using a high-powered fireboat [which drew water from the Rhine], to prevent it from spreading to the adjoining buildings,” Damien Harroué, commander of the rescue operations, told Agence France-Presse. “The floors are made of wood, and the computing equipment well heated; that is going to burn. These are plastic materials, which generate significant smoke and flames,” he added, to explain the heavy smoke emission and the speed at which the fire spread.

                                                                1. 8

                                                                  Flash effectively became the new Java applet, and now there’s no more Flash. No reason for the Java applet to come back.

                                                                  1. 4

                                                                     Flash was very lightweight and amazingly powerful, before Adobe bought Macromedia and slowly rotted it to garbage. Apple insisting it was a no-go on their first venture into the smartphone market also dealt it a serious blow.

                                                                    Java applets were always problematic when it came to performance, which is what allowed Flash to thrive. It is the end of an era for sure.

                                                                    1. 7

                                                                      One might say that Flash died once it started to want to look like Java (AS3 and later, AIR). To this day, we still don’t have a visual interactive editing environment like Flash. It was the VisualBasic for creative coders.

                                                                      1. 2

                                                                        Flash, too, was a document system (well, animations) that got pushed to be an application layer.

                                                                  1. 6

                                                                    Am I the only person who’s had no problems with PulseAudio? The only thing I had to work around was a dumb hardware bug.

                                                                    1. 7

                                                                      Today? No. Back when it became the default on Fedora, Ubuntu & friends, yeah, you’d have been the only one. It needed years before it was reasonably reliable, not the least because the way upstream treated bug reports – and the wider open source community in general – was abysmal.

                                                                      1. 3

                                                                        Same. Bluetooth devices were flaky, Steam games were occasionally filled with static. It was so hit and miss.

                                                                        Today it works really well for me. I don’t have a need for super low latency, so I haven’t tried Jack, but for everything else it works pretty well. Even bluetooth devices.

                                                                         I was afraid it was going to turn into another systemd, but I think it’s fared better. It’s far better than esound and arts for sure (for those old enough to remember the really old sound server concepts).

                                                                        1. 4

                                                                          PulseAudio improved enormously after project leadership changed. A responsible approach to development with no abrasiveness towards users regardless of tech-literacy runs circles around any ego-driven process, no matter how much technical expertise is there.

                                                                           As far as PulseAudio is concerned, it certainly didn’t help that for all the Linux-related technical expertise there was in the beginning, knowledge of practical multimedia applications was very obviously not there, so many dubious choices, at every level, were framed as innovations or useful design trade-offs simply because nobody understood the real-life implications. That is, nobody who really cared; trying to report a bug or submit a patch was generally enough to drive people with good intentions away.

                                                                      2. 1

                                                                        Until I uninstalled it, it would regularly make my sound device disappear randomly. Other times it would mute one ear but not the other upon insertion of headphones. Getting rid of it was one of the best moves I’ve ever made when it comes to software on my laptop.

                                                                        I believe you that you haven’t run into issues, but I chalk it down to just a lot of luck.

                                                                        1. 1

                                                                          I was thinking the same thing. I’ve been on Linux 100% of the time for the last 5 years (off and on before that). No problems with PulseAudio.

                                                                        1. 19

                                                                          This is very upsetting. I went in expecting argumentative clickbait and found a solid, nuanced discussion!

                                                                          I think SemVer is one of the better standards we have, overall. But as with any tool, there’s a time and a place for it and I thought this article did a good job of acknowledging that while also covering some common issues with how it is used. Thanks, OP.

                                                                          1. 3

                                                                            Yep, and the very first thing the author mentions is exactly what I was thinking: test coverage.

                                                                            If you write a bunch of tests, you can bump your dependencies and make sure they pass. Sure, you might bump dependencies, have the tests pass, and then something breaks in production. Well, hopefully you can then write a test for that thing you missed.

                                                                            Having good tests is so essential to help prevent dependency rot.

                                                                          1. 8

                                                                            I am one of several k8s admins at work and I really hate k8s. In the past I’ve been at another shop as a developer where we used DC/OS (marathon/mesos) which I found a lot easier from a developer perspective, but my own experiments with it made me want to stab that terrible Java scheduler that ate resources for no damn reason. (K8S is written in Go and is considerably leaner as far as resources, but a much bigger beast when it comes to config/deployment).

                                                                            I’ve dabbled with Nomad before and I do know some advertising startups that actively use it for all their apps/jobs. If I was getting into the startup space again, I’d probably look at using it.

                                                                            K8S is a hot mess of insane garbage. When it’s configured and running smoothly, a good scheduler helps a lot when doing deployments and rolling/zero-downtime updates. But these clusters tend to consume a lot of nodes, and it’s very difficult to go from 1 to 100 (having your simple proof of concept running on just one system and then scaling up to n, adding redundancy and masters). Some people talk about minikube or k3s, but they’re not true 0-to-scale systems.

                                                                            I did a whole post on what I think about docker and scheduling systems a few years back:

                                                                            https://battlepenguin.com/tech/my-love-hate-relationship-with-docker-and-container-orchestration-systems/

                                                                            1. 4

                                                                              You should look at juju. It uses LXC/LXD clustering to avoid a lot of the shortcomings of k8s (which are many and varied). Maybe Nomad is better, but it’s all expressed in a language named after the founding company. This in and of itself is enough reason to squint really hard and ask “why?”.

                                                                              Also: https://github.com/rollcat/judo It’s like ansible, but written in Go and only for the most basic of all basic kinds of provisioning.

                                                                              1. 3

                                                                                re: HCL

                                                                                I look at it this way. HCL is, from the README, “a toolkit for building config languages… inspired by libucl, nginx configuration, and others.” YAML is a pain to hand-edit when files get large (e.g. k8s). JSON is a pain too (no comments, for example; as an aside, why are we (still) using serialization formats for config files!?). TOML is… okay… but a bit strange to get the structure right. HCL brings consistency (mostly) between Hashicorp’s own products, and being open source means others can adopt it as well.

                                                                                1. 3

                                                                                  My understanding is you can use JSON anywhere HCL is accepted by the tools, so if you’re generating config out of some other system you can emit JSON and not have to emit HCL.

                                                                                  I much prefer writing HCL2 for configuring things; it’s a little clearer than YAML (certainly fewer footguns, no?) and supports comments, unlike JSON.

                                                                                  1. 2

                                                                                    It’s not the language itself that bothers me (it’s a little weird as I would rather use a more-universally-accepted solution, but that’s my personal preference and I do not impose that on anyone else); it’s that it is owned by a company that is known for taking products and making them closed and expensive. This is precisely what companies do, though, and it’s not too surprising. You can get an “enterprise” version of any product Hashicorp builds. The question remains: will HCL ever be forced into an “enterprise” category? Will it ever force users to accept a license they do not agree with, or to pay to use it? YAML/JSON have the advantage of being community-built, so I doubt that will ever happen to them.

                                                                                    I realize now that I’m grandstanding here and proclaiming the requirement of using FOSS – but I don’t wholeheartedly agree to that. I have no problem using proprietary software (I use several every day, in fact). I’m just remaining a little squinty-eyed at HCL specifically. I don’t know that I could bring myself to choose HCL for tasks at my day job for things that do not inherently require it.

                                                                                    That brings me full circle back to my point: be careful, HCL is born from a commercial entity that may not always play nice. Hashicorp generally has played nice in the past, but there are examples of how even companies with the best intentions do not always keep their principles.

                                                                              1. 9

                                                                                I’ve been a very happy Void user for about 6 years now. It’s really a hidden gem of a distro. The people running things are smart, extremely competent, and have good taste.

                                                                                That’s a great blog post, too, with all of the reasons for the switch well-explained.

                                                                                1. 5

                                                                                  I’m a long term Gentoo user (2004 ~ 2009 and 2012 ~ now) but have used Void on my router, media PC, self-hosting server, and now on my primary laptop for about a month (also went full Sway/Wayland and stuck with it this time). It really is an amazing distribution. I’m surprised at how many packages it has and how well it’s put together.

                                                                                  I don’t have any irons in the fire in OpenSSL vs LibreSSL. I remember when libre started, but had no idea the two were still so vastly different as far as linking/implementing. I’m all for whatever makes maintaining easier without sacrificing security. This seems like a good move, and a good time to catch myself up on the LibreSSL development news too.

                                                                                1. 20

                                                                                  I love plain text protocols, but … HTTP is neither simple to implement nor fast to parse.

                                                                                  1. 7

                                                                                    Yeah the problem of parsing text-based protocols in an async style has been floating around my head for a number of years. (I prefer not to parse in the async or push style, but people need to do both, depending on the situation.)

                                                                                    This was motivated by looking at the nginx and node.js HTTP parsers, which are both very low level C. Hand-coded state machines.


                                                                                    I just went and looked, and this is the smelly and somewhat irresponsible code I remember:

                                                                                    https://github.com/nodejs/http-parser/blob/master/http_parser.c#L507

                                                                                    /* Proxied requests are followed by scheme of an absolute URI (alpha).
                                                                                     * All methods except CONNECT are followed by ‘/’ or ‘*’.

                                                                                    I say irresponsible because it’s network-facing code with tons of state and rare code paths, done in plain C. nginx has had vulnerabilities in the analogous code, and I’d be surprised if this code didn’t.


                                                                                    Looks like they have a new library and admit as much:

                                                                                    https://github.com/nodejs/llhttp

                                                                                    Let’s face it, http_parser is practically unmaintainable. Even introduction of a single new method results in a significant code churn.

                                                                                    Looks interesting and I will be watching the talk and seeing how it works!

                                                                                    But really I do think there should be text-based protocols that are easy to parse in an async style (without necessarily using Go, where goroutines give you your stack back)

                                                                                    A while back I did an experiment with netstrings, because length-prefixed protocols are easier to parse async than delimiter-based protocols (like HTTP, with its newlines). I may revisit that experiment, since Oil will likely grow netstrings: https://www.oilshell.org/release/0.8.7/doc/framing.html
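
                                                                                    To make the framing idea concrete, here’s a minimal netstring codec sketch in Python (the wire format is `<len>:<payload>,`; the function names are my own, not from the Oil docs). Because the length arrives first, an incremental reader always knows exactly how many bytes it still needs, instead of scanning for a delimiter:

```python
def encode_netstring(payload: bytes) -> bytes:
    # A netstring is the payload length in ASCII, a colon, the payload, a comma:
    # b"hello" -> b"5:hello,"
    return str(len(payload)).encode() + b":" + payload + b","

def decode_netstring(buf: bytes):
    """Return (payload, remaining_bytes), or (None, buf) if incomplete.

    This is the async-friendly shape: feed it whatever bytes have arrived
    so far, and it either yields one complete message or asks for more."""
    colon = buf.find(b":")
    if colon == -1:
        return None, buf           # length prefix not complete yet
    length = int(buf[:colon])
    end = colon + 1 + length
    if len(buf) < end + 1:
        return None, buf           # payload (or trailing comma) not here yet
    if buf[end:end + 1] != b",":
        raise ValueError("malformed netstring: missing trailing comma")
    return buf[colon + 1:end], buf[end + 1:]
```

                                                                                    Anything left over after one message is simply the start of the next one, which is what makes the state machine so small.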


                                                                                    OK wow that new library uses a parser generator I hadn’t seen:

                                                                                    https://llparse.org/

                                                                                    https://github.com/nodejs/llparse

                                                                                    which does seem like the right way to do it: do the inversion automatically, not manually.

                                                                                    1. 4

                                                                                      Was going to say this. Especially when people misbehave around things like Content-Length and Transfer-Encoding: chunked; the fact that request smuggling exists at all seems to imply it’s too complex. Plus, I still don’t know which response code is appropriate for every occasion.
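
                                                                                      To show what that ambiguity looks like on the wire, here’s a toy sketch (my own simplified parsers, not production code) of the classic case: RFC 7230 says Transfer-Encoding wins when both headers are present, but a front end that honors Content-Length instead leaves a second request behind for the back end:

```python
# A request where Content-Length and Transfer-Encoding disagree.
SMUGGLED = (
    b"POST / HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Content-Length: 4\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
    b"0\r\n"
    b"\r\n"
    b"GET /admin HTTP/1.1\r\n"
    b"\r\n"
)

def split_message(raw):
    headers, _, rest = raw.partition(b"\r\n\r\n")
    return headers.split(b"\r\n"), rest

def body_by_content_length(raw):
    # Interpretation 1: honor Content-Length (a naive front end).
    header_lines, rest = split_message(raw)
    for line in header_lines:
        if line.lower().startswith(b"content-length:"):
            return rest[:int(line.split(b":")[1])]

def body_by_chunked(raw):
    # Interpretation 2: honor Transfer-Encoding: chunked, as RFC 7230 requires.
    # Assumes no trailer headers after the 0-size chunk. Returns (body, leftover);
    # the leftover bytes get parsed as a *second* request -- the smuggled one.
    _, rest = split_message(raw)
    body, pos = b"", 0
    while True:
        crlf = rest.index(b"\r\n", pos)
        size = int(rest[pos:crlf], 16)
        pos = crlf + 2
        if size == 0:
            return body, rest[pos + 2:]   # skip the blank line after the last chunk
        body += rest[pos:pos + size]
        pos += size + 2
```

                                                                                      The Content-Length parser sees a 4-byte body; the chunked parser sees an empty body plus leftover bytes that look like a fresh GET /admin request. Two conforming-looking implementations, two different answers.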

                                                                                      1. 2

                                                                                        Curious what part of HTTP you think is not simple? And on which side (client, server)

                                                                                        1. 5

                                                                                          There’s quite a bit. You can ignore most of it, but once you get to HTTP/1.1 where chunked-encoding is a thing, it starts getting way more complicated.

                                                                                          • Status code 100 (continue + expect)
                                                                                          • Status code 101 - essentially allowing hijacking of the underlying connection to use it as another protocol
                                                                                          • Chunked transfer encoding
                                                                                          • The request “method” can technically be an arbitrary string - protocols like webdav have added many more verbs than originally intended
                                                                                          • Properly handling caching/CORS (these are more browser/client issues, but they’re still a part of the protocol)
                                                                                          • Digest authentication
                                                                                          • Redirect handling by clients
                                                                                          • The Range header
                                                                                          • The application/x-www-form-urlencoded format
                                                                                          • HTTP 2.0 which is now a binary protocol
                                                                                          • Some servers allow you specify keep-alive to leave a connection open to make more requests in the future
                                                                                          • Some servers still serve different content based on the User-Agent header
                                                                                          • The Accept header

                                                                                          There’s more, but that’s what I’ve come up with just looking quickly.
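
                                                                                          As one small example of the hidden complexity in just one of those bullets, here’s a hedged sketch of parsing the Range header (my own simplified take; RFC 7233 has more corner cases than this handles, e.g. overlap coalescing and unsatisfiable ranges):

```python
def parse_range(header: str, size: int):
    """Parse a Range header like 'bytes=0-499,-500' into (start, end) pairs
    (end inclusive), clamped to a resource of `size` bytes."""
    unit, _, spec = header.partition("=")
    if unit.strip() != "bytes":
        raise ValueError("unsupported range unit")
    ranges = []
    for part in spec.split(","):
        first, _, last = part.strip().partition("-")
        if first == "":
            # Suffix range: the last N bytes of the resource.
            start = max(size - int(last), 0)
            end = size - 1
        else:
            # Normal range; a missing end ("500-") means "to the end".
            start = int(first)
            end = min(int(last), size - 1) if last else size - 1
        ranges.append((start, end))
    return ranges
```

                                                                                          Even this toy version has three distinct syntaxes to handle (`0-499`, `-500`, `500-`), and that’s before multipart/byteranges responses enter the picture.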

                                                                                          1. 3

                                                                                            Would add to this that it’s not just complicated because all these features exist, it’s very complicated because buggy halfway implementations of them are common-to-ubiquitous in the wild and you’ll usually need to interoperate with them.

                                                                                            1. 1

                                                                                              And, as far as I know, there is no conformance test suite.

                                                                                              1. 1

                                                                                                Ugh, yes. WPT should’ve existed 20 years ago.

                                                                                            2. 2

                                                                                              Heh, don’t forget HTTP/1.1 Pipelining. Then there’s caching, and ETags.

                                                                                          2. 2

                                                                                            You make a valid point. I find it easy to read as a human being though, which is also important when dealing with protocols.

                                                                                            I’ve found a lot of web devs I’ve interviewed have no idea that HTTP is just plain text over TCP. When the lightbulb finally goes on for them a whole new world opens up.
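
                                                                                            For anyone who hasn’t had that lightbulb moment yet, this sketch is about all there is to it (example.com is just a stand-in host; error handling and redirects are omitted):

```python
import socket

# An HTTP/1.1 request is literally lines of text separated by CRLF.
REQUEST = (
    "GET / HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
).encode()

def fetch_raw(host: str, port: int = 80) -> bytes:
    # Open a plain TCP connection, write the text, read the raw reply.
    with socket.create_connection((host, port)) as sock:
        sock.sendall(REQUEST)
        chunks = []
        while data := sock.recv(4096):
            chunks.append(data)
    return b"".join(chunks)

def status_code(response: bytes) -> int:
    # The reply is text too: "HTTP/1.1 200 OK", headers, blank line, body.
    return int(response.split(b"\r\n", 1)[0].split(b" ")[1])
```

                                                                                            You can demonstrate the same thing interactively with `nc example.com 80` and typing the request by hand, which is usually when it clicks.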

                                                                                            1. 4

                                                                                              It’s interesting to note that while “original HTTP” was plain text over TCP, we’re heading toward a situation where HTTP is a binary protocol run over an encrypted connection and transmitted via UDP—and yet the semantics are still similar enough that you can “decode” back to something resembling HTTP/1.1.

                                                                                              1. 1

                                                                                                UDP? I thought HTTP/2 was binary over TCP. But yes, TLS is a lot easier thanks to ACME cert issuance and LetsEncrypt for sure.

                                                                                                1. 2

                                                                                                  HTTP/3 is binary over QUIC, which runs over UDP.

                                                                                            2. 1

                                                                                              SIP is another plain text protocol that is not simple to implement. I like it and it is very robust though. And it was originally modeled after HTTP.

                                                                                            1. 2

                                                                                              EFI is not ubiquitous, especially not in embedded systems.

                                                                                              What embedded systems support the systemd bootloader specification and do not have an EFI-compatible bootloader?

                                                                                              1. 1

                                                                                                Yea, I didn’t understand that either. Microsoft’s Nokia phones had UEFI+ARM, even though they were locked. I think people have been able to unlock them, but there are still a ton of missing drivers to be able to run your own stuff on old Win phones.

                                                                                                Honestly we should see MORE UEFI on all embedded devices. The Pi5 should support UEFI. All the SoCs like the Beagle Bone and Banana Pi should use UEFI. No one uses device tree and it’s garbage anyway. UEFI would go a long way to reducing embedded and SoC e-waste by just giving us a common way to at least start trying to reverse engineer old ARM devices.

                                                                                                1. 6

                                                                                                  No one uses device tree and it’s garbage anyway.

                                                                                                  It may be garbage, but almost every platform besides x86 uses it on every board (at least for Linux).

                                                                                                  1. 3

                                                                                                    No one uses device tree and it’s garbage anyway.

                                                                                                    Unfortunately for my mental sanity, almost every embedded Linux gadget uses device trees. Most of them don’t have the underlying firmware to dynamically build and query it, of course, the device tree gets “fed” to the kernel through various other mechanisms (usually via U-Boot, which acts as both bootloader and firmware), but device trees are otherwise quite universal. The tooling is at the intersection of horrifying and useless, which is why everyone hates device trees – it is garbage, but literally everyone uses it. Even for “platform devices”, which you can technically initialise without device trees, too.

                                                                                                    IMHO adding a UEFI layer to embedded devices wouldn’t help much. Most of the peripheral devices get connected on buses that don’t really support enumeration anyway – I2C, SPI, all sorts of internal SoC buses and so on. So you’d get on-board firmware that enumerates devices from a statically-defined device tree anyway, except with an extra layer of manufacturer-supplied bugs, upgrading headaches and so on.

                                                                                                    The idea of loading various platforms with BIOS-like firmwares has been floated around the embedded space for a very long time, since the mid 90s at the very least. It hasn’t caught on because it doesn’t actually help much, certainly not enough to warrant the extra cost. Things like the Beagle Bone and the Banana Pi are definitely not representative of the embedded systems space at large – they’re hobbyist platforms that often do get used as reference designs for real devices, too, but that’s not how cost-sensitive design, where Linux and Android particularly shine, is usually done.

                                                                                                    1. 1

                                                                                                      Slap U-Boot on it. Now you have UEFI.

                                                                                                  1. 1

                                                                                                    I remember getting a new laptop with Lion on it and I desperately tried everything to get Snow Leopard installed on it because I hated Lion that much. I hated it so much I started doing absolutely everything in a Gentoo VM. Once I got a permanent Linux box, I reformatted the Mac with Windows and only used it for games.

                                                                                                    The biggest problem I had with Lion (I didn’t see it mentioned in the article, maybe I missed it) was Mission Control.

                                                                                                    Exposé spread apart all your windows so you could see all of them and select the one you wanted. Workspaces could have both rows and columns.

                                                                                                    Mission Control would group all similar windows together, and it was terrible compared to the full blow-out of Exposé. You could also only have one row of workspaces.

                                                                                                    To this day, KDE Plasma and other window managers on Linux allow for Exposé-type window expansion (the last remnants of those old Compiz effects are still options in KWin) and multiple rows of desktops. Mac has never added these features back in, as far as I know.

                                                                                                    1. 2

                                                                                                      For quite a while, I had a habit of acquiring used macs to run Linux on them.

                                                                                                      [Snow] Leopard was the only OS X version I seriously considered using, when I saw they finally implemented real virtual desktops like any UNIX DE did for decades. I was really surprised when they removed that in Lion, and I’m glad I stayed with Linux.

                                                                                                      1. 1

                                                                                                        They didn’t remove it with Lion? They still have workspaces (just renamed to “spaces”).

                                                                                                        They changed it from a 2D grid of workspaces to a 1D horizontal line of spaces, added a really nice touch pad gesture to switch to the workspace to the left or right, and let you switch workspace, add/remove workspaces and rearrange workspaces from Mission Control. Fairly similar to GNOME’s implementation of virtual desktops actually.

                                                                                                        1. 1

                                                                                                          What GNOME3 and OS X do doesn’t allow using virtual desktops for organizing your workspace. Without an option to enable a fixed layout, that counts as a feature removal.

                                                                                                          1. 1

                                                                                                            I’m skeptical about the usefulness of 1D workspaces. I think the 2D interface helps a lot with organization (this also goes for file management).

                                                                                                            On 10.5 and 10.6, I would have my main development environment on (1,1), my music player on (1,2) and my web browser on (2,1). Going from (1,1) to (1,2) and (2,1) was Ctrl-Down and Ctrl-Right.

                                                                                                      1. 1

                                                                                                        I post a lot of my own stuff here if it fits into any of the tags, but I also try to submit a fair amount of content I pull from other RSS feeds that I think would fit. I need to get back to tech posts though; my last two were moderated out.

                                                                                                        1. 2

                                                                                                          If we keep using this argument then we should all code in Rust/C++. But we got JavaScript/Ruby/Python/PHP?

                                                                                                          To me, icon fonts are better, simply because they fit my needs: I don’t need the advanced features that SVG offers. All I need is to display an icon and change its color with CSS. The Font Awesome icon font works great for me.

                                                                                                          1. 25

                                                                                                            The important part of the argument was not that SVG icons are more flexible, but that icon fonts can interfere with a bunch of features, break down in a bunch of scenarios, and create additional complexity for a bunch of other software that has to deal with them.

                                                                                                            1. 4

                                                                                                              In fact, to this day icon fonts remain the only way to add scalable icons purely from CSS without altering the page markup (and going back to mixing presentation and content).

                                                                                                              All pure SVG solutions either require changes to the markup or don’t allow CSS to alter the images colors.

                                                                                                              1. 4

                                                                                                                You can render an svg from css by setting background-image?

                                                                                                                1. 1

                                                                                                                  Yes, but then you can’t change the color with CSS

                                                                                                                  1. 5

                                                                                                                    Actually you can change SVG colors with CSS (including background images), if you are willing to use some black-magic color transformation tricks : https://css-tricks.com/solved-with-css-colorizing-svg-backgrounds/ (thar’ be dragons!)

                                                                                                                2. 2

                                                                                                                  That’s true, and a very strange limitation to still be stuck with.

                                                                                                                  However, most icon font use is not doing that. Usually the page is littered with empty i tags or similar, and often the icon isn’t even recoloured!

                                                                                                                  1. 2

                                                                                                                    You can change the color of an SVG using filter:

                                                                                                                    .iconinverted {
                                                                                                                      filter: invert(0.5) sepia(1) saturate(5) hue-rotate(175deg);
                                                                                                                    }
                                                                                                                    

                                                                                                                    I use that on my contact page

                                                                                                                    1. 1

                                                                                                                      I think this is the main argument for icon fonts – most of the time the use is just to improve the design, not the content.

                                                                                                                    2. 0

                                                                                                                      I totally see what you are saying. Very clear.