Threads for klingtnet

    1.  

      The what’s new page not only provides a lot of details, it also gives a better overview than the linked release notes, IMO.

    2. 31

      I owned one of these as my first work laptop and I cannot agree: it’s a decent laptop, but far from the best. What I disliked the most was its abysmal display: dark, low resolution, bad color reproduction. As usual with Lenovo, the screen is a lottery, and you cannot infer from the model number which manufacturer the panel comes from. The keyboard was pretty good though, even if it had a lot of flex and feels pretty cheap compared to what you get nowadays. Also, I don’t get the point of carrying another battery pack: to swap it out you need to power down the machine. HP’s EliteBook 8460[w/p] models could be configured with a 9-cell battery and an optional battery slice, which gave them almost a full day of battery life. Those EliteBooks were built like a tank, but at the same time very heavy. Compared to the X220 they’re the better laptops, in my opinion. However, the best laptop is an Apple silicon MacBook Air. It’s so much better than everything else available that it’s almost unfair: no fan noise, all-day battery life, instant power on, and very powerful. It would be great if it could run any Linux distribution, but macOS just works and is good enough for me.

      1. 7

        I totally disagree, and I have both an X220 and an M1 MacBook Air.

        I much prefer the X220. In fact, I have 2 of them, and I only have the MBA because work bought me one. I would not pay for it myself.

        I do use the MBA for travel sometimes, because at a conference it’s more important to have something very portable, but it is a less useful tool in general.

        I am a writer. The keyboard matters more than almost anything else. The X220 has a wonderful keyboard and the MBA has a terrible keyboard, one of the worst on any premium laptop.

        Both my X220s have more RAM, 1 or 2 aftermarket SSDs, and so on. That is impossible with the MBA.

        My X220s have multiple USB 2 and multiple USB 3 ports, plus DisplayPort, plus VGA. I can have it plugged in and still run 3 screens, a keyboard, and a mouse, and still have a spare port. On the MBA this means carrying a hub, and thus its thinness and lightness go away.

        I am 6’2”. I cannot work on a laptop in a normal plane seat. I do not want to have to carry my laptop on board. But you cannot check in a laptop battery. The X220 solves this: I can just unplug its battery in seconds, and take only the battery on board. I can also carry a charged spare, or several.

        The X220 screen is fine. I am 55. I owned 1990s laptops. I remember 1980s laptops. I remember greyscale passive-matrix LCDs and I know why OSes have options to help you find the mouse cursor. The X220 screen is fine. A bit higher-res would be fine but I cannot see 200 or 300ppi at laptop screen range so I do not need a bulky GPU trying to render invisibly small pixels. It is a work tool; I do not want to watch movies on it.

        I have recently reviewed the X13S Arm Thinkpad, and the Z13 AMD Thinkpad, and the X1 Carbon gen 12.

        My X220 is better than all of them, and I prefer it to all of those and to the MacBook Air.

        I say all this not to say YOU ARE WRONG because you are entitled to your own opinions and choices. I am merely trying to clearly explain why I do not agree with them.

        … And why it really annoys me that you and your choices have so limited the market that I have to use a decade-old laptop to get what I want in a laptop because your choices apparently outweigh mine and nobody makes a laptop that does what I want in a laptop any more, including the makers of my X220.

        That is not fair and that is not OK.

        1. 5

          It’s perfectly fair to like the X220 and other older laptop models, that’s simply personal preference.

          … nobody makes a laptop that does what I want in a laptop any more, including the makers of my X220.

          Probably because your requirements are very specific and “developer” laptops are a niche market.

          … And why it really annoys me that you and your choices have so limited the market that I have to use a decade-old laptop to get what…

          Neither I, nor anyone else who bought an Apple product is responsible for your choice of a laptop.

          1. 5

            The core of my disagreement is with this line:

            Probably because your requirements are very specific and “developer” laptops are a niche market.

            1. I don’t think my requirements are very specific.
            2. I am not a developer, and I don’t know what a “developer” laptop is meant to be.
            3. I don’t think it’s that niche:
               a. Mine is a widely-held view.
               b. The fact that there is such a large aftermarket in classic Thinkpads and parts for them, even upgrade motherboards, falsifies this claim.
            4. It was not a specialist tool when new; it was a typical pro-grade machine. It’s not a niche product.
            5. This change in marketing is not about ignoring niche markets. It’s about two things: reducing cost, and thus increasing margin; and about following trends and not doing customer research.

            Comparison: I want a phone with a removable battery, a headphone socket, physical buttons I can use with gloves on, and at least 2 SIM slots plus a card slot. These are all simple easy requirements which were ubiquitous a decade ago, but are gone now, because everyone copies the market leaders, without understanding what makes them the market leader.

            1. 2

              If there were a significant market for a new laptop with features similar to the X220, there would be such a laptop offered for sale.

              There’s no conspiracy.

              1. 2

                I didn’t claim there was any conspiracy.

                Whereas ISTM that your argument amounts to “if people wanted that they’d buy it, so if they don’t, they mustn’t want it”. Which is trivially falsified: this does not work if there is no such product to buy.

                But there used to be, same as I used to have a wide choice of phones with physical buttons, headphone sockets, easily augmented storage, etc.

                In other markets, companies are thriving by supplying products that go counter to industry trends. For instance, the Royal Enfield company supplies inexpensive, low-powered motorcycles that are easily maintained by their owners, which goes directly counter to the trend among Japanese motorcycles of constantly increasing power, lowering weight, and removing customer-maintainability by making highly-integrated devices with sealed, proprietary electronics controlling them.

                Framework laptops are demonstrating some of this for laptops.

                When I say that the major brands lack innovation, are derivative, and copy one another, that is hardly even a controversial statement. Calling it a conspiracy theory is borderline offensive and I am not happy with that.

                1. 2

                  Margins in the laptop business are razor-thin. Laptops are seen as a commodity. The biggest buyers are businesses who simply want to provide their employees with a tool to do their jobs.

                  These economic facts do tend to converge available options towards a market-leader sameness, but that’s simply how the market works.

                  Motorcycles are different. They’re consumer/lifestyle products. You don’t ride a Royal Enfield because you need to, you do it because you want to, and you want to signal within the biker community what kind of person you are.

                  1. 2

                    Still no.

                    Laptops are seen as a commodity.

                    This is the core point. For instance, my work machine, which I am not especially fond of, is described in reviews as being a standard corporate fleet box.

                    I checked the price when reviewing the newer Lenovos, and it was about £800 in bulk.

                    But I have reviewed the X1 Carbon as a Linux machine, the Z13 similarly, and the Arm-powered X13s both with Windows and with Linux.

                    These are, or were when new, all ~£2000 premium devices, some significantly more.

                    And yet, my budget-priced commodity fleet Dell has more ports than any of them, even the flagship X1C – that has 4 USB ports, but the Dell, at about a third of the price, has all those and HDMI and Ethernet.

                    This is not a cost-cutting thing at the budget end of the market. These are premium devices.

                    And FWIW I think you’re wrong about the Enfields, too. The company is Indian, and survived decades after the UK parent company died, outcompeted by cheaper, better-engineered Japanese machines.

                    Enfield faded from world view, making cheap robust low-spec bikes for a billion Indian people who couldn’t afford cars. Then some people in the UK noticed that they still existed, started importing them, and the company made it official, applied for and regained the “Royal” prefix and now exports its machines.

                    But the core point that I was making was that in both cases, it is the budget machines at the bottom of the market which preserve the ports. It is the expensive premium models which are the highly-integrated, locked-down sealed units.

                    This is not cost-cutting; it is fashion-led. Like women’s skirts and dresses without pockets, it is designed for looks, not practicality, and sold for premium prices.

                    1. 1

                      Basically, what I am reading from your comments is that Royal Enfield motorcycles (I knew about the Indian connection, btw, but didn’t know they’d made a comeback in the UK) and chunky black laptops with a lot of ports are for people without a lot of money, or who prefer not to spend a lot of money on bikes or laptops.

                      Why there are not more products aimed at this segment of the market is left as an exercise to the reader.

                      1.  

                        ISTM that you are adamantly refusing to admit that there is a point here.

                        Point Number 1:

                        This is not some exotic new requirement. It is exactly how most products used to be, in laptops, in phones, in other sectors. Some manufacturers cut costs, sold it as part of a “fashionable” or “stylish” premium thing, everyone else followed along like sheep… And now it is ubiquitous, and some spectators, unable to follow the logic of cause and effect, say “ah well, it is like that because nobody wants those features any more.”

                        And no matter how many of us stand up and say “BUT WE WANT THEM!” apparently we do not count for some reason.

                        Point Number 2:

                        more products aimed at this segment of the market

                        That’s the problem. Please, I beg you, give me links to any such device available in the laptop market today.

                        1.  

                          I don’t doubt there are people who want these features. They’re vocal enough.

                          But there are not enough of them (either self-declared, or found via market research) for a manufacturer to make the bet that they will make money making products for this market.

                          It’s quite possible that a new X220-like laptop would cost around $5,000. Would such a laptop sell enough to make money back for the manufacturer?

                          1.  

                            It’s quite possible that a new X220-like laptop would cost around $5,000. Would such a laptop sell enough to make money back for the manufacturer?

                            The brown manual wagon problem: everyone who says they want one will only buy one used, seven years later.

          2. 2

            “Probably because your requirements are very specific and “developer” laptops are niche market.”

            I’d suggest an alternate reason. Yes, developer laptops are a niche market. But I’d propose that laptops moved away from the X220 as a result of chasing “thinner and lighter” above all else, plus lowering costs. And when the majority of manufacturers all chase the same targets, you get skewed results.

            Plus: User choice only influences laptop sales so much. I’m not sure what the split is, but many laptops are purchased by corporations for their workforce. You get the option of a select few laptops that business services / IT has procured, approved, and will support. If they are a Lenovo shop or a Dell shop and the next generation or three suck, it has little impact on sales because it takes years before a business will offer an alternative. If they even listen to user complaints.

            And if I buy my own laptop, new, all the options look alike - so there’s no meaningful way to buy my preference and have that influence product direction.

            “Neither I, nor anyone else who bought an Apple product is responsible for your choice of a laptop.”

            Mostly true. The popularity of Apple products has caused the effect I described above. When Apple started pulling ahead of the pack, instead of (say) Lenovo saying “we’ll go the opposite direction”, the manufacturers chased the Apple model. In part due to repeated feedback that users want laptops like the Air, so we get the X1 Carbons. And ultimately all the Lenovo models get crappy chiclet keyboards, many get soldered RAM, fewer ports, etc. (As well as Dell, etc.)

            (Note I’m making some pretty sweeping generalizations here, but my main point is that the market is limited not so much because the OP’s choices are “niche” but because the market embraces trends way too eagerly and blindly.)

            1. 2

              Mostly true. The popularity of Apple products has caused the effect I described above. When Apple started pulling ahead of the pack, instead of (say) Lenovo saying “we’ll go the opposite direction”, the manufacturers chased the Apple model. In part due to repeated feedback that users want laptops like the Air, so we get the X1 Carbons. And ultimately all the Lenovo models get crappy chiclet keyboards, many get soldered RAM, fewer ports, etc. (As well as Dell, etc.)

              This reminds me a great deal of my recurring complaint that it’s hard to find a car with a manual transmission anymore. Even down to the point that, last time I was shopping, I looked at German-designed/manufactured vehicles, knowing that the prevailing sentiment last time I visited Germany was that automatic transmissions were for people who were elderly and/or disabled.

              I think the reasons are very similar.

              1. 1

                The move to hybrid and electric has also shrunk the market for manual transmissions.

                I’ve done my time with manual. My dual-clutch automatic has at least as good fuel economy and takes a lot of the drudge out of driving.

            2. 1

              All of this! Well said, Joe.

        1. 13

          This, but Asahi still has a long, long way to go before it can be considered stable enough to be a viable replacement for macOS.

          For the time being, you’re pretty much limited to running macOS as a host OS and then virtualize Linux on top of it, which is good enough for 90% of use cases anyway. That’s what I do and it works just fine, most of the time.

          1. 3

            Out of curiosity, what are you using for virtualization? The options for arm64 virtualization seemed slim last I checked (UTM “works” but was buggy. VMWare Fusion only has a tech preview, which I tried once and also ran into problems). Though this was a year or two ago, so maybe things have improved.

            1. 5

              VMware and Parallels have full versions out supporting Arm now, and there are literally dozens of “light” VM runners out now, using Apple’s Virtualisation framework (not to be confused with the older, lower level Hypervisor.framework)

            2. 2

              I’m using UTM to run FreeBSD and also have Podman set up to run FreeBSD containers (with a VM that it manages). Both Podman (open source) and Docker Desktop (free for orgs with fewer than, I think, 250 employees) can manage a Linux VM for running containers. Apple exposes a Linux binary for Rosetta 2 that Docker Desktop uses, so can run x86 Linux containers.

            3. 2

              I’m not speaking for @petar, but I use UTM when I need full fat Linux. (For example, to mount an external LUKS-encrypted drive and copy files.) That said, I probably don’t push it hard enough to run into real bugs. But the happy path for doing something quick on a Ubuntu or Fedora VM has not caused me any real headaches.

              It feels like most of the other things I used to use a Linux VM for work well in Docker desktop. I still have my ThinkPad around (with a bare metal install) in case I need it, but I haven’t reached for it very often in the past year.

    3. 1

      The page shows a modal that says

      Keep reading Real Python by creating a free account or signing in:

      Luckily, the archived page is less user hostile.

    4. 1
      var script = document.createElement('script');
      script.src = "https://unpkg.com/htmx.org@1.9.5";
      script.integrity = "sha384-xcuj3WpfgjlKF+FXhSQFQ0ZNr39ln+hwjN3npfM9VBnUskLolQAcN80McRIVOPuO";
      script.crossOrigin = 'anonymous';
      script.onload = function() {
          var body = document.querySelector("body");
          body.setAttribute('hx-boost', "true");
          htmx.process(body);
      }
      document.head.appendChild(script);
      

      Useless use of var. This is the same as writing window.script = document.createElement('script');. 😕

      1. 1

        My JS skills are lacking, what would be the “correct” thing to do instead?

        1. 2

          script = document.createElement…

          Would be equivalent to what you wrote.

          If you want the name to be invisible to other scripts then you could use an “IIFE” or a module.

          Edit: or you could maybe just wrap the script in braces and use let. I can’t remember what version of JS you need for that to work (non-strict, strict or module mode).
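          To illustrate the brace-plus-`let` option: in any modern engine, a `let` declared inside a bare block is scoped to that block, so the name never reaches the global object. A minimal sketch (using a plain object as a stand-in for the DOM element so it also runs outside a browser):

```javascript
{
  // stand-in for document.createElement('script'), so this runs in Node too
  let script = { src: "" };
  script.src = "https://unpkg.com/htmx.org@1.9.5";
}
// out here the name is gone: `script` is undeclared, and window.script is untouched
console.log(typeof script); // "undefined"
```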

        2. 2

          There are a variety of choices. The quickest is to write an IIFE:

          (function() {
            var script = document.createElement('script');
            //...
          })();
          

          Another choice is to run the script in modern mode by linking to it with <script type="module" src="/whatever.js"></script>, but that sometimes has unwanted side effects (if a script expected to write to the window, for example). I’m not sure Sphinx has a mode for that.

          Another thing Sphinx probably doesn’t have a mode for is just writing the script tag directly, since all this JS does is add another script tag that you can just write yourself.

          <script 
            src="https://unpkg.com/htmx.org@1.9.5"
            integrity="sha384-xcuj3WpfgjlKF+FXhSQFQ0ZNr39ln+hwjN3npfM9VBnUskLolQAcN80McRIVOPuO"
            crossorigin="anonymous"
            onload="document.body.setAttribute('hx-boost', 'true'); htmx.process(document.body);"
          ></script>
          
    5. 3

      What are build times like for go compared to rust? Is it faster for comparably sized large projects?

      1. 15

        based on my own personal (anecdotal) experience, small to medium sized projects take N minutes to compile via rust, and N seconds to compile via go.

        after initial compilation, subsequent compilations will still take longer in rust, but it’s much more comparable (both languages do some caching to make things faster here).

        1. 1

          mold is our savior.

      2. 8
      3. 3

        If the article authors provided the source code repositories we could easily measure how long it takes, but based on my personal experience the difference in compile times is huge. Go compile times are usually in the single-digit seconds range, except for projects using a lot of Cgo or very large ones, like Kubernetes.

        1. 4

          Could I politely offer a “yes, and…”? Yes, the Go compiler is faster, but in my experience any large Go project will use linters, many of them. Linting is slow. The linters catch many errors, like not checking a returned error, which the Rust compiler requires you to check. I suppose that means you can get an executable quickly, but I feel comparing Go’s compilation speed vs Rust without linting doesn’t capture real-world usage.

          1. 4

            In my experience, the speed of the Go compiler allows you to move quickly during development. And yes, you do spend a few extra seconds linting before committing your code or cutting a release. I don’t typically run the compiler and linter in lock step.

            During development, we tend to compile frequently but lint less often. This is especially true if you’re already using VSCode or Goland for incremental linting.

          2. 4

            Linting and compiling are completely decoupled. You can do them in parallel if you’d like.

          3. 4

            Linters are usually just as fast as the compiler, O(seconds).

    6. 63

      It is an indictment of our field that this opinion is even controversial. Of course XML is better than YAML. YAML is a pile of garbage that should be abandoned completely. XML at least has a lot of decent engineering behind it for specific purposes.

      1. 68

        Meh, these kinds of absolute statements don’t really shed any light on the problem.

        Seems like fodder for self-righteous feelings

        1. 28

          You’re right. The principles should be laid out:

          • Ease of reasoning about configuration file formats is vastly more important than conveniences for writing specific values.
          • Implicit conversion among types, beyond very basic lifting of integer types, is a bad idea, especially for configuration file formats.
          • Grammars for configuration file formats should be simple enough that writing a complete, correct grammar is a one-day project.

          XML is kind of a weird animal because it’s playing the role equivalent to text for JSON. The principles above apply to the DTD you apply to your XML schema.

          1. 1

            Where does YAML do implicit type conversions?

            1. 6

              The Norway problem is a good example of this.
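
              For readers who haven’t hit it: under the YAML 1.1 rules that many parsers still implement, several unquoted literals resolve to booleans rather than strings. A minimal sketch:

```yaml
# Under YAML 1.1 resolution rules, these unquoted literals are booleans:
countries:
  - se    # the string "se"
  - no    # boolean false under YAML 1.1 — hence "the Norway problem"
  - "no"  # quoted: always the string "no"
```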

              1. 2

                There is no implicit type conversion going on on the YAML side. no is a boolean in YAML, just like false is a boolean in JSON. If a YAML parser converts it to a string, that’s the parser’s problem.

                1. 3

                  Ha. I can tell you’ve never written a parser before!

                  1. 2

                    No, @xigoi is right, strictly speaking. The parser is where this conversion happens: only if it cannot read an unquoted literal as anything else does it fall back to treating it as a string. Of course, to a user that is neither here nor there: the rules need to be memorized to be able to use unquoted literals correctly.

                    1. 6

                      the rules need to be memorized to be able to use unquoted literals correctly

                      You’ll have a better time if you just use quotes by default… I don’t understand the appeal of unquoted literals in YAML

                      This, for me, is the root of it. YAML is fine as long as you are explicit. Now, what it takes to be explicit is going to be driven by what types you intend to use. It seems to me that the majority of YAML use cases intend to use only a handful of scalar types and a handful of collection types. That small set of types, not coincidentally, is basically the same as what you get in JSON, and properly formed JSON is always valid YAML. So I would assert that if you use YAML and explicitly quote string values, you are effectively getting a slightly looser JSON parser which happens to allow you to write a flavor of JSON that is much easier for human concerns; i.e., less picky about trailing commas, supports comments, and is easier on the eyes with some of its constructs.
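
                      As a concrete illustration of that point, here is a JSON fragment pasted into a YAML file unchanged, plus the small conveniences YAML adds on top (assuming a YAML 1.2 parser):

```yaml
# Valid JSON is also valid YAML, so this flow-style mapping parses as-is:
server: {"name": "web-1", "ports": [80, 443]}
# ...and YAML adds comments and a lighter block style on top:
replica: "web-2"   # explicit quotes keep string values unambiguous
enabled: "no"      # quoted, so this stays the string "no", not false
```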

                      Of course, we’ve got a whole shitload of options these days, so I wouldn’t be surprised if some other markup/serialization format is better in any given specific domain. Different tools for different jobs…

                      One thing I will absolutely agree with is that YAML is awful when used as a basis for pseudo-DSLs, as you see in things like Ansible and a lot of CI/CD systems.

                      1. 2

                        I think we basically agree, but in my opinion one should accept that people are lazy (or forgetful) and use shortcuts, or even copy/paste bad examples. This is like saying sloppiness in PHP or JS is not a problem because one can always use ===.

                        Most people don’t have the discipline to be explicit all the time (some don’t have the discipline to ever be explicit), therefore it’s probably safer to avoid tools with overly prominent inbuilt footguns entirely.

        2. 3

          TBH it seems that way because it almost feels pointless to reiterate the absurdity of YAML.

        3. 7

          Rubbish, the list of semantic surprises in YAML is long and established. The problems with XML boil down to “my fingies are sore fwom all the typing” and fashion.

          1. 21

            One of the most talented developers I know can only work for 2-3 hours a day on a good day because of RSI. I don’t think your patronising take carries the weight you think it does.

            1. 3

              That some people have physical difficulties does not at all impact the validity of the greater population’s supposed concerns about verbosity.

              1. 3

                Let’s also make websites inaccessible because most people don’t need screen readers, shall we?

                1. 1

                  You’re making my point. We have accessibility standards and specialised tools. We don’t demand web pages don’t have videos.

          2. 10

            There are other issues with XML. Handling entities is complex as are the rules for name spacing. Writing an XML parser is complex so most people use libxml2, which is a massive library that doesn’t have a great security track record. For most YAML use cases (where the input data is trusted) this doesn’t matter too much. Parsing YAML is also incredibly hard so everyone uses the same YAML parser library.

            1. 1

              Problems in a specific parser can’t be called problems in the format itself. For what it’s worth YAML’s popular parsers have also had horrible security problems in the past.

              If you have a minute to go into detail, I’m interested in what I’ve missed that makes namespaces complicated, I found them pleasing when used correctly, and frankly used so infrequently that it hardly ever came up, outside of specific formats that used xml as a container, for example MXML. But this knowledge is old now in my case, so I probably just missed the use case that you’re referring to.

              The entity expansions should never have been a thing, that much I’m sure we can all agree on. DTDs were a mistake, but XSD cleaned most of that up; but unless you were building general XML tooling you could in most cases ignore schemas and custom entities completely.

              What’s good about XML (aside from how much support and tooling it once had) is IMO:

              • The consistency with which the tree structure is defined. I don’t know why “modern” markups are all obsessed with the idea that the end of a node should be implied by what’s around it, rather than clearly marked, but I can’t stand it.
              • A clear separation of attributes and children.
              • Consistency in results, in that there are no “clever” re-interpretations of text.
              1. 2

                Consider this made up XML:

                <?xml version="1.0" encoding="UTF-8"?>
                <something>
                  <thing xmlns="mynamespace">
                    <item>An item.</item>
                  </thing>
                </something>
                

                Now, let’s query element item using XPath:

                //something/*[namespace-uri()='mynamespace' and local-name()='thing']/*[namespace-uri()='mynamespace' and local-name()='item']

                🤯

                And now imagine querying some element from a deeply nested XML that might contain more than one custom namespace.

                In my opinion XML namespaces just make it harder to work with the documents.
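
                For what it’s worth, most XPath APIs let you bind a prefix to the namespace up front, which shrinks the query considerably. A sketch using Python’s standard-library ElementTree with the same made-up document (the prefix `m` is an arbitrary choice):

```python
import xml.etree.ElementTree as ET

doc = """<something>
  <thing xmlns="mynamespace">
    <item>An item.</item>
  </thing>
</something>"""

root = ET.fromstring(doc)
# Binding a prefix to the namespace keeps the query readable:
ns = {"m": "mynamespace"}
item = root.find("./m:thing/m:item", ns)
print(item.text)  # An item.
```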

                1. 1

                  Dear XPath, please adopt Clark notation so we can do /something/{mynamespace}thing/item

                2. 1

                  Yeah that’s rough as guts 🤣 I’ve never seen somebody override the current namespace in the middle of the document, I never even considered that as something you could do. Nobody should have done this, ever.

                  1. 2

                    As a real-world use case, <svg> elements within HTML documents often set the namespace.

                  2. 1

                    Probably not specifically in this way but I am sure you’ve worked with documents which use different namespaces with nested elements.

              2. 2

                It’s almost 20 years since I did anything serious with XML, but I seem to remember the namespace things let you define alternative names for tags to avoid conflicts, so you had to parse tags as their qualified name, their unqualified name in the current namespace (or the parents?) or their aliased name.

                A lot of the security issues of libxml2 were due to the inherent complexity of the format. There are a lot of JSON parsers because the format is simple. You can write a JSON parser in a couple of hundred lines of code if you have a decent string library. A compliant XML parser is at least one, probably two, orders of magnitude more complex. That significantly increases the probability that it will have bugs.

                I’m also not sure I agree on the ‘clear separation of attributes and children’ thing. XML formats that I’ve worked with have never managed to be completely consistent here. Attributes are unstructured key-value pairs, children are trees, but there are a lot of cases where it’s unclear whether you should put something in an attribute or a child. Things using XML for text markup have to follow the rule that cdata is text that is marked up by surrounding text, but things using XML as a structured data transport often end up accidentally leaking implementation details of their first implementation’s data structures into this decision.

          3. 10

            If you’re creating XML by hand, you’re doing it wrong.

      2. 20

        I have zero real world issues with YAML, honestly. I’ll take YAML over XML every day of the week for config files I have to edit manually. Do I prefer a subset like StrictYAML? Yep. Do I still prefer YAML over anything else? Also yep.

        1. 11

          The problem with YAML is that you believe you have no real world issues until you find out you do.

          1. 5

            This sounds like all the folks who have “no real issues” with MySQL, or PHP (or dare I say it, JavaScript). Somehow the issues with YAML seem more commonly accepted as “objectively” bad, whereas the others tend to get defended more fiercely. I wonder why!

          2. 1

            What is an example of a problem with YAML that syntax highlighting won’t immediately warn you about?

            1. 11

              At a previous employer we had a crazy bug that took a while to track down. When we did, it turned out the root cause was YAML parsing something as a float rather than a string, even though the syntax highlighting treated it as a string.

              I wasn’t the developer on the case, so I don’t remember the exact specifics, but it boiled down to something like this: we used an object ID which was a hex string, and the ID in question was something along the lines of:

              oid: 123E456
              

              which, according to the YAML spec, allows scientific notation. Of course this could be chalked up to a bug in the syntax highlighting, or to our failure to use quotation marks, but the result was the same: a difficult-to-track-down bug downstream.

        2. 6

          real-world problems i’ve had with yaml:

          • multiline strings (used the wrong kind)
          • the norway problem
          • no block end delimiter (chop a file in half arbitrarily and it still parses without error)
    7. 4

      Excellent article, worth reading for the benchmarking instructions alone. I did not know about the -cpu flag before, which is pretty handy for running a benchmark with different core counts! It is probably a convenience flag that sets GOMAXPROCS, but this is just an assumption since I did not look up how the flag is actually implemented.

      Update

      My assumption was correct (godoc):

      -cpu 1,2,4
      	    Specify a list of GOMAXPROCS values for which the tests, benchmarks or
      	    fuzz tests should be executed. The default is the current value
      	    of GOMAXPROCS. -cpu does not apply to fuzz tests matched by -fuzz.
      
      1. 2

        That’s right, and *testing.B.RunParallel also keys off GOMAXPROCS to determine how many goroutines to run in parallel: https://pkg.go.dev/testing#B.RunParallel

    8. 4

      Fascinating article, there is so much to quote. But what stood out the most to me is this part about verification:

      From a verification perspective, we built a simplified model of ShardStore’s logic, (also in Rust), and checked into the same repository alongside the real production ShardStore implementation. This model … allowed us to perform testing at a level that would have been completely impractical to do against a hard drive with 120 available IOPS. From here, we’ve been able to build tools and use existing techniques, like property-based testing, to generate test cases that verify that the behaviour of the implementation matches that of the specification. … we managed to kind of “industrialize” verification, taking really cool, but kind of research-y techniques for program correctness, and get them into code where normal engineers who don’t have PhDs in formal verification can contribute to maintaining the specification, and that we could continue to apply our tools with every single commit to the software.

      I’d love to read/hear more about this executable specification model.

      1. 3

        I’m not sure if it’s the same teams but AWS has published about their use of PlusCal/TLA+. https://www.amazon.science/publications/how-amazon-web-services-uses-formal-methods

        On page 69 of the journal (page 4 of the downloaded pdf) there’s a nice little table of products and things verified. A couple reproduced here:

        S3

        Fault tolerant, low-level network algorithm -> Found two bugs, then others in proposed optimizations

        DynamoDB

        Replication and group-membership system -> Found three bugs requiring traces of up to 35 steps

        1. 1

          Thanks! Here is another relevant talk that is actually about the formal model implemented for ShardStore.

    9. 15

      If this is going to be the proposed solution for iterating over proper containers in Go, this is very disappointing. This is something that every other language has built-in, and this feels terribly bolted on.

      1. 7

        I agree. The proposal feels too barebones to me. I’d rather have seen an API similar to Java’s streams or Rust’s Iterator trait, which are far more powerful. Instead we still have to write plain if conditions to end an iteration, and transforming or collecting one iterator into another collection still requires too much boilerplate.

        1. 1

          I don’t think Go could pull off Rust’s approach. Rust relies on a heavy optimizer that can inline everything and flip the loops inside-out from external iteration to internal.

          If Go just went for an interface with Next(), it’d be an inefficient interface on top of inefficient iterator implementations.

    10. 1

      Apologies for not watching the video, but IMO there’s no reason not to use the WAL journal mode. It’s faster and it enables better concurrency by allowing readers to read during a transaction. Just enable it when first opening the DB, before you define the schema. End of story, no need to watch a whole video about it :)

      1. 1

        I agree that WAL should be the way to go for the majority of applications. If someone’s curious about the potential disadvantages compared to rollback mode then see https://www.sqlite.org/wal.html.

    11. 52

      My spicy take is that there’s no reason to use dot for any of these. A dot hides the files in some systems but not others; and why should project config be hidden at all? I want .git hidden because I never mess with the ref log and only rarely mess with hooks. But everything else should just be non-hidden. If it’s important enough to be part of the project, it’s important enough to be visible!

      1. 12

        It’s not about using a dot or not using a dot; it’s about putting things in a well-known subdirectory of your home directory, rather than scattered across the home directory. For example, .config/vim, not .vim. The location .config itself is defined as the default by XDG, but users can override it by setting the right environment variable.

        On macOS, this problem was solved back when the system was called NeXTSTEP: you have a set of directories that exist in multiple scopes (system-controlled, per-machine, and per-user, I seem to recall OPENSTEP had one more related to network shares but I don’t remember exactly), and these include Library, which is where configuration and support files live. On macOS, your config should live in ~/Library/Application Support/{your app name / FQDN}, but you don’t actually care about it because Cocoa gives you APIs for looking up directories for specific uses and you just do whatever it tells you.

        BSD systems have /etc/ for global configuration and /usr/local/etc for package configuration and it’s a shame that they never standardised ~/etc for per-user configuration.

        EDIT: No, I’ve misunderstood. They want to put everything in .config in a project repo, which I guess is better than having a load of dot files per config, but doesn’t seem like such a big win.]

        1. 3

          To complete the pattern, system/machine/user/project. This proposal, and the dot meta proclamation that I take this as a response to, aim to address project-specific configuration in the same kind of standardized way. Looks like your trailing edit addresses this.

          As an excuse to try mapping out a debate in some depth, I set up a Discussion, “Should software projects adopt a standard subdirectory for files for configuration, tools, & metadata?” on kialo.com, that others can explore or add to.

      2. 3

        The dot also means that it sorts at the top of a list, which can be a handy way to distinguish two types of files. But I’d argue that this is more useful when you don’t consolidate all the config files into one location (the dot makes almost no sense with a consolidated folder layout).

      3. 3

        I guess it’s .config so it doesn’t conflict with existing config directories[^1]. But I share your point that it should not be hidden. Someone on the 🍊 site suggested to use toolconfig instead, which I think has less potential for naming conflict and is also pretty descriptive.

        [^1]: it’s because it’s inspired by the XDG config directory.

        1. 8

          I would argue that systems shouldn’t make dot-files hidden at all by default, but I’d also argue that .config is a good convention for a directory that holds all configs — it looks distinctive, it carries the connotation of config files (due to long-standing dot-file tradition), and it’s indeed better than lots of dot-files in the root dir.

          Also, the XDG convention is that tools should have subdirs, like .config/git.

          1. 2

            to be really consistent, it should be .local/etc

          2. 1

            IIRC the XDG convention advises using directories when a program has multiple files, but otherwise does not mandate their usage.

    12. 2

      Thanks to work in Go 1.21 cycle, the execution tracer’s run-time overhead was reduced from about -10% throughput and +10% request latency in web services to about 1% in both for most applications.

      This is huge!

    13. 1
    14. 17

      This seems a little weird to me compared to having both the 768p laptop screen and the UHD monitor plugged in, and using the small screen as the target for design work while allowing yourself the luxury of the big screen to get things done.

      But my first principle of keeping users happy is de gustibus non disputandum.

      1. 2

        de gustibus non disputandum

        For those who don’t know Latin, like myself, this translates to:

        In matters of taste, there can be no disputes

    15. 55

      The standard library in Go.

      1. 30

        I have done lots and lots of Java and Python in my career before I used Go. I honestly find the Go stdlib just okay. There is usually something that does the trick, but I am not a super fan. I am also not buying this consistency thing. I deal a lot with strings, unfortunately, and this mix of fmt, strings, strconv, and bytes to do anything with strings is not intuitive. I understand where Go is coming from historically and as a design philosophy, yet I don’t find it that superior.

        (personally I would love to see a language that is a bit more high level/expressive like python, but with the go runtime/deployment model)

      2. 7

        I started a security project specifically because the standard library has high-quality cryptography code like no other language’s.

      3. 13

        You mean the one where you can’t multiply a duration by a number, but you can multiply a duration by a duration?

        1. 3

          Maybe I’m missing something, but dur := time.Hour * 2, as well as dur := 2 * time.Hour compile just fine.

          1. 6

            The literal is being implicitly converted to a duration. Try it with a variable instead of 2.

            1. 5

              Got it. However, that’s not really a limitation of the standard library, but rather a limitation of the language that prevents implicit type casting.

              1. 7

                The point is that mathematically, multiplying a number with a duration should work, whereas multiplying a duration with a duration should not.

                1. 2

                  It never occurred to me that people would expect to be able to multiply an int by a duration and not multiply two durations together. Personally I’m grateful that Go doesn’t implicitly convert ints to durations or vice versa–I suspect this has prevented quite a few bugs.

                  1. -5

                    Have you ever had physics in school? You might want to repeat it.

                    I’m not talking about implicit conversions.

                    1. 3

                      I think the physics repeat remark might be a little heated for this context: we can all take a breath here and try to understand each other.

                      I’m personally of the opinion that multiplying an int by a duration implicitly is a bit of an anti-feature: I expect it to work in loosey-goosey languages like Python or Ruby, I even expect it to work in languages like Rust where the Into trait lets someone, somewhere, define explicitly how the conversion should occur (this starts getting into the realm of the newtype pattern from eg. Haskell), but I don’t expect two disparate types to multiply or add together, no, regardless of what those are.

                      To be extra clear: I think Into is the correct way to solve for the expected ergonomics here, and wish more languages had this type of explicit control.

                      1. 12

                        Well, thing is:

                        • Adding two durations is obviously okay.
                        • So is subtracting two durations.
                        • Negative durations are okay too.
                        • Adding a duration to itself n times is okay.
                        • We just defined multiplication of durations by natural numbers. Therefore it is okay.
                        • Since negative durations are a thing, we can extend this to negative multipliers (signed integers) too.
                        • Actually, multiplication can be extended to real numbers as well.
                        • All real numbers except zero have an inverse, so it’s okay to divide durations by any non-zero number.

                        On the other hand:

                        • It is not okay to add (or subtract) a duration and a number together.
                        • It is not okay to multiply (or divide) a duration by another duration.

                        So if I want to be super-strict with my operations and allow zero implicit conversions, I would have the following functions:

                        seconds s_add(seconds, seconds);
                        seconds s_sub(seconds, seconds);
                        seconds s_mul(seconds, double);
                        seconds s_div(seconds, double);
                        

                        Or if we’re in something like ML or Haskell:

                        s_add : seconds -> seconds -> seconds
                        s_sub : seconds -> seconds -> seconds
                        s_mul : seconds -> real -> seconds
                        s_div : seconds -> real -> seconds
                        

                        Now the binary operators +, -, *, and / are functions just like any other. We can just overload them so they accept the right operands. We have such an overloading even in C: adding two floats together is not the same as adding two integers together at all, but the compiler knows which one you want by looking at the type of the operands. (It also has the evil implicit conversions, but that’s different.)

                        So while a language that allows multiplying a duration by a number looks like it is implicitly converting the number to a duration before performing the multiplication, it absolutely does not. That’s just operator overloading: because you really want to multiply durations by ordinary numbers. And since multiplying two durations together makes no sense, you should get an error from your compiler if you try it.

                      2. 7

                        Again, multiplying a duration by a number is not “loosey-goosey”. Multiplying a duration by a duration is “loosey-goosey”, unless the result is a duration squared, which it isn’t.

                        1. 1

                          I think it depends on what you believe types are for—are they exactly units, or are they constraints (or both)?

                          1. 3

                            No matter if you treat types as units or constraints, you want to have operations that make sense. Multiplying 3 seconds by 5 hours doesn’t mean anything (except in the context of physics, where it can be an intermediate value).

                            1. 1

                              Agreed that you want operations that make sense, but if you think of types as units, then you probably want to be able to multiply ints and other types. If you think of them as constraints (especially for avoiding bugs) you probably don’t want to be able to multiply ints and arbitrary types. Personally, I’m more concerned with avoiding bugs rather than a strict adherence to mathematicians’ semantic preferences. There’s nothing fundamentally wrong with the latter, but it seems likely to produce more bugs.

                              1. 3

                                How exactly does allowing durations to be multiplied with each other, while not allowing them to be multiplied by integers, allow you to prevent bugs? If anything, I’d say it can introduce bugs.

                                you probably don’t want to be able to multiply ints and arbitrary types.

                                Where did I say anything about multiplying integers with arbitrary types?

                                1. 1

                                  How exactly does allowing durations to be multiplied with each other, while not allowing them to be multiplied by integers, allow you to prevent bugs?

                                  It means we can’t accidentally multiply a duration field by some integer ID field (or a field of some other integer type). In general it stands to reason that the more precise you are about your types, the less likely you are to have bugs in which you mix two things that ought not to have been mixed, and Duration is a more precise type than int. I’m not familiar with any bugs arising from being too precise with types, and even if they exist I suspect they are rarer than the inverse.

                                  Where did I say anything about multiplying integers with arbitrary types?

                                  Presumably you aren’t advocating a type system that makes a special exception for durations and ints, right? Feel free to elaborate about what exactly you’re advocating rather than making us guess. :)

                                  1. 3

                                    It means we can’t accidentally multiply a duration field by some integer ID field

                                    That’s why you use a different type for the ID.

                                    In general it stands to reason that the more precise you are about your types, the less likely you are to have bugs in which you mixed two things that ought not have been mixed, and Duration is a more precise type than int.

                                    Preciseness is only good when the typing is correct.

                                    Presumably you aren’t advocating a type system that makes a special exception for durations and ints, right?

                                    No, I’m advocating for a system that allows you to define multiplication however it makes sense. Like in Python. Or Nim. Or even C++, though C++ is partially weakly typed because of its C heritage.

                                    1. 1

                                      That’s why you use a different type for the ID.

                                      I agree, I’m advocating for precise types. But in any case you seem to be okay with “untyped” ints for quantities/coefficients so we can use the example of mixing up coefficients of durations with coefficients of some other concept.

                                      Preciseness is only good when the typing is correct.

                                      Agreed, and Go gets the typing correct, because types aren’t units. 👍

                                      No, I’m advocating for a system that allows you to define multiplication however it makes sense. Like in Python. Or Nim. Or even C++, though C++ is partially weakly typed because of the C heritage.

                                      My background is in C++ and Python. Very little good comes out of operator overloading, but it opens the door for all kinds of clever stuff. For example, SQLAlchemy overloads operators (such as ==) to allow for a cutesy DSL, but a bug was introduced when someone tried to use a variable of that type in a conditional. I’ve never heard of bugs resulting from a lack of overloading, and it’s easy to work around by defining a Multiply() function that takes your preferred type. No surprises, precise, and correct. 💯

                                      Moreover, the canonical datetime libraries for C++ and Python don’t give you back “DurationSquared” when you multiply two durations, nor do they allow you to divide a distance by a duration to get a Speed, because types aren’t units–you could overload the multiplication operator to support duration * duration or overload the division operator to support distance / duration, but you have to model that for every combination of types (at least in mainstream languages like C++, Python, Go, etc) and for no benefit that I’m able to discern (apart from “to allow certain types to behave sort of like units”, which doesn’t seem like a meaningful goal unto itself).

                      3. 3

                        In rust

                        • The Into trait does not do automatic coercion. The Deref trait does under the right circumstances, but you shouldn’t use it to do so here (it’s really just meant for “smart pointers”, though there is no good definition of what a smart pointer is other than “something that implements Deref”).

                        • Traits like Add and Mul take both a lhs and a rhs type. For Add those would both be Duration. For Mul I would strongly expect it to take a Duration on one side and an int/float on the other.

                        Multiplying two durations together makes little sense. What is “2 seconds * 10 seconds”? Units-wise I get “20 seconds^2” (which unfortunately most type systems don’t represent well). Physical-interpretation-wise, time^2 is mostly just an intermediate value, but you could visualize time as distances (e.g. with light seconds), in which case it would be an area. Or alternatively you might notice that if you divide distance by it you get an acceleration (m/s^2). What it definitely isn’t is a duration.

                        Multiplying a duration by a unit-less quantity (like an integer), on the other hand, makes perfect sense: “2 seconds * 10” is an amount of time 10 times as long. Hence I would expect Duration to implement Mul with an int on the other side.

                    2. 1

                      Sorry, my remark wasn’t meant to be provocative. I’ve just spent so much more time in the programming world than the math or physics worlds, hence “it never occurred to me”.

        2. 2

          You can find rough edges in every language and every (standard) library. This is unfortunately a fact of developer life.

          1. 2

            I wouldn’t call this a rough edge, but a fundamental flaw of the type system.

        3. 2

          In retrospect, time.Duration should have been an opaque type and not a named version of int64 (as the person who added Duration.Abs(), I’m well aware of the pitfalls), but there are no mainstream languages with the ability to multiply variables of type feet by feet and get a type square feet result, so I wouldn’t blame Go for that particularly.

          1. 2

            there are no mainstream languages with the ability to multiply variables of type feet by feet and get a type square feet result, so I wouldn’t blame Go for that particularly.

            Well yes, but it should be an error.

            1. 1

              There are also no mainstream languages where it’s an error. I agree though that it should have been type Duration struct { int64 } which would have prevented all arithmetic on durations.

              1. 2

                It’s an error in rust

                fn main() {
                    let dt = std::time::Duration::from_secs(1);
                    dt * 2; // ok ("warning: unused arithmetic operation" technically)
                    dt * dt; // error: mismatched types. Expected `u32` found struct `Duration` (pointing at second operand)
                }
                

                https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=bfd82c951af32f237ff3fcd568be5f75

                1. 1

                  In Rust, Duration is a struct, not a bare int. The multiplication works through operator overloading, which allows u32 but not another Duration. I take the point that this is better than Go.

                  As I said above, it would be better if in Go Duration were type Duration struct { int64 }. Go doesn’t have operator overloading, so you wouldn’t be able to multiply at all, but you’d have to use methods, like d.Mul(3) etc. It would be worth it though because then those could saturate when they overflow instead of wrapping around. It’s a minor language wart.

              2. 2

                Python does it correctly.

                from datetime import timedelta
                hour = timedelta(hours=1)
                print(hour)
                print(3*hour)
                print(hour*hour)
                

                Output:

                1:00:00
                3:00:00
                TypeError: unsupported operand type(s) for *: 'datetime.timedelta' and 'datetime.timedelta'
                

                Attempt This Online!

                1. 2

                  Yes, but Python itself does not use timedelta, which sucks.

                  >>> time.sleep(datetime.timedelta(0,0,1))
                  TypeError: 'datetime.timedelta' object cannot be interpreted as an integer
                  
      4. 1

        I come from python world but would love to hear more about what’s so special about Go standard library

        1. 4

          I would say the quality is more consistent across modules than Python’s, which feels like it evolved more organically. There are also some foundational concepts, like a stream of bytes, that are simpler and more composable than file-like objects in Python.

          1. 2

            Python is 3 times the age of Go. 20 years from now I hope we say the Go stdlib is as consistent as it is now. I also hope to be retired by then, and not terminated before then.

            1. 3

              I remember when people were saying this about Go 5 and 10 years ago. It’s been almost 15 years, and Go has done very well at consistency in its stdlib and elsewhere. When Python was nearly 15, its standard library was already a mess; Go had the benefit of hindsight, specifically with respect to how “kitchen sink” languages like C++ turned out.

              1. 1

                Some of the worst things in Go’s standard library are old RPC formats and container types, but there’s not too much of it.

        2. 2

          It’s better organized and more cohesive. For example, what’s the difference between import os and import sys? I really couldn’t tell you. You just have to memorize it.

          The json module has load and loads, but I’m not aware of any other packages that follow that convention for file-like vs string input. Anyway, why not just have one function take both and branch on the type? Go is more consistent about using their file-like types (io.Reader/Writer) everywhere.

          Time in Python is 💩. No one ever wants a time without a time zone! Go libraries all take a time.Duration instead of this one taking seconds and that one taking milliseconds. Python has a timedelta type but no one uses it.

          The various urllibs are all strictly worse than their Go equivalents.

          Python does have better itertools and collections though. Now that Go has generics, hopefully, those will get ported over too.

          1. 2

            Honestly, I hate the whole datetime.datetime.now() thing. I also don’t love that Python is so inconsistent with its casing conventions (e.g., testing uses assertEqual, datetime uses timedelta, and most other things use UpperCamelCase for types and snake_case for functions and methods).

            Also, I’ve done a lot of subprocessing in Python and I still have to consult the docs every single time for subprocess.run() and friends–the arguments are just dizzying.

            Also, despite being a very dynamic language, Python doesn’t have anything as convenient as Go’s json.Marshal()–you have to write to_json() methods on every class that return a JSON-like dict structure (and to be quite clear, I have grievances with Go’s JSON package).

            Similarly, Python’s standard HTTP libraries are more tedious than those in Go–the canonical advice is just to import requests or similar, but this is a pain for simple scripts (e.g., I now have a build step for my AWS Lambda function, which pretty much erases the benefit of using Python over Go for Lambda in the first place).

            These are just a few examples of issues with the Python stdlib off the top of my head, but there are lots more :)

    16. 1

      I vaguely remember a microservice conglomerate at a previous employer, where every service was named after some character from the Marvel universe. I’ll take auth-service any time over Kratos.

    17. 18

      As a lisper, I would disagree with the basic premise that “shells” or programming environments cannot have a good REPL and be a good programming language.

      The solution is that we have one tool, but there are two things, and so there should be two tools.

      At risk of being obvious; “Things are the way they are for reasons.” These two tools will exist on day 1 and on day 2 somebody will make sure both of them are scriptable, because being scriptable is the killer feature of shells. It’s what makes shells useful. Even DOS had a scriptable shell, it’s that great a feature.

      1. 13

        As a lisper, I would disagree with the basic premise that “shells” or programming environments cannot have a good REPL and be a good programming language.

        The original article says that (good?) programming languages require “readable and maintainable syntax, static types, modules, visibility, declarations, explicit configuration rather than implicit conventions.”

        As a non-lisper, I’ve never found Lisp syntax readable or maintainable. The books I’ve read, and the Lispers I’ve known, all swore that it would all click for me at some point, but nope. I get lost in all the parentheses and identical looking code.

        As a random example, here’s a function I found online that merges two lists:

        (defun myappend (L1 L2)
           (cond
              ; error checking
              ((not (listp L1)) (format t "cannot append, first argument is not a list~%" L1))
              ((not (listp L2)) (format t "cannot append, second argument is not a list~%" L2))
              ; base cases
              ((null L1) L2)
              ((null L2) L1)
              ; general case, neither list is empty
              (t (cons (car L1) (myappend (cdr L1) L2)))))
        

        I would put out my eyes if I had to read code like that for more than a day. The last line with its crystal clear ))))) is the kicker. I know that this may be (largely? entirely?) subjective, but I think it’s a relatively common point of view.

        1. 24

          Admittedly there are a lot more people who routinely see

                  f(g(h(a)));
                }
              }
            }
          }
          

          and feel everything is fine.

          Yes, feeling is subjective and is fine.

          1. 20

            you are kind. It is often:

                  };])))();}}])
            
            1. 4

              You have an extra ; in there!

          2. 3

            I’d not let that pass in a code review 😜

        2. 11

          Completely subjective and entirely valid.

        3. 7

          My problem with that code isn’t the parens per se. It’s that you as the reader have to already know or be able to infer what is evaluated and what is not. defun is a primitive function and it’s really being called, but myappend and L1 and L2 are just symbols being defined. cond is a function and is really called, but the arguments to cond are not evaluated. However, cond does execute the first item of each list it is passed and evaluate that to figure out which branch is true. Presumably cond is lazy and only evaluates until it reaches its first true condition. format I guess is a function, but I have no idea what t is or where it comes from. (Maybe it’s just true?) null is a primitive symbol, but in this case, maybe it’s not a value, it’s a function that tests for equality? Or maybe cond handles elements with two items differently than elements with one item? cons, car, and cdr are functions and they are really evaluated when their case is true…
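          The evaluation rule being guessed at here can be sketched in Python. This is a rough analogue with made-up names, not how any real Lisp implements cond (which is a macro): predicates and bodies are wrapped in thunks, so nothing runs until cond itself decides to.

          ```python
          # Rough Python analogue of Lisp's cond. Predicates and bodies are
          # zero-argument functions (thunks), so nothing is evaluated until
          # cond itself decides to. Names are made up for illustration.
          def cond(*clauses):
              for predicate, body in clauses:
                  if predicate():      # predicates are evaluated lazily, in order
                      return body()    # only the first matching body runs
              return None              # no clause matched

          result = cond(
              (lambda: False, lambda: "never evaluated"),
              (lambda: True,  lambda: "first true clause wins"),
              (lambda: True,  lambda: "also never evaluated"),
          )
          # result == "first true clause wins"
          ```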

          Anyhow, you can work it out with some guesswork and domain knowledge, but having syntax that distinguishes these things would be much more clear:

          defun :myappend [:L1 :L2] {
             cond [
                [{not (listp L1)} {format t "cannot append, first argument is not a list~%" L1}]
                [{isNull L1} {L2}]
                [{isNull L2} {L1}]
                [{true} {cons (car L1) (myappend (cdr L1) L2)}]
             ]
          }
          
          1. 10

            This is something Clojure improves on compared to traditional Lisps: it uses [] rather than () for grouping (see defn).

          2. 7

            I don’t love the parentheses, but they’re not my biggest stumbling block with Lisp either. I find it hard to read Lisp because everything seems the same—flat somehow. Maybe that’s because there’s so little visual, explicit syntax. In that case, (maybe?) we’re back to parentheses. I’m not sure.

            1. 7

              It’s because you’re reading the abstract syntax tree almost literally. In most languages, syntactic patterns create structure, emphasis, guidance. What that structure costs is that it can be a wall in your way when you’d prefer to go against the grain. In Lisp, there is almost no grain or limitation, so you have surprising power. In exchange, structure is your job.

              1. 3

                Thanks: you’ve described the flatness (and its cause) much better than I could have. (I also appreciate your adding a benefit that you get from the lack of given or required structure.)

          3. 3
            def myappend(l1, l2):
                if not isinstance(l1, list):
                    pass
                elif …
            

            nothing surprising with “myappend”, “l1” and “l2” and no need for a different syntax^^

            cond is a macro. Admittedly it’s better to know that, but we can infer it from the syntax. So, indeed, its arguments are not evaluated. They are processed in order as you described.

            (format t "~a" arg) (t is for true, indeed) prints arg to standard output. (format nil …) creates a string.

            we can replace car with first and cdr with rest.

        4. 4

          So, I don’t know that Lisp would necessarily click for any given person, but I did find that Janet clicked things in my brain in a way few languages have since.

          I built a lot of fluency with it pretty quickly, in part because it was rather small, but it had most of what I wanted in a scripting language.

          That doesn’t apply to Common Lisp, though, which is both large and with more than a few archaic practices.

          That being said, a lot of folks bounce off, and a lot of the things that used to be near exclusive to Lisp can be found elsewhere. For me, the simplicity of a smaller, postmodern parens language is the appeal at this point (I happen to be making my own, heh)

          1. 3

            things that used to be near exclusive to Lisp can be found elsewhere.

            some things yes, but never all together, let alone the interactivity of the image-based development!

            1. 2

              I mean, I think Factor is an example of all of that coming together in something distinctly not a Lisp.

        5. 3

          Subjective. BTW, Lisp ticks the other boxes; my personal focus is on “maintainable”.

          To merge two lists: use append.

          Here’s the code formatted with more indentation (the right way):

          (defun myappend (L1 L2)
             (cond
               ;; error checking
               ((not (listp L1))
                (format t "cannot append, first argument is not a list~%" L1))
               ((not (listp L2))
                (format t "cannot append, second argument is not a list~%" L2))
               ;; base cases
               ((null L1)
                L2)
               ((null L2)
                L1)
               ;; general case, neither list is empty
               (t
                (cons (car L1) (myappend (cdr L1) L2)))))
          

          Did you learn the cond macro? Minimal knowledge is required to read Lisp as with any other language.

          1. 6

            To merge two lists: use append.

            Sure, but what I posted is (pretty obviously) teaching code. The function is called myappend because (presumably) the teacher has told students, “Here is how we might write append if it didn’t exist.”

            Here’s the code formatted with more indentation (the right way)

            Thank you, but that did nothing to make the code more readable to me. (See below on “to me.”)

            subjective…Did you learn the cond macro? Minimal knowledge is required to read Lisp as with any other language.

            I’m not sure, but you seem to be using “subjective” as a way of saying “wrong.” Or, to put this in another way, you seem to want to correct my (and Carl’s) subjective views. As a general rule, I don’t recommend that, but it’s your choice.

            I’m glad you enjoy Lisp and find it productive. But you may have missed my point. My point was that I—and many people—do not find Lisp readable or maintainable. (It’s hard to maintain what you find unpleasant and difficult to read, after all.) I wasn’t saying you or anyone else should change your subjective view; I was just stating mine. To quote jyx, “Yes, feeling is subjective and is fine.” I didn’t mean to pick on something you love. I was questioning how widely Lisp can play the role of great REPL plus readable, maintainable programming language.

            1. 5

              hey, right, I now find my wording a bit rigid. My comment was more for other readers. We read a lot of Lisp FUD, so sometimes I try to show that the Lisp world is… normal, once you know a few rules (which some busy people expect to already know because they know a C-like language).

              To merge two lists: use append.

              rewording: “dear newcomer, be aware that Lisp also has a built-in for this”. I really mean to say it, because too often we see weird teaching code that does basic things. These examples always bugged me before; now I give hints to my past self.

              My point was that I—and many people—do not find Lisp readable

              OK, no problem! However I want to encourage newcomers to learn and practice a little before judging or dismissing the language. For me too it was weird to see Lisp code at the beginning. But with a little practice the syntax fades away. It’s only syntax; there is so much more by which to judge a language. I wish we talked less about parens, but this holds for any other language when we stop at the superficial syntax.

              or maintainable

              but truly, despite one’s tastes, Lisp is maintainable! The language and the ecosystem are stable, some language features and tooling explicitly help, etc.

    18. 14

      Storing your Dropbox folder on an external drive is no longer supported by macOS.

      As someone who has used Windows, macOS, Linux, and FreeBSD extensively as professional desktop OSs I still don’t understand the love so many hackers have for Apple kit.

      1. 48

        Because not everyone needs the same features as you. I like that MacOS behaves close enough to a Linux shell, but with a quality GUI. I particularly like emacs bindings in all GUI text fields, and the ctrl/cmd key separation that makes terminals so much nicer to use. I like the out-of-the-box working drivers, without having to consult Wikis about which brands have working Linux drivers. I like the hardware, which is best in class by all metrics that matter to me, especially with Apple Silicon. I like the iPhone integration, because I use my iPhone a lot. I like AppleScript, and never bothered to learn AutoHotKey. I like that my MacBook wakes up from sleep before I can even see the display. I like the massive trackpad, which gives me plenty of space to move around the mouse. I like Apple Music and Apple Photos and Apple TV, which work flawlessly, and stream to my sound system running shairport-sync. I like Dash for docs, which has an okay-ish Linux port but definitely not the first class experience you get on MacOS. I like working gestures and consistent hotkeys, tightly controlled by Apple’s app design guidelines. I like that I can configure caps lock -> escape in the native keyboard settings, without remembering the X command or figuring out Wayland or installing some Windows thing that deeply penetrates my kernel.

        I use Linux for servers. I have a Windows gaming PC that hangs on restart or shut down indefinitely until you cut power, and I don’t care enough to fix it because it technically still functions as a gaming PC. But for everything else I use MacOS, and I flat out refuse to do anything else.

      2. 25

        As someone who ran various Linux distros as my main desktop OS for many years, I understand exactly why so many developers use Apple products: the quality of life improvement is staggeringly huge.

        And to be honest, the longer I work as a programmer the more I find myself not caring about this stuff. Apple has demonstrated, in my opinion, pretty good judgment for what really matters and what’s an ignorable edge case, and for walking back when they make a mistake (like fixing the MBP keyboards and bringing back some of the removed ports).

        You can still boot Linux or FreeBSD or whatever you want and spend your life customizing everything down to the tiniest detail. I don’t want to do that anymore, and Apple is a vendor which caters to my use case.

      3. 14

        I am a macOS desktop user and I like this change. Sure, it comes with more limitations, but I think it is a large improvement over having companies like Dropbox and Microsoft (Onedrive) running code in kernel-land to support on-demand access.

        That said, I use Maestral, the Dropbox client has become too bloated, shipping a browser engine, etc.

        1. 6

          I don’t follow why the - good! - move to eliminate the need for kernel extensions necessitates the deprecation of external drives though.

          1. 23

            I’m not a Dropbox user so I’ve never bothered to analyse how their kext worked. But based on my own development experience in this area, I assume it probably used the kauth kernel API (or perhaps the never-officially-public-in-the-first-place MAC framework API) to hook file accesses before they happened, download file contents in the background, then allow the file operation to proceed. I expect OneDrive and Dropbox got special permission to use those APIs for longer than the rest of us.

            As I understand it, Apple’s issue with such APIs is twofold:

            • Flaws in kernel code generally tend to lead to higher-severity security vulnerabilities, so they don’t want 3rd party developers introducing them. (They keep adding plenty of their own kernel space drivers though, presumably because of the limitations of the user space APIs they’ve provided. And because Apple’s own developers can of course be trusted to write flawless kernel code.)
            • (Ab)uses of kernel APIs like Kauth for round-tripping kernel hooks to user space lead to priority inversion, which in turn can lead to poor performance or hangs.

            These aren’t unreasonable concerns, although the fact they’re still writing large amounts of kernel code themselves (with the vulnerabilities and panic bugs that come with it) somewhat weakens their argument.

            So far, they’ve been deprecating (and shortly after, hard-disabling) kernel APIs and replacing them with user-space based APIs which only implement a small subset of what’s possible with the kernel API. To an extent, that’s to be expected. Unrestricted kernel code is always going to be more powerful than a user space API. However, one gets the impression that the kernel API deprecations happen faster than the user space replacements can mature.

            In this specific case, NSFileProvider has a long and chequered history. Kauth was one of the very first kernel APIs Apple deprecated, back on macOS 10.15. It became entirely unavailable for us plebs on macOS 11, the very next major release. Kauth was never designed to be a virtual file system API, but rather an authorisation API: kexts could determine if a process should be allowed to perform certain actions, mainly file operations. This happened in the form of callback functions into the kext, in the kernel context of the thread of the user process performing the operation.

            Unfortunately it wasn’t very good at being an authorisation system, as it was (a) not very granular and (b) full of gaping holes, because certain accesses simply didn’t trigger a kauth callback. (Many years ago, around the 10.7-10.9 days, I was hired to work on some security software that transparently spawned sandboxed micro VMs for opening potentially-suspect files, and denied access to such files to regular host processes; for this, we actually tried to use kauth for its intended purpose, but it just wasn’t a very well thought-out API. I don’t think any of Apple’s own software uses it, which really is all you need to know - all of that, sandboxing, AMFI (code signing entitlements), file quarantine, etc. uses the MAC framework, which we eventually ended up using too, although the Mac version of the product was eventually discontinued.)

            Kauth also isn’t a good virtual file system API (lazily providing file content on access atop the regular file system) but it was the only public API that could be (ab)used for this purpose. So as long as the callback into the kext didn’t return, the user process did not make progress. During this time, the kext (or more commonly a helper process in user space) could do other things, such as filling the placeholder file with its true content, thus implementing a virtual file system. The vfs kernel API on the other hand, at least its publicly exported subset, is only suitable for implementing pure “classic” file systems atop block devices or network-like mounts. NSFileProvider was around for a few years on iOS before macOS and used for the Usual File Cloud Suspects. Reports of problems with Google Drive or MS OneDrive on iOS continue to this day. With the 10.15 beta SDK, at the same time as deprecating kauth, everyone was supposed to switch over to EndpointSecurity or NSFileProvider on macOS too. NSFileProvider dropped out of the public release of macOS 10.15 because it was so shoddy, though. Apple still went ahead and disabled kauth-based kexts on macOS 11. (EndpointSecurity was also not exactly a smooth transition: you have to ask Apple for special code signing entitlements to use the framework, and they basically ignored a load of developers who did apply for them. Some persevered and eventually got access to the entitlement after more than a year. I assume many just didn’t bother. I assume this is Apple’s idea of driving innovation on their platforms.)

            Anyway, NSFileProvider did eventually ship on macOS too (in a slightly different form than during the 10.15 betas) but it works very differently than kauth did. It is an approximation of an actual virtual file system API. Because it originally came from iOS, where the UNIXy file system is not user-visible, it doesn’t really match the way power users use the file system on macOS: all of its “mount points” are squirrelled away somewhere in a hidden directory. At least back on the 10.15 betas it had massive performance problems. (Around the 10.14 timeframe I was hired to help out with a Mac port of VFSforGit, which originally, and successfully, used kauth. With that API being deprecated, we investigated using NSFileProvider, but aside from the mount point location issue, it couldn’t get anywhere near the performance the kauth API had delivered, which VFSforGit’s intended purpose demanded: lazily cloning git repos with hundreds of thousands of files. The Mac port of VFSforGit was subsequently cancelled, as there was no reasonable forward-looking API with which to implement it.)

            So to come back to your point: these limitations aren’t in any way a technical necessity. Apple’s culture of how they build their platforms has become a very two-tier affair: Apple’s internal developers get the shiny high performance, powerful APIs. 3rd party developers get access to some afterthought bolt-on chicken feed that’s not been dogfooded and that you’re somehow supposed to plan and implement a product around during a 3-4 month beta phase, the first 1-2 months of which are the only window in which you stand any sort of slim chance of getting huge problems with these APIs fixed. Even tech behemoths like Microsoft don’t seem to be able to influence public APIs much via Apple’s Developer Relations.

            At least on the file system front, an improvement might be on the horizon. As of macOS 13, Apple has implemented some file systems (FAT, ExFAT and NTFS I think) in user space, via a new user space file system mechanism. That mechanism is not a public API at this time. Perhaps it one day will be. If it does, the questions will of course be whether

            1. 3rd party developers actually get to use it without jumping through opaque hoops.
            2. The public API actually is what Apple’s own file system implementations get to use, or whether it’s once again a poor quality knock-off.
            3. It can actually get close to competing in terms of features and performance compared to in-kernel vfs. I’ve not looked for comparative benchmarks (or performed any of my own) on how the new user space file systems compare to their previous kernel-based implementations, or the (more or less) highly-optimised first-tier file systems APFS and HFS+. (FAT and ExFAT are hardly first-tier in macOS, and NTFS support is even read-only.)

            (The vfs subsystem could be used for implementing a virtual file system if you had access to some private APIs - indeed, macOS contains a union file system which is used in the recovery environment/OS installer - so there’s no reason Apple couldn’t export features for implementing a virtual file system to user space, even if they don’t do so in the current kernel vfs API.)

      4. 13

        Part of that may also be the hardware, not the software.

        My datapoint: I’ve never really liked macOS, and tried to upgrade away from a MacBook to a “PC” laptop (to run KDE on Linux) two years ago. But after some research, I concluded that - I still can’t believe I’m saying this - the M1 MacBook Air had the best value for money. All “PC” laptops at the same price are inferior in terms of both performance and battery life, and usually build quality too (but that’s somewhat subjective).

        I believe the hardware situation is largely the same today, and will remain so until “PC” laptops are able to move to ARM.

        macOS itself is… tolerable. It has exactly one clear advantage over Linux desktop environments, which is that it has working fonts and HiDPI everywhere - you may think these are just niceties, but they are quite important for me as a Chinese speaker as Chinese on a pre-HiDPI screen is either ugly or entirely unreadable. My pet peeve is that the dock doesn’t allow you to easily switch between windows [1] but I fixed that with Contexts. There are more solutions today than there were 2 years ago.

        [1] macOS’s Dock only switches between apps, so if you have multiple windows of the same app you have to click multiple times. It also shows all docked apps, so you have to carefully find open apps among them. I know there’s Expose, but dancing with the Trackpad to just switch to a window gets old really fast.

        1. 13

          [macOS] has exactly one clear advantage over Linux desktop environments, which is that it has working fonts and HiDPI everywhere

          Exactly one? I count a bunch, including (but not limited to) better power management, better support for external displays, better support for Bluetooth accessories, better and more user-friendly wifi/network setup… is your experience with Linux better in these areas?

          My pet peeve is that the dock doesn’t allow you to easily switch between windows

          Command+~ switches between windows of the active application.

          1. 3

            better power management

            This is one area where I’d concede macOS is better for most people, but not for me. I’m familiar enough with how to configure power management on Linux, and it offers many more options (sometimes depending on driver support). Mac does have good power management out of the box, but it requires third-party tools to do what I consider basic functions, like limiting the maximum charge.

            The M1 MBA I have now has superior battery life but that comes from the hardware.

            better support for external displays

            I’ve not had issues with support for external monitors using KDE on my work laptop.

            The MBA supports exactly one external display, and Apple removed font anti-aliasing, so I have to live with super big fonts on external displays. I know the Apple solution is to buy a more expensive laptop and a more expensive monitor, so it’s my problem.

            better support for Bluetooth accessories

            Bluetooth seems to suck the same everywhere; I haven’t noticed any difference on Mac - ages to connect, random dropping of inputs. Maybe it works better with Apple’s accessories, of which I don’t have any, so it’s probably also my problem.

            better and more user-friendly wifi/network setup

            KDE’s wifi and network management is as intuitive. GNOME’s NetworkManager GUI used to suck, but even that has got better these days.

            Command+~ switches between windows of the active application.

            I know, but

            • Having to think about whether I’m switching to a different app or in the same app is really not how my brain works.
            • It’s still tedious when you have 3 or more windows that look similar (think terminals) and you have to pause after switching to ensure you’re in the correct window.
            • I really just want to use my mouse (or trackpad) to do this kind of GUI task.

            I’ve used a Mac for 6 years as my personal laptop and have been using Linux on my work laptop.

            Back then I would agree that macOS (still OS X then) was much nicer than any DE on Linux. But Linux DEs have caught up (I’ve mainly used KDE but even GNOME is decent today), while to an end user like me, all macOS seems to have done is (1) look more like iOS (nice in some cases, terrible in others) and (2) get really buggy every few releases and return to an acceptable level over the next few versions. I only chose to stay on a Mac because of the hardware; their OS has lost its appeal to me except for font rendering and HiDPI.

            1. 3

              better support for Bluetooth accessories

              A bluetooth headset can be connected in two (or more) modes: A2DP, with high quality stereo audio but no microphone channel, and headset mode, which has low quality audio but a microphone channel. On macOS the mode is switched automatically whenever I join or leave a meeting; on Linux this was always a manual task that most of the time didn’t even work, e.g. because the headset was stuck in one of the modes. I can’t remember having a single issue with a bluetooth headset on macOS, but I can remember many hours of debugging pulseaudio or pipewire just to get some sound over bluetooth.

        2. 3

          My pet peeve is that the dock doesn’t allow you to easily switch between windows

          It sounds like you found a third-party app you like, but for anyone else who’s annoyed by this, you may find this keyboard shortcut helpful: when you’re in the Command-Tab app switcher, you can type Command-Down Arrow to see the individual windows of the selected app. Then you can use the Left and Right Arrow keys to select a window, and press Return to switch to that window.

          This is a little fiddly mechanically, so here’s a more detailed explanation:

          1. Press the Command key and keep it held down for the rest of this process.
          2. Type Tab to enter the app switcher.
          3. Use the Tab and Backtick keys to select an app. (Tab moves the selection to the right; Backtick moves it to the left.)
          4. Press the Down Arrow key to enter “app windows” mode.
          5. Press any of the arrow keys to activate the focus ring. I’m not really sure why this step is necessary.
          6. Use Left and Right Arrow keys to select the window you’re interested in.
          7. Press Return to open the window.
          8. Now you can release the Command key. (You can actually do this any time after step 4.)

          (On my U.S. keyboard, the Backtick key is directly above Tab. I’m not sure how and whether these shortcuts are different on different keyboard layouts.)

          This seems ridiculous when I write it all out like this, but once you get it in your muscle memory it’s pretty quick, and it definitely feels faster than moving your hand to your mouse or trackpad to switch windows. (Who knows whether it’s actually faster.)

          1. 2

            Thanks, I’ve tried this before, but my issue is that this process involves a lot of hand-eye loops (look at something, decide what to do, do it). On the other hand, if I have a list of open windows, there is exactly one loop: find my window, move the mouse, and click.

            I hope people whose brain doesn’t work like mine find this useful though :)

            1. 2

              It’s kind of nutty, but my fix for window switching has been to set up a window switcher in Hammerspoon: https://gist.github.com/jyc/fdf5962977943ccc69e44f8ddc00a168

              I press alt-tab to get a list of windows by name, and can switch to window #n in the list using cmd-n. Looks like this: https://jyc-static.com/9526b5866bb195e636061ffd625b4be4093a929115c2a0b6ed3125eebe00ef20

              1. 2

                Thanks for posting this! I have an old macbook that I never really used OSX on because I hated the window management. I gave it another serious try after seeing your post and I’m finding it much easier this time around.

                I ended up using https://alt-tab-macos.netlify.app over your alt tab, but I am using Hammerspoon for other stuff. In particular hs.application.launchOrFocus is pretty much the win+1 etc hotkeys on Windows.

        3. 3

          Once you factor in the longevity of mac laptops vs pcs, the value proposition becomes even more striking. I think this is particularly true at the pro level.

      5. 4

        I use both. But to be honest, on the Linux side I use KDE plasma and disable everything and use a thin taskbar at the top and drop all the other stuff out of it and use mostly the same tools I use on macOS (neovim, IntelliJ, Firefox, etc…).

        Which is two extremes. I’m willing to use Linux so stripped down in terms of GUI that I don’t have to deal with most GUIs at all other than ones that are consistent because they’re not using the OS GUI framework or macOS.

        There’s no in-between. I don’t like Ubuntu desktop, or GNOME, or any of the other systems. On macOS I am happy to use the GUIs; they’re consistent and, for the most part, just work. And I’ve been using Linux since they started mailing it out on discs.

        I can’t tell you exactly why I’m happy to use macOS GUIs but not Linux based GUIs, but there is something clearly not right (specifically for me to be clear; everyone’s different) that causes me to tend to shun Linux GUIs altogether.

      6. 4

        If I cared about hacking around with the OS (at any level up to the desktop) or the hardware, I wouldn’t do it on Apple kit, but I also wouldn’t do it on what I use every day to enable me to get stuff done, so I’d still have the Apple kit for that.

      7. 1

        Go ahead. Kick the hornets’ nest.

    19. 2

      One downside of ULID and UUID v7 is that you can also get bottlenecked in the database, because multiple entries may land right beside each other (many entries created at the same time). Fully random IDs may be on very different memory pages (or their disk-backed equivalents), so concurrency can be better.

      Still I personally prefer v7, because for my workload the random memory layout of UUID v4 would slow me down.
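      The locality difference can be sketched in Python. The ulid_like helper below is a simplified stand-in (a 48-bit millisecond timestamp followed by 80 random bits), not a spec-compliant ULID or UUIDv7, but it shows the point: time-prefixed IDs created in sequence are already sorted, so a B-tree index keeps appending to the same “hot” rightmost pages, while v4 IDs scatter inserts across the whole key space.

      ```python
      import os
      import time
      import uuid

      def ulid_like() -> bytes:
          # Simplified time-ordered ID: 48-bit millisecond timestamp
          # followed by 80 random bits (not a spec-compliant ULID/UUIDv7).
          ms = int(time.time() * 1000)
          return ms.to_bytes(6, "big") + os.urandom(10)

      ordered = []
      for _ in range(5):
          ordered.append(ulid_like())
          time.sleep(0.002)  # ensure distinct timestamps for the demo

      random_ids = [uuid.uuid4().bytes for _ in range(5)]

      # IDs generated in sequence come out in sorted order...
      assert sorted(ordered) == ordered
      # ...while the sorted order of v4 IDs is unrelated to creation order.
      ```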

      1. 5

        For anyone curious, this percona article discusses the performance implications of using UUIDs as row keys.

        1. 1

          If you are using random UUID values as primary keys, your performance is limited by the amount of memory you can afford

          Their performance benchmark is definitely something

    20. 2

      Creator of https://www.ulidtools.com here - nice site! ULIDS ftw.

      1. 2

        Are you sure that these strings are UUIDv4?

        01859DB9-6B25-D56C-588A-F72D574F5A18
        01859DBA-7172-6F88-78CE-52F5D447B79F
        01859DBA-8F2C-D57A-8ED8-F1519161A51C
        01859DBA-BCDD-F392-6FF9-66A47252644B
        
        1. 2

          For anyone who’s wondering what makes them not UUIDv4s, I guess it’s because the first part, 01859DB9-, does not look very random. A UUIDv4, however, is completely random, like:

          $ for x in $(seq 5); do uuidgen; done
          F61DCF92-863B-4250-98EA-16E417311539
          F6411104-AA7F-47D9-B4F2-D6A2B96834B5
          A5D25324-E8B6-478F-B626-28ACC7683988
          27A8CB5A-2307-467E-A076-8EF5D7C05ED8
          D1F034B1-C2B2-43C7-B4C1-DE7550E86FEF
          
          1. 7

            The issue is that the version/variant bits aren’t correct for UUIDv4. You can see this when you decode them: https://www.uuidtools.com/decode
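            Without the site at hand, Python’s stdlib uuid module can do the same check. The first string from the list above doesn’t even carry RFC 4122 variant bits, so Python reports no version for it at all:

            ```python
            import uuid

            # One of the suspect strings from above vs. a real v4 UUID.
            suspect = uuid.UUID("01859DB9-6B25-D56C-588A-F72D574F5A18")
            genuine = uuid.UUID("F61DCF92-863B-4250-98EA-16E417311539")

            # The suspect ID's variant bits don't mark it as RFC 4122,
            # so Python doesn't assign it a version at all.
            assert suspect.variant != uuid.RFC_4122
            assert suspect.version is None

            # The uuidgen output is RFC 4122, version 4.
            assert genuine.variant == uuid.RFC_4122
            assert genuine.version == 4
            ```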

            1. 1

              Thanks! I didn’t know that the version is encoded in the UUID.

      2. 1

        Wow, that’s great! Is it open source?