Threads for Foxboron

  1. 1

    Imagine the cool shit we might have had by now if all this “trusted” computing was built with the user as the customer, rather than the primary threat.

    1. 2

      How is the TPM built with the user as the primary threat? We are very much capable of using the TPM for cool shit already.

    1. 1

      Currently weighing whether I want to set up secure boot on my lin/win dual boot UEFI setup… the arch wiki is thorough but it seems pretty darn complicated and easy to screw up. I guess the benefit is malware can’t write anything to the boot partition to be inserted during the next boot?

      As an aside: a pox upon windows for only allocating 100 MB for the ESP on install. It isn’t nearly enough space if you’re dual-booting. I tried to resize it but ended up wrecking everything and had to reinstall windows.

      One thing the author doesn’t mention in their post (not that they need to document all windows wonkiness, but this burned me before) is that, despite recommending installing dual-booted OSs on separate disks, and mentioning you can have as many EFI partitions as you want (according to the UEFI spec), windows very much does not like this and will probably refuse to boot if it finds more than one EFI partition (https://learn.microsoft.com/en-US/troubleshoot/windows-client/windows-security/cannot-boot-windows-on-primary-hard-disk-uefi). Despite this, a lot of linux distributions will happily create a new EFI partition on whatever blank disk they’re being installed onto instead of using an existing EFI partition on a different disk. So I feel like you really have to understand UEFI if you want to get dual-booting running well. This might move lin/win dual-booting beyond the reach of non-technical users.

      Overall, great post.

      1. 3

        Currently weighing whether I want to set up secure boot on my lin/win dual boot UEFI setup… the arch wiki is thorough but it seems pretty darn complicated and easy to screw up.

        That is why I wrote sbctl which is mentioned at the end of the Arch Wiki entry.

        https://wiki.archlinux.org/title/Unified_Extensible_Firmware_Interface/Secure_Boot#sbctl

        1. 1

          Wow, this is neat, thanks for writing this! I will certainly try it out. The only thing I’m confused by here is proper key storage. It seems like if I just have all the keys sitting around in my root filesystem then it would be easy for some linux malware to find those key files then modify the boot files and also sign them, thus obviating most of the protection granted by secure boot in the first place, right? Should the keys be stored in a TPM or derived from a U2F dongle or something?

          1. 2

            Should the keys be stored in a TPM or derived from a U2F dongle or something?

            Yes, they should. I just haven’t had the time to sit down and understand enough of the TPM libraries in Go to actually implement this properly. Same with HW tokens.

            Storing these files as password-less files on the root file system isn’t more terrible than what most guides instruct you to do, especially when you have disk encryption.

            1. 2

              The threat model successfully protected against here is that if someone steals your laptop, they can’t access those key files without a measured-good OS unlocking the FDE keys.

              Secure Boot starts getting a lot more useful once you have multiple computers and you can transmit PCR measurements from one to the other.

              1. 1

                Like in corporate IT setups or something? This sounds very interesting, do you know of some good reading on the topic? The secure boot threat model, mitigations against various stages of attacks, integrations with TPM and FDE, communicating secure boot state with other computers, etc.

          2. 2

            I tried to resize it but ended up wrecking everything and had to reinstall windows.

            Very much this. Gparted can’t resize FAT32 volumes under 250MB, but doesn’t tell you that until after it’s trashed it.

            Everyone thinks they love UEFI these days. Systemd, of course, has its own special bootloader for loading the kernel from an ESP. Yeah, well, if you love it so much, put your money where your mouth is and make sure that the standard FOSS OS management tools can resize an existing ESP to make it compliant with these needs. No, it is not enough to demand a new one, or destroy the current one, or support one per distro.

            1. 1

              I have a theory (which I will shortly test) that windows does support there being multiple ESPs on the computer. Gonna make a completely new partition near the end of the drive and copy things over (including the windows boot files, which were missing in the Ubuntu-created one) then make the new ESP first in the boot order. If windows still complains I’ll change the partition type of the original ESP so it can’t even find it. But yes I can guess it would irritate people to have that residual 100 MB sitting uselessly at the start of the drive.

          1. 3

            The second code I got was a package.json file which made it a bit too easy :)

            Should probably have a small deny list for certain files?

            1. 2

              So the cool thing with PEP302 is that you could in theory just implement a random language in Python that compiles using the ast module. Then when you load your library you can hook into the import system and compile your custom language on the fly before giving it to Python.

              Imagine if you had a LISP with macro support! You could just generate a bunch of Python code and hook into it transparently.

              https://youtu.be/1vui-LupKJI?t=978

              :)
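
              For the curious, here’s roughly what such a hook can look like with today’s importlib machinery. A minimal sketch; the .mylang extension and the toy fn-to-def “compiler” are made up for illustration:

                  import importlib.abc
                  import importlib.util
                  import sys
                  from pathlib import Path

                  def translate(source: str) -> str:
                      # Stand-in "compiler": a real one would build Python code
                      # (e.g. via the ast module) from your custom syntax.
                      return source.replace("fn ", "def ")

                  class MyLangFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
                      def find_spec(self, fullname, path, target=None):
                          # Toy example: only look in the current directory.
                          candidate = Path(fullname + ".mylang")
                          if candidate.is_file():
                              return importlib.util.spec_from_loader(
                                  fullname, self, origin=str(candidate)
                              )
                          return None  # defer to the normal import machinery

                      def exec_module(self, module):
                          source = Path(module.__spec__.origin).read_text()
                          code = compile(translate(source), module.__spec__.origin, "exec")
                          exec(code, module.__dict__)

                  sys.meta_path.insert(0, MyLangFinder())

                  # With hello.mylang containing `fn greet(): return "hi"`,
                  # `import hello; hello.greet()` now works transparently.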

              1. 2

                I don’t know about python, but perl has a similar mechanism. Which led to things like: https://metacpan.org/dist/Lingua-Romana-Perligata/view/lib/Lingua/Romana/Perligata.pm

                1. 1

                  Author: Damian Conway.

                  Figures.

                  To simplify the mind-numbingly complex rules of declension and conjugation that govern inflexions in Latin, Perligata treats all user-defined scalar and array variables as neuter nouns of the second declension – singular for scalars, plural for arrays.

                  Hashes represent something of a difficulty in Perligata, as Latin lacks an obvious way of distinguishing these “plural” variables from arrays. The solution that has been adopted is to depart from the second declension and represent hashes as masculine plural nouns of the fourth declension.

                2. 1

                  I know of one module that utilizes this. pycapnp loads up Cap’n Proto schema files and creates appropriate Python classes. Pretty neat IMO.
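
                  If I remember right, the usage looks roughly like this (assuming pycapnp is installed and an addressbook.capnp schema file sits on the import path; a sketch from memory, not gospel):

                      import capnp  # importing capnp registers the import hook

                      # Not a real Python module: the hook resolves this name to the
                      # addressbook.capnp schema file and compiles it on the fly.
                      import addressbook_capnp

                      person = addressbook_capnp.Person.new_message(name="Alice")
                      print(person.name)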

                  1. 1

                    Neat! I’m curious to find other similar (ab)uses of Python’s import mechanism.

                  2. 1

                    You can add macros to Python directly with this mechanism.

                    (Hy is very cool too.)

                  1. 5

                    I agree with almost all of this, just not:

                    A package repository might deem your software “critical”, adding requirements to publishing updates that you might not want to or be able to comply with.

                    Then don’t publish your package to that repository. No one is forcing you to publish there. That repository is a (free) service that is provided to the community by its maintainers and they have the right to enforce what gets published there and how. If you disagree with the conditions it’s simple–just don’t use it.

                    1. 3

                      This assumes that you have to intentionally publish something to a repository, though. This doesn’t hold true for multiple language ecosystems. Python allows you to pull and build git repos. Go doesn’t give you any choice either.

                      So you can’t decide to truly opt-out of this. The only option would be to never publish the code in the first place.

                      1. 6

                        If there is no package repository, who is then deeming your software critical?

                        There is nothing to opt-out of, since code hosting != (managed) package repository.

                        1. 1

                          The community and/or the people running the ecosystem. Not having an explicit package repository doesn’t mean there is a lack of ways to deem projects critical.

                          1. 1

                            If you’re providing my package through your package repository, you get to act as a gatekeeper/maintainer.

                            When I publish my go code on my own domain, who would implement that same role, and how?

                            I think of ‘deeming critical’ as only existing within that package repository. Two competing repositories could have different ideas and commitments. What they try to convey seems to be a promise to the users of the repository & package combo - use us, we vetted things and commit to keep them running (something something analogy between rolling releases and distros maintaining LTS releases).

                    1. 19

                      Nothing?

                      Widely deploying remote attestation as described in the doomsday scenario here is probably not possible, even if Microsoft wanted it. The blogpost largely just poses questions about potential danger without really describing how any of this would be achievable. It’s lazy, really.

                      It’s simply too brittle outside of tightly controlled environments and would break far too often to actually give any consumer value.

                      Can a school run some WPA2 endpoint and restrict access based on some Chromebook the school issues and validate it with an attestation protocol? Sure. It’s tightly controlled.

                      Is this going to be achievable with everything from my self-built desktop running Windows to my random consumer-grade laptops? It would be an engineering marvel. Microsoft would need to be supplying tightly controlled hardware configurations to all consumers and uh… I don’t see how that would happen. The infrastructure needed to even begin validating this would be an interesting problem on its own.

                      I still think Matthew’s take on this is the better one: Pluton is not (currently) a threat to software freedom

                      If you also care about the opinion of the FSF/Richard Stallman: They went back on their stance about TPMs in 2015. https://www.gnu.org/philosophy/can-you-trust.en.html

                      The TPM has proved a total failure for the goal of providing a platform for remote attestation to verify Digital Restrictions Management. […] The only current uses of the “Trusted Platform Modules” are the innocent secondary uses—for instance, to verify that no one has surreptitiously changed the system in a computer. […] Therefore, we conclude that the “Trusted Platform Modules” available for PCs are not dangerous, and there is no reason not to include one in a computer or support it in system software.

                      1. 25

                        Widely deploying remote attestation as described in the doomsday scenario here is probably not possible

                        It already happened on Android with many apps. There are no modern Android phones that aren’t shipped with Google certified keys inside the TPM. The claim that a large-scale deployment of such systems is not possible is quite foolish, given that it already happened to various ecosystems and such trends are only accelerating. The threat is real and serious. May I ask, do you have a smartphone? Android, iPhone? Do you have a banking app? Did you try to assert your ownership of the device by installing an operating system of your choice? How many apps were you unable to use afterwards?

                        1. 12

                          Do you have a banking app? Did you try to assert your ownership of the device by installing an operating system of your choice? How many apps were you unable to use afterwards?

                          What you are demanding here is not “ownership of the device”, but ownership or ownership-like rights to third-party services and hardware. The bank will, I am certain, still serve you via their web site, via phone call, probably these days via SMS, and certainly if you just show up in-person at a branch. They are free to set out the terms on which you get access to their systems, and for some banks and some apps, one of the terms is “app, but only if we can verify you haven’t messed with it or with crucial system libraries it depends on”.

                          You’re free to dislike that. But you don’t have an inherent moral right to access their systems without their consent. They have the same ownership rights to their systems and networks and devices that you have to yours, and your unfettered right to tinker with your stuff ends where their stuff begins.

                          Also, in my experience most people have completely wrong ideas about the incentives that lead to things like this – it’s not that the bank hates your freedom, or that Google hates your freedom, or that either of them wants to take your freedom or your “ownership of the device” from you. That’s the mustache-twirling hyperbolic strawman I’ve already pointed out in my other comment.

                          Instead, it’s that large corporate environments have weird incentive systems to begin with, and for large corporate environments in heavily-regulated industries that weirdness is generally at least squared if not cubed. So, say, the website might get declared a low-trust environment and they check some boxes to say they’re properly securing it, while the app gets declared a high-trust environment and they just forbid access from a modified version or from a version where they can’t reliably detect if modification has occurred, and that checks the required corporate and regulatory boxes to ship the app, which is less effort than they had to put in for the site. Even if the app literally just embeds a web view. Even if it makes no sense right now, it’s often the cumulative result of a bunch of seemed-reasonable-at-the-time decisions that most people go through life unaware of.

                          (my personal favorite example of this, from healthcare – the highly-regulated industry with which I’m most familiar – is that in the US many entities clung to faxes well past the time when it made any technical sense, largely because of a weird quirk where US health-care regulations treated fax systems differently than basically all other electronic transmission methods, for reasons that make no sense whatsoever today but presumably did multiple decades ago when the decision was made)

                          Meanwhile, the nightmare dystopian scenario that, for years, people have been asserting would be the endgame for the deployment of all this stuff… has still not materialized. If it had, you wouldn’t have been able to “assert ownership of the device” in the first place, remember.

                          1. 20

                            You’re free to dislike that.

                            Although I don’t disagree with the main argument, saying things like “you are free not to use an iPhone or Android” is like saying “you are free to become a monk or live secluded in a jungle”. It is unrealistic and perpetuates the poor argument that most “people have a choice” when it comes to social media, online services, identity, etc. They don’t. Not even geeks. Try creating an e-shop without integrating with Google (ads, analytics), Facebook, Twitter, Stripe, PayPal, Amazon… if you are Walmart you MIGHT pull it off, but otherwise good luck.

                            1. 2

                              All of the things you mention in your example of setting up an online shop are pushed on you by social forces, not by technological handcuffs.

                              There is no technological solution to the social forces.

                              1. 6

                                This is why it’s important to keep the door open to compatible third party implementations. Because without that, social forces become technological handcuffs.

                                1. 3

                                  Technology forms part of the social fabric, and therefore can interact with social forces. The classic example of this is copyleft and the free software movement. I’m not saying that FOSS was a great success, but it’s certainly true that it influenced the direction of software for 20 years or more.

                                  As technologists, we should remember more often that technology does not exist outside of society and morality.

                              2. 8

                                What you are demanding here is not “ownership of the device”, but ownership or ownership-like rights to third-party services and hardware

                                I wasn’t aware that my phone was a third party service or hardware.

                                1. 8

                                  I already explained this in a way that makes it hard to take your reply as being in good faith, but I’ll explain it again: you’re free to modify the things you own. You’re not free to demand unlimited/unrestricted access to the things other people own.

                                  So the bank is free to set hours when their branch is open and say they won’t provide in-person service at the branch outside of those hours, no matter how much some people might insist this infringes their freedom to come in when they want to. They’re free to set a “shirt and shoes required” policy for receiving service at the branch, no matter how much some people might insist this infringes their freedom to come in dressed as they please.

                                  And they’re free to set rules for accessing their systems via network connection.

                                  Sometimes those rules include “mobile app, but only if we can verify it hasn’t been tampered with”, and it’s their right to do that no matter how much some people might insist this infringes their freedom to tinker with their devices.

                                  As I said already, that freedom to tinker ends where someone else’s systems and devices begin. You don’t own the bank’s systems. Therefore you don’t get to dictate to them how and on what terms you’ll access those systems, no matter how much you might like to, because they have the same ownership rights to their systems and devices that you have to yours.

                                  1. 5

                                    So the bank is free to set hours when their branch is open and say they won’t provide in-person service at the branch outside of those hours

                                    But it isn’t free to demand complete control of the contents of cars on nearby roads. No matter how much the ability to inspect them may reduce bank robberies.

                                    The bank may want the ability to inspect your car, but society doesn’t need to say yes to every misguided request.

                                    1. 5

                                      But it isn’t free to demand complete control of the contents of cars on nearby roads.

                                      That analogy doesn’t work, because “nearby roads” aren’t the bank’s property.

                                      So if you want to go with that analogy and make it work: the bank branch may have drive-through facilities, and they may not accommodate all vehicle types. Say, due to lane width, a huge pickup truck or SUV might not fit, or due to the height of the covering over the lane, a very tall vehicle might not fit.

                                      You still have the freedom to buy and drive a vehicle that doesn’t fit in the drive-through lane. But you don’t have the right to demand the bank rebuild the drive-through lane to accommodate you. They’re free to tell you to park and come inside, or use the ATM, or bank online, or any of the other methods they offer.

                                      And, again, there is no situation in which “ownership of your device” creates a moral right to demand access to systems owned by others on terms you dictate. If the bank doesn’t want to grant you access on your preferred terms, they don’t have to; they can set their own terms for access to their systems and (subject to local laws about accessibility, etc.) enforce those terms.

                                      (also, in some jurisdictions the bank absolutely could regulate the “contents of cars” on the bank’s property – for example, the bank could post a sign saying no firearms are permitted on the bank’s property, and that would apply equally to one stored in a car as it would to one brought inside the branch)

                                      1. 4

                                        That analogy doesn’t work, because “nearby roads” aren’t the bank’s property.

                                        And my phone is?

                                        My phone is an access method, and I have neither sold nor rented it to the bank.

                                        1. 4

                                          Your car is your property. But when you want to use your car on someone else’s property they can make rules about it. For example, where you can park, how fast you can drive, which direction you can drive, and so on.

                                          Your networked device is your property. But when you want to use your networked device to access someone else’s devices/systems, which are their property and not yours, they can make rules about it.

                                          I’ve explained this now multiple times, and I don’t see how any legitimate difficulty could still exist in understanding the point I’m making.

                                          1. 3

                                            Yes, I understand that it is technically legal for them to do this. Technically legal is not the same as desirable. It’s a horrifyingly dystopian future being described here, and painted as desirable because it is possible.

                                            I want a way off this ride, and I don’t see one.

                                            No, “stop keeping your money in banks” is not a serious option.

                                            No, I do not use Linux, Windows, or OSX.

                                            Yes, I already refuse to install apps for this on my phone – I use my phone exclusively for tethering, maps, and getting paged when I am on call for work. I do not trust it to act in my best interests, and I do not want enforced software that I dislike spread to the rest of my computing devices.

                                            Your networked device is your property. But when you want to use your networked device to access someone else’s devices/systems, which are their property and not yours, they can make rules about it.

                                            It would probably be legal for a bank to require you to install a GPS tracker on your car to gain access to the bank. It would be safer for the bank if they could track the location of possible getaway cars. It would be safer for the bank to ensure that you didn’t go into sketchy neighborhoods where you could get mugged and have your bank cards stolen.

                                            But I don’t think a future where banks remotely enforce what you do with your car is a good one. It’s not worth the safety.

                                            1. 4

                                              Every time I point out that the analogy falls apart when you try to extend control past the bank’s property line, you propose another analogy which extends control past the bank’s property line.

                                              I cannot engage further with this.

                                              1. 2

                                                What do you mean, “extend control past the bank’s property line”?

                                                In this analogy, the bank allows you to drive cars without GPS trackers; they just require you to have one installed to engage with them. They’re not controlling your property – you’re voluntarily complying with their business requirements. It’s just them choosing how you engage with their business. You can avoid getting a GPS tracker so long as you don’t set foot on a bank’s property.

                                                This is less hypothetical than it sounds. While I’m not aware of banks pushing for GPS information, insurance companies already want this information in order to dynamically adjust rates based on driving habits, and to attribute blame more accurately in collisions.

                                                I’ve interviewed for an offshoot of State Farm that was established to explore exactly this. The interviewer was very excited about the increased safety you’d get because drivers would know they’re being watched. This was a few years ago – today, of course, you’d need to do some remote attestation to ensure that the system wasn’t tampered with and the data was transmitted with full integrity.

                                                Once this pool of data is established for analysis, it becomes very tempting for law enforcement, less pleasant regimes, and three letter agencies to access it.

                                  2. 2

                                    For a bank or hospital? It absolutely is.

                                  3. 7

                                    Also, in my experience most people have completely wrong ideas about the incentives that lead to things like this – it’s not that the bank hates your freedom, or that Google hates your freedom, or that either of them wants to take your freedom or your “ownership of the device” from you.

                                    Let’s ignore the fact that large corporations have a long and well-documented history of nefarious behavior. I mean, one of the first corporations in the west was the British East India Company. Calling it nefarious is a huge understatement. But that’s all not quite relevant to the point I’m making.

                                    Instead, it’s that large corporate environments have weird incentive systems to begin with, and for large corporate environments in heavily-regulated industries that weirdness is generally at least squared if not cubed.

                                    Fine. Does it truly matter if the reason is maliciousness or ignorant apathy combined with perverse incentives, if the end result is still the same? A difference which makes no difference is no difference at all. What I’m seeing is gradual disempowerment of people, not some quick power grab. And I don’t care what the reasons are, if the results are still the same.

                                    Every time this discussion comes up on Lobsters, you trot out the comic-book villain trope as a way to belittle the people you disagree with. A box-ticking technocrat can be just as harmful as a villain.

                                    1. 10

                                      Does it truly matter if the reason is maliciousness or ignorant apathy combined with perverse incentives, if the end result is still the same?

                                      Except the end result is not the same. The freedom-hating cartoon villain would not give you a way out. Yet out here in the real world you do get a way out. And as I pointed out, it’s been getting finer-grained over time so that you actually have even more control over which security features you want on and which ones you want off.

                                      This is not how an actual “war on general-purpose computing” would be waged!

                                      Every time this discussion comes up on Lobsters, you trot out the comic-book villain trope as a way to belittle the people you disagree with. A box-ticking technocrat can be just as harmful as a villain.

                                      My central assertion is that the Free Software movement and its adherents are actively hostile to security measures that are A) reasonable, B) desired by and C) accepted by much of the market, and that this hostility goes all the way back to the early days with Stallman writing purple prose about how he was standing up for “the masses” by having GNU su refuse to support a “wheel” or equivalent group. Today that manifests itself as reflexive hyperbolic opposition to fairly banal system security enhancements, which inspire yet more reams of purple prose.

                                      I further note that this opposition relies on appeals to emotion, especially fear (they’re coming for your freedom!), and on erecting straw-man opponents to knock down, neither of which is a particularly honest rhetorical tactic.

                                      And finally, this opposition also doesn’t stand up to even the slightest bit of actual scrutiny or comparison to what’s occurring in the real world, and on that theme I note you yourself largely refused to actually engage with any of the points I made, and instead went meta and tried to tone-police how I made the points, or the fact that I was making them at all.

                                  4. 4

                                    Android is an example of tightly controlled devices where this is completely feasible though.

                                    1. 2

                                      Yes, and the industry trend is to slowly extend that to more computing devices, including PCs. And, in this very thread, we have someone who is arguing that not only is it a company’s right, it’s effectively their duty, to ensure users aren’t tampering with their computing devices so that bad actors can’t compromise them.

                                  5. 9

                                    Yup.

                                    The hyperbole around this stuff runs into the inconvenient fact that all the horrible things have been technically possible for a very long time, using only features that already exist on consumer hardware and consumer operating systems, and yet the predicted dystopia… has not arrived.

                                    Microsoft has been theoretically able to fully lock down laptops and desktops for years. Apple has been theoretically able to fully lock down laptops and desktops for years. The reason they haven’t is not that they lack the one final piece of the freedom-destroying superweapon that will finally let them power it on, and it is not that scrappy freedom-warriors on the internet have pushed back too hard. The reason they haven’t is that destroying freedom is not, and never has been, their goal.

                                    So, so much of the argumentation around this stuff relies on building up strawmen and knocking them down. In the real world, there are no mustache-twirling executives cackling about how this time, finally, they will succeed at destroying freedom forever and bringing an end to general-purpose computing. There are just ordinary, usually really tired, people doing things for honestly pretty ordinary and banal reasons, and we generally will do best by taking their statements at face value: that they’re doing it for security, which is something both corporate and consumer users are loudly demanding.

                                    A lot of people are tired of living in constant fear. Fear that looking at, or in some cases just being a recipient of, the wrong email or the wrong text message or the wrong PDF or the wrong link will silently and completely compromise their systems. Fear of malicious actors, both remote and intimate. Fear of being fired if they slip up and make even the tiniest mistake instead of being a perfect “human firewall”. Fear of all manner of things that we can prevent by default if we just choose to.

                                    So we get more and more systems that have those protections by default. That let you just use it without being afraid. And if you want to live dangerously, you still can. The “I know what I’m doing” escape hatches are there! They’re even getting finer-grained over time, so that you can choose just how dangerously you want to live. You can turn off bits and pieces, or go all-in and replace the OS entirely. This is not the progression we would see in a world where the vendors were waging a “war on general-purpose computing”, and the actual observed state of the world is the strongest possible counterargument against the existence of such a “war”.

                                    And yet we still get hyperbole like this article. I’m so, so tired of it at this point.

                                    1. 3

                                      I don’t think it’s impossible. Obviously game publishers already want this solution, Windows 11 requires the chip (or will require the new one too, doesn’t make a difference), and if you ever owned an android device, you’ll know how many apps break when you install your own android copy.

                                      I can definitely imagine banks and other services rolling this out as a requirement. Developing a new browser is already hard enough, but if only 10% of the services under cloudflare require this in the future, you’re basically locked out of using linux/owned android/new browsers. We have a regular inspection requirement in Germany for all vehicles on the street, for the safety of others. Maybe this will come one day for the internet, simply because it could reduce the amount of spammers and bots.

                                      Let’s spin this idea further: there is a law in Germany that you’re responsible for your network connection. What if, because of that, you’re not allowed on public wifi anymore without such an attestation? No more headaches due to compromised devices.

                                    1. 2

                                      This seems really useful and makes me want to see a comparison between this and Container Toolbox which also seems like it’s under active development.

                                      1. 2

                                        Timothee, me and a few others recently also did community toolbox images, since upstream hasn’t been very responsive supporting anything beyond Fedora :)

                                        https://github.com/toolbx-images/images

                                      1. 6

                                        Posting for a couple of reasons:

                                        • These monthly updates are a great move for Arch (they’ve been posted regularly for almost 12 months now).
                                        • The babysteps towards enabling buildbots are exciting. Although the need for maintainers to have final approval is obvious, it may come as a surprise just how centralised the current package update process is. Hopefully in the future fixes or updates can be submitted for PKGBUILDs via pull requests, with CI creating the package and testing, pending final approval from the maintainer.
                                        1. 5

                                          The babysteps towards enabling buildbots are exciting. Although the need for maintainers to have final approval is obvious, it may come as a surprise just how centralised the current package update process is. Hopefully in the future fixes or updates can be submitted for PKGBUILDs via pull requests, with CI creating the package and testing, pending final approval from the maintainer.

                                          I do actually need a lot of help to pull this off though :)

                                          The latter point is interesting, but I’m of the opinion that the gitlab CI is good enough for linting and testing merge requests. The issue with gitlab CI is how you do something more abstract like a complete rebuild of readline dependencies, or allow maintainers to build towards temporary staging repositories before doing the main integration work towards testing.

                                          Currently I’m just messing around hoping people get inspired.

                                          1. 2

                                              Yes, I’d imagine allowing such a flow for the “easy” things first would be the low-hanging fruit. Minor tweaks like version bumps, cherry-picking an upstream bugfix, adding a missing dependency. I don’t think this kind of thing is supported by PRs/MRs right now but I’d love to be wrong.

                                              I’d guess tooling beyond standard GitLab CI would be needed for larger scale rebuilds - if nothing else, I imagine you’d probably want a bit more gating on the ability to burn that much CPU time. And perhaps those larger rebuilds are too interactive by nature…

                                          2. 4

                                            It’s weird that this is not mentioned in the Archlinux news, or on Planet Archlinux. I would love these to appear in my RSS feeds.

                                            1. 3

                                              That’s a good point - it looks like there are some issues with the RSS feed right now. Once that’s fixed, it would be great if it showed up on Planet ArchLinux.

                                          1. 2

                                              Are there any larger FOSS projects currently using Fossil for their version control? I’m curious to see how this works for external contributors.

                                            1. 5

                                              The obvious FOSS project using Fossil is SQLite. And Fossil itself. :-)

                                              1. 4

                                                TCL/TK uses Fossil too.

                                                1. 4

                                                  Perhaps not “larger,” but Retro Forth uses Fossil.

                                                  1. 2

                                                    I’ve been happy with Fossil, but my workflow has me as a gatekeeper. I accept patches (and/or modified source files directly), review, revise if necessary, and then merge them. At this point I’m the only one with access to push changes to the primary repository.

                                                1. 6

                                                  Personally I prefer to use the git aliases instead of a lot of bash aliases. It gives you the bonus of included tab completion which lists the command itself.

                                                  To take git-worktree I personally have two aliases.

                                                  wt = "!f() { git worktree add -b morten/$1 ${PWD}-$1 $(git symbolic-ref --short HEAD); cd ${PWD}-$1; }; f"
                                                  wl = "!f() { git worktree list | fzf --height 40% --reverse; }; f
                                                  

                                                      I usually like naming my branches $firstname/feature as it becomes a good habit in shared codebases over time. If I have the project named project and use git wt something, it would create the branch morten/something, check out the repository in the parent directory named project-something, and cd into that directory.

                                                  git-wl should probably cd into the directory, but I don’t think I wound up using it a lot.

                                                  It’s quite handy and similar to what the author does.

                                                  1. 2

                                                    Personally I prefer to use the git aliases instead of a lot of bash aliases. It gives you the bonus of included tab completion which lists the command itself.

                                                    I prefer bash aliases, only for worktrees, because the completions for git-worktree aren’t all that useful IMO. For git-checkout, I find that the completions are a lot more useful, and I use git co in accordance. That being said, the difference between something like gwls and git wt l<tab> isn’t all that game-changing, so I would totally consider using a git alias, even if just to keep my bashrc clean.

                                                    To take git-worktree I personally have two aliases.

                                                    Thanks for these. Creating project-something is an interesting flow. My worktrees are typically arranged like so:

                                                    rust-compiler
                                                        ├── improve-std-char-docs
                                                        ├── is-ascii-octdigit
                                                        ├── master
                                                        └── rustfmt-range-docs
                                                    

                                                    I might also throw in some “project related” files in the same directory, if need be:

                                                    rust-compiler
                                                        ├── improve-std-char-docs
                                                        ├── is-ascii-octdigit
                                                        ├── master
                                                        ├── rustfmt-range-docs
                                                        ├── notes     # my notes
                                                        ├── .cargo    # my shared cargo config
                                                        └── shell.nix # my environment
                                                    
                                                    1. 2

                                                      I want the git w<tab> completion though :)

                                                  1. 6

                                                    I’m currently starting book three of The Expanse. I’m also rewatching the TV series which I have never done.

                                                        Ty Franck, co-author of the books, screenwriter and producer for the show, and Wes Chatham, the actor playing Amos Burton in the show and general movie nerd, started a podcast where they comment on each of the episodes: Ty and That Guy. They also talk about movies, tv-series, tropes, general production and actor stuff.

                                                        It’s a bit interesting, as you get to hear and learn all sides of the adaptation process, from the screenwriting to the acting parts of it all, while getting some insight into the show and the books which you normally wouldn’t catch.

                                                    1. 5

                                                      Funny that a package I maintain is listed… :)

                                                      But yes, Licensing Is Hard™.

                                                        Arch doesn’t have that great policies here. I don’t think we have anything explicitly written down except for how we should include the relevant license in the package. Usually we just list and include the license of the top-level dependency, as you would with dynamically built packages. For statically built packages this doesn’t really hold up. But at the same time I think traversing the licenses of dependencies is going to put you as a downstream distributor in an awkward spot, because few, if any, developers check for license incompatibilities before pulling new dependencies.

                                                        When was the last time you thought about this issue before you pulled a new dependency? I personally never do this and I package this stuff :)

                                                        Ensuring we have SPDX License Identifiers, or support for SPDX License Expressions, would go a long way toward keeping these things manageable for Arch. But generally this entire issue boils down to the fact that people usually don’t care this much about licenses.

                                                      1. 6

                                                        But at the same time I think traversing the licenses of dependencies are going to put you as a downstream distributor in an awkward spot because few, if any, developers check for license incompatibilities before pulling new dependencies.

                                                          Which puts the distribution as a whole at risk, e.g., someone could bring the mirrors down for distributing “illegal” packages. Modern programming language ecosystems make it easy to vet the licenses, including the transitive ones in the case of statically-linked-only languages, by providing tools like go-license (I believe a similar tool exists for Rust too). There is really no excuse to “just state the top-level license” here. And there is, for statically linked binaries, no other option than listing all licenses. It is pretty simple actually.

                                                        1. 2

                                                          Modern programming language ecosystems make it easy to vet the licenses, including the transitive ones in case of static-linked only languages, by providing tools like go-license (I believe a similar tool exists for Rust too). There is really no excuse to “just state the top-level license” here. And there is, for statically linked binaries, no other option to listing all licenses. It is pretty simple actually.

                                                            It’s not really that simple, as I tried to illustrate. Arch isn’t capable of listing licenses correctly or specifically enough, so adding more nonsense license identifiers isn’t really going to help the situation.

                                                            EDIT: I also see burntsushi is pointing out how the automated tools can’t be trusted. So I don’t think this problem is automagically solved by throwing ecosystem-specific tooling at it.

                                                          https://news.ycombinator.com/item?id=32546791

                                                            Eventually this problem boils down to the entire “Software Bill of Materials” issue people have been working on for the past couple of years.

                                                          1. 5

                                                              Most automated tools are not 100% correct. But they, especially in this case, provide a good starting point to take a step in the right direction. Which seems important if you are currently standing in a field labelled “declare incomplete licensing information that may get you into legal trouble.”

                                                            Arch isn’t capable of listing licenses correctly

                                                            That sounds like a serious problem for arch.

                                                            ….nor specific enough so adding more nonsense license identifiers isn’t really going to help the situation.

                                                            Nobody is suggesting to add nonsense license identifiers. But you should state the correct licensing information.

                                                            1. 2

                                                              Arch isn’t capable of listing licenses correctly nor specific enough

                                                              Can you elaborate on this? The license variable is an array, so you can list out all licenses which apply, or use ‘custom’ and put the applicable licenses in /usr/share/licenses/$pkgname/. The only scenario I can think of where this wouldn’t apply would be dual-licensed software (OR instead of AND).

                                                              1. 1

                                                                Can you elaborate on this? The license variable is an array, so you can list out all licenses which apply, or use ‘custom’ and put the applicable licenses in /usr/share/licenses/$pkgname/.

                                                                  Which is not being followed nor checked to any large degree. The BSD 2-Clause license can be listed as any form of BSD, custom:BSD, BSD2, custom:BSD-2-clause (and so on), which is… confusing and not great.

                                                                  And that is because what you should use as the license identifier when the license is not part of the common licenses package isn’t specified or clarified by anyone.

                                                                The only scenario I can think of where this wouldn’t apply would be dual-licensed software (OR instead of AND).

                                                                That is why I want support for SPDX License expressions :)
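
                                                                  To make that concrete: an SPDX license expression combines identifiers with AND/OR/WITH operators. A tiny sketch using the third-party license-expression library for Python (the expression itself is just an example):

                                                                      from license_expression import Licensing

                                                                      licensing = Licensing()
                                                                      # Dual-licensed code (OR) statically linked with a BSD dependency (AND):
                                                                      parsed = licensing.parse("(MIT OR Apache-2.0) AND BSD-2-Clause")
                                                                      print(parsed)  # a normalized boolean expression over license identifiers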

                                                                1. 2

                                                                  OK, so when you say “support” you mean… tooling? You can use SPDX identifiers right now. I’m guessing namcap would complain, but is that it? Given the variety of ways packages specify their license, presumably nothing can depend on the exact format right now. Do you have something in mind which would depend on the format?

                                                                  1. 1

                                                                    Tooling and the “hows” of it really.

                                                                      We have a licenses package we need to adapt, and we need to decide if the format is spdx:GPL-3.0-or-later or implicit in the string. We also need to decide if and how we are supposed to support SPDX license expressions. All of this mostly boils down to an RFC and figuring out if we need to improve the pacman support for more complicated license fields.

                                                                    I think both me and Allan have written up a half-way draft on this really :p Me with mockups for expressions and Allan with only identifiers.

                                                          2. 3

                                                            When was the last time you though about this issue before you pulled a new dependency?

                                                          Every time there’s a chance I’d accidentally use something licensed as AGPL for things that are not explicitly FOSS.

                                                            1. 1

                                                              I applaud your rigor :)

                                                              1. 4

                                                                Wait, how is this rigor? This is like… the most basic, 101-level common sense.

                                                                1. 1

                                                                This entire discourse revolves around how this isn’t the most basic, 101-level common sense though.

                                                                  1. 4

                                                                    That’s one interpretation.

                                                                    Another interpretation is that many packagers don’t have even a basic understanding of how to package things without breaking the law.

                                                                    1. 1

                                                                      That’s a sad and dim view of packagers.

                                                          1. 3

                                                            I’d love to see some more insights here.

                                                            You’re making the argument that the common wisdom of Arch being unstable is incorrect and you’re positing that running the same install over a couple of machines over the course of a decade proves that.

                                                            The thing is, there are people who can say the same thing with virtually any operating system, including the much maligned Windows! :) (There are definitely people out there who’ve been upgrading the same install since god knows when).

                                                            What makes your experience with Arch’s stability unique? How does Arch in particular lend itself to this kind of longevity and stability?

                                                            1. 9

                                                              I really just meant to say that Arch Linux doesn’t break unusually much compared to other desktop operating systems I’ve used. At least that’s been my experience. The other operating systems I’ve used are mainly Ubuntu, Fedora, and Windows.

                                                              1. 1

                                                                  Try using Arch without updating it for a year or two, then update it all at once. Then try the same with Windows, Fedora, or Ubuntu. That’s honestly Arch’s primary issue: you can’t easily update after you’ve missed a few months of updates.

                                                                1. 2

                                                                  I don’t quite see how that is a primary issue when this is the nature of rolling release distros though. Comparing to point release Fedora and Ubuntu doesn’t fit well, and I’d somewhat expect the same to happen on the rolling/testing portion of Debian and Ubuntu, along with Fedora Rawhide or OpenSUSE Tumbleweed? Am I wrong here? Do they break less often if you don’t update for a year+?

                                                                  Personally I keep around Arch installs from several years ago and I’ll happily update them without a lot of issues when I realize they exist. Usually everything just works with the cursed command invocation of pacman -Sy archlinux-keyring && pacman --overwrite=* -Su

                                                                  1. 3

                                                                    I don’t quite see how that is a primary issue when this is the nature of rolling release distros though

                                                                      It’s not necessarily – a rolling release distro could also release a daily manifest of all current package versions that are guaranteed to work together, and the package manager could then incrementally run through these snapshots to upgrade to any given point in time.

                                                                    This would also easily allow rolling back to any point in time.

                                                                    It’s actually a similar idea to how coreos used to do (and maybe still does?) it.

                                                                    1. 4

                                                                        That would limit a package release to once a day? Else you need a per-hour/minute manifest. This doesn’t scale when we are talking about not updating for years though, as we can’t have mirrors storing that many packages. I also think this glosses over the challenge of making any guarantee that packages work together. This is hard, and few (if any?) distros are that rigorous today even.

                                                                      It is interesting to note that storing transactions was an intended feature of pacman, but the current developer thinks this belongs in ostree- or btrfs-style snapshot functionality.

                                                                      1. 4

                                                                        I’m confused by this thread. Maybe I’m missing something?

                                                                        It seems like it should be really simple to keep a database with a table containing (updated_at, package_name, new_package_version) triples, with an index on (updated_at, package_name), and allow arbitrary point-in-time queries to get a package manifest from that point in time.

                                                                        No need to materialize manifests every hour/minute/ever, except when they’re queried. No need to make any new guarantees: the packages for any given point-in-time query should work together iff they worked together at that actual point in time in real life. No need to make the package manager transactional (that would be useful for rolling back to bespoke sets of packages someone installed on their system, but not for installing a set of packages that were in the repo at a given time).

                                                                        Actually storing the contents of the packages would take up quite a bit of disk space, but it sounds like there is an archive already doing that? Other than making that store, it sounds like just a bit of grunt work to build the db and make the package manager capable of using it?
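
                                                                        For what it’s worth, the query side really is that simple. A sketch with sqlite3, where the table and column names just follow the triple above (they’re mine, not anything pacman actually has):

                                                                            sqlite3 repo-history.db <<'SQL'
                                                                            -- newest version of every package as of one instant; relies on
                                                                            -- SQLite's rule that bare columns next to MAX() come from the max row
                                                                            SELECT package_name, new_package_version, MAX(updated_at)
                                                                            FROM updates
                                                                            WHERE updated_at <= '2022-07-01T00:00:00'
                                                                            GROUP BY package_name;
                                                                            SQL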

                                                              2. 5

                                                                I didn’t read it as some grandiose statement of how awesome Arch is, just one user’s experience.

                                                                My equally unscientific experience is that I can usually upgrade Ubuntu once, but after a second upgrade it’s usually easier and quicker to just start fresh because of all the cruft that has accumulated. I fully acknowledge that one could combat that more easily, but I also typically get new hardware after X years, for X lower than 10.

                                                                I do have an x230 from 2013 with a continually updated Debian install on it, so in 10 more months I’ll also have made it to 10 years.

                                                                1. 1

                                                                  Really though? While it’s true that you can upgrade Windows, I have yet to see someone pull that off on their primary system without things like drivers accumulating and biting them. This is especially true if you update too early and later realize there are incompatibilities with drivers or software, which really sucks if that’s your main system.

                                                                  Upgrading usually works because there are upgrade paths, but ending up with a usable system is sadly a less common theme.

                                                                  And Linux distributions that aren’t rolling release tend to be way worse than Windows. And while I don’t know macOS well, at every company I’ve been at so far, a bigger macOS update means everyone is expected to not have a functional system for at least a day, which is always a bit shocking in an IT company. But I have to say I really have no clue what is going on during that time, so I’m not sure what’s actually happening. I know from my limited exposure that the updates tend to simply be big downloads with long installation processes.

                                                                  I probably only stuck with Arch all that time precisely because it never gave me a reason to consider switching, so it’s really just laziness.

                                                                  When I think about the other OSs and distributions I’ve used, scary OS and software updates are a big chunk of why I don’t use them, in one way or another. I used to be a fan of source-based distributions because they gave you something similar, back when I could spend the time. I should check whether Gentoo has something like FreeBSD’s poudriere to pre-build stuff. Does anyone happen to know?

                                                                  1. 3

                                                                    I would have agreed with you from ca. 1995-2017, but I saw several Win 7 -> Win 10 -> Win 10 migrations that have been running flawlessly for 5+ years, if you accept upgrades between different Win 10 versions as valid.

                                                                    I’ve had many qualms and bad things to say about Windows in my life, but I only had a single installation self-destruct since Win 7 launched, so I guess it’s now on par with Linux here for this criterion.

                                                                    Changing the hardware and reusing the Windows installation was still hit or miss whenever I tried. With the same motherboard chipset I’ve never seen a problem; otherwise… not sure… I guess I only remember reinstalls.

                                                                    1. 1

                                                                      I was doing tech support for a local business around the time the forced Windows 10 rollout happened. I had to reinstall quite a few machines because they just had funky issues. Things were working before, then suddenly they weren’t. I couldn’t tell you what the issues were at the time, but I just remember it being an utter pain in the ass.

                                                                      1. 2

                                                                        Yeah, I’m not claiming authority on it being flawless, but it has changed from “I would bet on this Windows installation breaking down in X months” to “hey, it seems to work”, based on my few machines.

                                                                    2. 2

                                                                      (Long-time Gentoo user here.) I’m not sure if this answers your question, but I often use Gentoo’s feature to use more powerful machines to compile stuff, then install the built binaries on weaker systems. You just have to generally match the architecture (Intel with Intel, AMD with AMD).
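
                                                                      Roughly, from memory (the hostname and paths are made up; the Gentoo binary package guide has the real details):

                                                                          # fast box, /etc/portage/make.conf
                                                                          FEATURES="buildpkg"        # emerge keeps the binpkgs it builds

                                                                          # weak box, /etc/portage/make.conf
                                                                          FEATURES="getbinpkg"
                                                                          PORTAGE_BINHOST="https://fastbox.example/packages"

                                                                          # weak box: prefer the prebuilt packages
                                                                          emerge --usepkg --update --deep @world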

                                                                      1. 2

                                                                        I will need to look into this. I mostly wrote that to keep it at the back of my head. I heard it was possible, but there was some reason I didn’t end up trying it out. Maybe I should do it on my next vacation or something.

                                                                    3. 1

                                                                      It isn’t stable in the sense that ‘things don’t change too much’, but it is stable (in my experience) in that ‘things generally work’. I recall maybe 10 years ago I would need to frequently check the website homepage in case of breaking upgrades requiring manual intervention, but it hasn’t been like that for a long time. In fact I can’t remember the last time I checked the homepage. Upgrades just work now, and if an upgrade doesn’t go through in the future, I know where to look. Past experience tells me it will likely just be a couple of manual commands.

                                                                      On the other hand, what has given me far more grief lately is Ubuntu LTS (server). I was probably far too eager to upgrade to 22.04, and too trusting, but I never thought they’d upgrade to OpenSSL 3 with absolutely no fallback for software still reliant on OpenSSL 1.x… in an LTS release (isn’t this kind of major change what non-LTS releases are for?). So far, for me this has broken OpenSMTPD, Ruby 2.7, and MongoDB (a mistake from a very old project – never again). For now I’ve had to reinstall 20.04. I’ll be more cautious in future, but I’m still surprised by how this was handled.

                                                                      Even Arch Linux plans to handle OpenSSL 3 with more grace (with an openssl-1.1 package).

                                                                    1. 4

                                                                      This is a bit overcomplicated, frankly.

                                                                        For Unified Kernel Images, both dracut and mkinitcpio support this out of the box. This removes the need for any home-written scripts to create one.

                                                                        dracut also supports signing these images out of the box. mkinitcpio will probably support this in the future with pre/post hooks.

                                                                        There are multiple caveats which are not mentioned. Binding your LUKS key to PCR 0 is not a good idea, as you could run into trouble every time you do firmware updates from LVFS, as an example. PCR 7 should be fine though.

                                                                        It also fails to mention how Option ROM code is loaded on a lot of PCIe devices these days. Failure to include the Microsoft signing keys in the bootchain could result in potentially soft-bricking the device. You could in theory also enroll the OpROM checksums from the TPM eventlog instead.

                                                                        I have also put quite a bit of work into trying to make all of this better! I wrote sbctl, which tries to make key creation and signing easier for people (rough example below).

                                                                        I spoke about these issues at last year’s Open Source Firmware Conference as well.

                                                                      https://www.osfc.io/2021/talks/improving-the-secure-boot-landscape-sbctl-go-uefi/

                                                                        Luckily for the author, sbctl was included in Gentoo just this week :)
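
                                                                        For a taste, the happy path looks roughly like this (flags from memory, so check sbctl’s README):

                                                                            sbctl create-keys                 # generate PK/KEK/db keys
                                                                            sbctl enroll-keys --microsoft     # keep the MS CAs so OpROMs keep loading
                                                                            sbctl sign -s /boot/vmlinuz-linux # -s remembers the file for re-signing
                                                                            sbctl verify                      # is everything on the ESP signed?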

                                                                      1. 2

                                                                        Also,

                                                                          The shim they write about is sadly something which gatekeeps kernel functionality. There is no way to have a personal key for signing kernel modules unless you include the shim in your bootchain! This makes enabling lockdown mode with dkms modules a lot harder.

                                                                          Effectively, enrolling the Microsoft 3rd party key doesn’t really decrease the security of your bootchain in any way as long as you bind LUKS to PCR 7 (see the sketch below). It would still prevent attacks against shim+grub, as you decide what you want to seal against anyway. If someone tries to downgrade the shim, or boot an old vulnerable grub, the PCR 7 sealing counters that.

                                                                          Thus it’s a bit weird to care about self-enrolled keys and reject the 3rd party certificate while insisting on sealing against PCR 7.
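
                                                                          Binding LUKS to PCR 7 is a one-liner these days with systemd-cryptenroll; a sketch, with the partition path as a stand-in:

                                                                              # enroll a LUKS keyslot sealed to the TPM, bound to PCR 7
                                                                              # (Secure Boot state); /dev/nvme0n1p2 is just an example
                                                                              systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p2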

                                                                      1. 11

                                                                        One thing that’s missing from this analysis of distro packaging tools is that the distro model generally wants to avoid upgrades whenever possible. So this step:

                                                                        crossbeam-utils just got an update! Distro™’s build automation queues all binaries which include crossbeam-utils in their dependency tree to be repackaged.

                                                                        would not apply, or would apply only to “crossbeam-utils just got a security update, or other bug of sufficient severity to be covered by the distro’s policies”.

                                                                        So, suppose package bar depends on libfoo, and at the time the distro rolled its current edition, that was bar 2.1 and libfoo 1.3.5. If libfoo 1.4 comes out fixing a critical security issue (or other bug of sufficient severity, etc.), the distro does not switch to packaging libfoo 1.4. They backport the fix into a forked tree of 1.3.5, and package it as libfoo 1.3.5-1 or whatever.

                                                                        And that leads into the real mismatch between the distro’s packaging and any language-specific package manager – cargo or npm or pip or gem or literally any of them – which is that the distro wants stability, while the language-specific ecosystem wants to grow and evolve.

                                                                        I suspect that the static versus dynamic linking arguments are really a red herring here: the deep issue is the fact that distro package managers simply are not set up for the pace of change in the typical language package ecosystem.

                                                                        1. 3

                                                                          On the other hand, that backporting for stability is mostly required because it’s a shared library whose interface is exposed to the user and other applications, so bumping to 1.4 could cause unrelated breakage. If you allowed library dependency versions to float as needed per binary then stability wouldn’t be affected, provided this didn’t substantially change the user-facing behaviour of the binary that incorporated it. (This could still require a fork to address.)

                                                                              It’s worth pointing out that having multiple versions of a library simultaneously is nearly unavoidable: multiple versions of the same library already occur easily in the dependency graph of a single Rust binary, let alone between multiple Rust binaries.
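
                                                                              A quick way to see that in any non-trivial Rust project:

                                                                                  # list crates that appear at more than one version
                                                                                  # in a single binary's dependency graph
                                                                                  cargo tree --duplicates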

                                                                          1. 2

                                                                            So, to extend the example: if Distro™ packaged bar 2.1 using libfoo 1.3.5, they’re generally going to stay on bar 2.1 for the entire life of that distro release. Which means they’re going to stay on the dependencies specified by that version, and maintain those as-is.

                                                                            Which then does end up being a lot more work for the distro since they need one distro-local fork per library per depended-on version of the library, because again: they’re not going to do in-place upgrades. And that gets back to the fact that the language package ecosystem moves more quickly than the distro package ecosystem, and in ways that make the distro packager’s life hard.

                                                                            Personally I think distros should more or less give up on trying to ship most things that are accessible from language-specific ecosystems, because hardly anybody even uses the distro versions these days anyway. But I also think it’s important to point out that this is the real disconnect, not the type of linkage.

                                                                            1. 4

                                                                              because hardly anybody even uses the distro versions these days anyway

                                                                              [citation needed]? I suspect there’s a large, silent swath of people who prefer distros’ stability guarantees.

                                                                              You’ve hit the nail on the head with this disconnect though. I think upstreams just generally don’t see the value in doing an LTS-type release, which distros would prefer to use if possible. So this results in an awkward situation where distros who don’t have upstream expertise are basically maintaining an unofficial LTS release instead.

                                                                              1. 5

                                                                                I suspect there’s a large, silent swath of people who prefer distros’ stability guarantees.

                                                                                Hi!

                                                                                As a person who used to do a lot of programming and now does some programming and a lot of ops, my perception is that software developers tend to want to develop with the latest version of everything, and for everyone to use the latest version of everything, so they don’t have to keep doing boring backporting.

                                                                                Mere users would generally like the ability to upgrade for features if they want, but don’t want to have to keep upgrading everything all the time just to get security fixes and so on. Backward compatibility is, as far as I’m concerned with my end-user hat on, a binary property. It doesn’t matter how well-telegraphed a breaking change was.

                                                                                The backward compatibility model I need for software that’s not my specific focus is simple: security bugs are fixed, nothing else ever changes. Some (usually big) projects do provide this, but for the most part it’s not what software developers like to do. I can’t complain about that—I’m not paying them to care about my requirements—but I am grateful for stable distros’ turning software developers’ output into something I can rely on.

                                                                                You tend not to hear from, e.g., Python people who feel this way about Python software. Python people will get hold of the Python version they want to use. But any successful language has a lot of transitive users who don’t care about that language, or even know what it is. Distro packaging is for them.

                                                                                1. 2

                                                                                  As a person who used to do a lot of programming and now does some programming and a lot of ops, my perception is that software developers tend to want to develop with the latest version of everything, and for everyone to use the latest version of everything, so they don’t have to keep doing boring backporting.

                                                                                  [citation needed]

                                                                                  It depends on what ecosystem you’re in, of course, but I actually hate developing with javascript/css/html/etc because of how fucking fragile the versioning is. Every single day there’s a new version, who knows what it will break and what I will have to redo. And every time there’s a major update something fucking breaks because of how fragile and incoherently-built the technology is.

                                                                                  All of that is wasted time.

                                                                                2. 3

                                                                                  In my own world (Python), I don’t know of anyone who installs distro-packaged versions of their Python dependencies. Everyone uses a language-specific package manager: either pip for most networked services/apps, or conda for data-science/ML stuff. And all the tutorials people copy/paste from are setting you up that way, too, so it’s how people get onboarded.

                                                                                  And generally that’s how you have to do it in Python, because the distros can’t and don’t package enough of PyPI to really handle all those use cases. They package only a subset of the most popular things, but many projects will have at least one or two dependencies that didn’t quite make the cut, and since you really don’t want to mix package management systems, you end up all-in on the language’s package manager.
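
                                                                                      i.e. the standard move is a per-project virtualenv with everything coming from PyPI:

                                                                                          # everything for the app comes from PyPI, nothing from apt/dnf
                                                                                          python -m venv .venv
                                                                                          .venv/bin/pip install -r requirements.txt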

                                                                                  I will mention, though, that some projects do put out LTS releases. Django, for example, does an LTS every third release and uses them as an explicit part of the compatibility policy (which, summarized, is that if you’re running on an LTS and raising no deprecation warnings, you can upgrade to the next LTS with no further code changes).
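
                                                                                      If I remember the Django upgrade docs right, checking the “no deprecation warnings” condition is just a matter of running the tests with warnings enabled:

                                                                                          # surface (Pending)DeprecationWarnings before the LTS jump
                                                                                          python -Wa manage.py test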

                                                                                  1. 2

                                                                                    In my own world (Python), I don’t know of anyone who installs distro-packaged versions of their Python dependencies.

                                                                                    Sure, this is because distro packages of Python libraries aren’t really for you. They’re only there as support for installable applications.

                                                                                        Your world doesn’t have the same constraints as everyone else’s. An ops person is much more likely to prefer distro packaging when possible because it reduces hassle, both at install time and at maintenance/upgrade time. This is where the stability guarantees of distributions, which upstreams often refuse to provide, add value.

                                                                                    1. 1

                                                                                      Sure, this is because distro packages of Python libraries aren’t really for you. They’re only there as support for installable applications.

                                                                                      Except my point is that for extremely common Python use cases like deploying a web application, the distro packages are inherently insufficient. Distros simply aren’t able to package all of PyPI, and getting mostly there for a given project doesn’t count, because you absolutely never want to mix distro packages and language-package-manager packages.

                                                                                      An ops person is much more likely to prefer distro packaging when possible

                                                                                      I understand that the traditional sysadmin position has always been to favor the system package manager and discourage language-specific package managers, but again: the distros can’t and don’t package enough stuff to make that feasible. If someone wants to deploy something that has 20 dependencies and 19 of them are in the distro package repository, the 20th will force the project back to the language-specific package manager.

                                                                                      1. 1

                                                                                        Ah, I think I was unclear - my bad. By “installable applications” I really meant, installable from the distro repository. I’m totally with you that you probably don’t want to mix e.g. Pip packages coming from PyPI and from the distro in the same Python application process. Distros can’t package every Python application out there but if it’s available in the repository, I’m installing from there instead of from Pip.

                                                                                            To be clear, I’m not talking about in-house developed webapps. I’m talking about existing projects that I might want to use for whatever reason, that have been open source for a while and are being packaged by distros. For deploying a proprietary webapp, I agree that using Python libraries from the distro repository is probably a bad idea. That’s what I meant by this:

                                                                                        distro packages of Python libraries aren’t really for you. They’re only there as support for [applications that the distro is packaging].

                                                                                        Hopefully that clears things up.

                                                                                        1. 1

                                                                                          I think the number of deployments of things that fit your definition is much lower, relative to the number of language-package-manager-backed deployments, than you think it is.

                                                                                          1. 1

                                                                                            And I obviously disagree :P but really I think we’re both biased by our worlds. Of course someone who spends all day in a language ecosystem would think that’s how most people deploy software. Likewise, of course someone sysadmin-y who really leans into distros’ advantages would think there’s a lot of silent people out there who feel the same. We’re both just guessing, and neither of us are making particularly educated guesses.

                                                                            2. 1

                                                                                but the frustrations in OP exist with archlinux just as well as with debian. what you describe, a general reluctance to upgrade, is also typical in distros (though there’s a pretty significant long tail to how out of date various distros are), but IMO it’s not inherently connected to the build system and packaging model this post is talking about.

                                                                              1. 1

                                                                                  I’m confused. Arch Linux and Debian do not share the same model of packaging Rust/Go/Javascript.

                                                                                  We have mostly given up and leave library updates and security issues up to upstream, as that is what the tooling currently encourages.

                                                                                1. 1

                                                                                    you’re right, i must have misremembered how e.g. python vs python-venv is packaged there. i do remember some early attempts at packaging rust applications in the AUR that looked much more like Debian’s approach.

                                                                            1. 7

                                                                              Heading for May Contain Hackers in the Netherlands on Thursday :) Packing and hoping Schiphol is not going to be super painful.

                                                                              1. 16

                                                                                The irony of complaining about restrictions imposed on an open source project by a freely-run package index is just too much.

                                                                                On the other hand, holy shit, PyPI lets you publish packages without MFA? I knew Python packaging was bad, but I didn’t know it was that bad.

                                                                                1. 19

                                                                                  On the other hand, holy shit, PyPI lets you publish packages without MFA? I knew Python packaging was bad, but I didn’t know it was that bad.

                                                                                  I don’t really understand this part. Several ecosystems do not enforce MFA and this has been common for years; why are we suddenly regarding it as an absolute requirement for publishing?

                                                                                  Several package managers read directly from git repos (pip included), and GitHub does not enforce anything either.
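
                                                                                  For instance, pip will happily install straight from a git URL, with no index (and no MFA) involved at all:

                                                                                      # placeholder URL; any reachable git repo with packaging metadata works
                                                                                      pip install "git+https://github.com/example/project.git@main"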

                                                                                  1. 6

                                                                                    Several ecosystems do not enforce MFA

                                                                                    Oh dear. Ohhhhhh dear.

                                                                                    I guess it’s been a long time since I used any package repository other than Clojars. That one is relatively small but it’s required MFA for a few years now so I just assumed the more professional ones had been … well, professional about it. I certainly wouldn’t trust a package repository that treated it as optional.

                                                                                    1. 2

                                                                                      Which professional ones are these that you’re thinking of?

                                                                                      1. 5

                                                                                        In my head, npm, Maven Central, Rubygems, and PyPi are all big enough to have multiple full-time professional staff employed to run them, but I’ll admit I haven’t looked into any of them in detail since I’m not a user of those languages. On the other hand I am a user of Debian, and I know that they don’t accept any packages that aren’t signed by their maintainers using GPG, which IMO is a big step up over just using MFA for login.

                                                                                        1. 8

                                                                                          npm currently requires MFA for maintainers of the top 500 most popular packages and will be expanding that. Sounds similar to PyPi. More info here: https://github.blog/2022-05-10-enhanced-2fa-experience-for-your-npm-account/

                                                                                          1. 7

                                                                                            Debian gets to do what they do because they’re not an open-to-the-public repository for publishing. To get a package into Debian you specifically have to go through a small set of trusted gatekeepers who use their credentials, and have been vetted by the Debian project.

                                                                                            A public repository that lets anyone with a package sign up and start distributing simply cannot do that level of verification and vetting on every single package uploader, which is why they don’t do it.

                                                                                            1. 3

                                                                                              Yeah, I understand the situation isn’t applicable to PyPi; it’s just an example of how having higher standards earns you a lot more trust.

                                                                                              1. 3

                                                                                                Maybe Debian-level packaging curation is the solution to the recent high-profile open-source supply chain problems. Drew DeVault certainly thinks so. Unlike him, though, I think language-specific package managers have their place, particularly because they support platforms that aren’t free Unix-likes. Perhaps PyPI, npm, cargo, etc. should have subsets of their packages that are heavily curated as in the distro package managers, and some of us could start restricting our dependencies to those subsets. I would contribute some money to help make this happen.

                                                                                              2. 4

                                                                                                In my head, npm, Maven Central, Rubygems, and PyPi are all big enough to have multiple full-time professional staff employed to run them, but I’ll admit I haven’t looked into any of them in detail since I’m not a user of those languages.

                                                                                                I’m unsure about Maven, but the rest of them are underfunded infra mostly driven by volunteers.

                                                                                                1. 3

                                                                                                  Huh. I thought I remembered reading a while back about npm, Inc. being a thing, and how they took venture capital or something.

                                                                                                  1. 6

                                                                                                    NPM Inc is owned by GitHub now.

                                                                                                    1. 1

                                                                                                      Ah, I forgot about this actually :)

                                                                                        2. 6

                                                                                          As much as I disagree with the post, I think it’s pretty fair and not ironic. They acknowledge that the change is in the index and its users’ interest. They just want a different solution and make their opinion public. If more people agree, maybe someone will be motivated enough to run it.

                                                                                        1. 1

                                                                                          Starting in 2022 for Secured-core PCs it is a Microsoft requirement for the 3rd Party Certificate to be disabled by default. This means that for any of these Lenovo platforms shipped with Windows preinstalled an extra step is needed to allow Linux to boot with secure boot enabled.

                                                                                          This is actually wrong. I’m installing Arch on a new ThinkPad T14 Gen 3 with no Windows preinstalled. The option to enable the 3rd party UEFI certificate is there, but it’s disabled by default. However, Secure Boot is also disabled.

                                                                                          1. 2

                                                                                            Which part exactly are you claiming is wrong?

                                                                                            1. 1

                                                                                              They claim it’s only done for machines with Windows preinstalled. This is not the case.

                                                                                              1. 2

                                                                                                That’s not really what the text is saying though, but I agree that it’s a bit carelessly worded. It’s just mentioning an extra step for the specific case of Windows being preinstalled; it’s not actually saying anything about the non-preinstalled case.

                                                                                                1. 1

                                                                                                  It’s fair to read the text as inclusive. If this was the default for every laptop there would be no need to specify “with Windows preinstalled” in the text.

                                                                                                  1. 1

                                                                                                    I disagree; the first sentence says that it applies to all “Secured-core PCs”. The second sentence gives additional information about the specific sub-case of computers with Windows preinstalled, where you need to perform an extra step to boot into Linux while leaving Secure Boot enabled.

                                                                                                    So again, while I think it could have been worded more carefully, it’s entirely self-consistent.

                                                                                          1. 17

                                                                                            Add that on top of many ThinkPads being bricked when a user installs their own Secure Boot key, and some reports of warranty claims being denied because this was “user-induced”.

                                                                                            1. 3

                                                                                              Oof. That’s bad. I actually preordered a Lenovo device recently (due to it being the only device with the specs I needed). I’ve never bothered with my own keys–and definitely won’t after reading this.

                                                                                              1. 2

                                                                                                This isn’t actually the same kind of issue though. The issue is that there is some OpROM from some PCI device on most modern ThinkPads which is signed by Microsoft. As these OpROMs are loaded and validated as part of the UEFI boot chain, failing to verify them results in that piece of hardware failing to load.

                                                                                                In most cases this would apply to external GPUs, NVMes and stuff. I’m not sure what piece of hardware is the issue in the X13 and T14 case though.

                                                                                                The solution to this issue is to enroll the appropriate hash from the TPM eventlog into the db, or to keep the Microsoft 3rd party UEFI CA along with your self-signed keys.
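
                                                                                                If you want to go the eventlog route, tpm2-tools can show you what was actually measured; something like:

                                                                                                    # dump the firmware's TPM event log; OpROMs are measured into PCR 2
                                                                                                    tpm2_eventlog /sys/kernel/security/tpm0/binary_bios_measurements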

                                                                                              1. 6

                                                                                                I read this yesterday and wrote a new shell command later the same evening. Today I tried that command and my laptop subsequently froze after a few minutes.

                                                                                                $ cat /home/fox/.local/bin/grobi
                                                                                                #!/usr/bin/bash
                                                                                                grobi -C "/home/fox/.config/grobi/$(uname -n).conf" $@
                                                                                                

                                                                                                You can probably guess the issue and how adding a comma to it would have helped :)

                                                                                                1. 1

                                                                                                  Took me a sec… oh no

                                                                                                  1. 1

                                                                                                    Been using Linux for 10+ years. Still can’t avoid that semi-annual fork bomb from time to time