1. 7

    “Only a person can have rights. A machine cannot.”

    This seems like a reasonable barrier for now. Once machines demonstrate that they should have civil rights, then the issue can be revisited.

    1. 1

      Once machines demonstrate that they should have civil rights, then the issue can be revisited.

      How would machines demonstrate that?

      1. 3

        In the best timelines, they would politely ask. (I have retrieved the full fictional speech from the Internet Archive, for those who want to read it.)

        1. 1

          I’m also curious, but to be honest, I’m not sure if I would want to learn that empirically ;)

        2.  

          Obviously this is talking about UK law and not US law, but in the US the definition of “a person” can be stretched beyond what you’d normally expect.

          1.  

            Legal personhood is sorely needed; that article is missing a lot of the “why” behind the changes: https://www.upcounsel.com/corporate-personhood

            Without it, technically the government can just seize corporate assets as an example.

            Also, legal personhood != human person, even under the law; they are distinct. Note that this all comes from my lawyer drinking buddy, but the one time I asked him about it he sighed a bit at how the nuances are a lot more complex than “corporations are people under the law”, as if it were some sort of Soylent Green situation. The Tillman Act is a good example of why you want personhood for businesses. Unless we want to give up the ability to collectively bargain as a group, I would argue that is a good thing.

            1.  

              Out of curiosity, why is corporate personhood necessary to prevent arbitrary seizure of corporate assets given that a natural person (human) owns the corporation and therefore owns the assets? It seems like that would have been a much simpler way of making that work. So it feels like there must be a lot more to it.

        1. 2

          It seems like the idea of putting a version number in a header was already well-established when IP was first defined. I wonder who was the first to come up with the idea of versioning?

          1. 3

            I imagine versioning 0.0.1 probably revolved around file.txt, file_2.txt, file_FINISHED.txt, file_FINISHED(1).txt, file_FINAL_REAL.txt ad infinitum…

            1. 9

              It’s quite possible they wrote the specifications on a DEC system with automatic file versioning. Probably more like DOC.TXT;69 incrementing as they make new versions.

              1. 2

                More information on DEC automatic file versioning: Retrocomputing Stack Exchange – Filesystems with versioning – answer

                1. 1

                  Fun! I didn’t realize that was a thing!

            1. 46

              Rocket seems like a really impressive piece of work.

              Yet, I decided against using it when I looked at it a while ago due to all the rocketry references. The cognitive overhead of translating between punny rocketry names and their actual concepts just became too much, especially while trying to learn. You don’t “start” the “server”; you “launch” the “rocket”. You don’t have “middleware”; you have “fairings” (an analogy which I don’t really understand even with the knowledge of what a middleware is and what a fairing is). You end up with really weird nonsensical combinations of words; for example, you have the “shield fairing” instead of the “security middleware”. Instead of your server being up and ready for incoming connections, your “rocket” is in “orbit”.

              I assume Rocket is great for a lot of people. I may even come back to it at some point. But I don’t think it’s doing itself any favors in terms of learnability with these terms.

              1. 21

                This pretty much made me never understand Homebrew in depth. I’ll do it one day if I have to, but between kegs, bottles, tapping, … I lost interest in it apart from being a tool I can’t avoid.

                1. 12

                  I 100% agree. I sometimes read “how to install X with Homebrew” guides, and I feel completely helpless when they ask me to pour a bottle, tap a keg and brew a cask or whatever. I usually prefer to actually understand the commands I’m running, but Homebrew reduces me to blindly copy/pasting into the terminal because I literally don’t even understand the words used by the commands.

                  It’s a bit similar with Python’s wheels and eggs, but at least pip uses normal verbs with weird nouns rather than punny themed verbs and nouns.

                  1. 1

                    Happy MacPorts user here; maybe give it a try.

                    1. 1

                      I find MacPorts is a lot more tasteful/better mouthfeel, but some of the packages aren’t as well maintained, unfortunately.

                      1. 1

                        The things I depend on work pretty well, but I know what you mean. I have also contributed some patches and the process is okay, but the Tcl-based language is definitely a strange thing to work with.

                  2. 13

                    I couldn’t agree more! I worked for a company that had an internal tool with names like that and it was obnoxious. I didn’t work on the tool, I just had to use it occasionally, and every time I did, I needed to re-learn which terms mapped to which concepts. Barf.

                    The problem with inside jokes is that they’re only funny to insiders. This is much the same.

                    1. 4

                      If there are cargo and crates, why not rockets and fairings? I lost count of the number of times I’ve mistyped “crate” as “create”.

                      I always wonder whether whoever came up with the cargo/crate terminology thinks, in retrospect, that it caused more unnecessary confusion (and mistyping) than it was worth, and that more traditional terminology, such as “package” instead of “crate” and something like rustp (for Rust package manager, by analogy with rustc) instead of cargo, would have been a better (if boring) way to do it.

                      1. 9

                        I get your point, but the only thing I actually needed to remember when using cargo was “crate is the word for module/package”. That’s hardly a fair comparison.

                        1. 5

                          Thankfully it’s cargo run and not cargo load or anything like that.

                        2. 2

                          Agreed. It’s cute, but painful. I do hope they rethink their terminology.

                        1. 4

                          One fun thing I recently ran into is that if you want a “clean” Docker image for your application (because that’s the easiest way to distribute Python stuff), you end up needing a requirements.txt file even if you’re using Pipenv because Pipenv itself pulls in a bunch of dependencies. This is fairly straightforward to do with a multi-stage build, but it’s still annoying.
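
                          A rough sketch of that multi-stage approach, assuming a hypothetical app.py entry point and a recent Pipenv (older releases spell the export step pipenv lock -r instead of pipenv requirements):

```dockerfile
# Stage 1: use Pipenv only to export a requirements.txt
FROM python:3.11-slim AS deps
RUN pip install pipenv
COPY Pipfile Pipfile.lock ./
RUN pipenv requirements > requirements.txt

# Stage 2: the "clean" runtime image, with no Pipenv installed
FROM python:3.11-slim
WORKDIR /app
COPY --from=deps requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```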

                          1. 10

                            Q: Why choose Docker or Podman over Nix or Guix?

                            Edit, with some rephrasing: why run containers rather than a binary cache? They can both do somewhat similar things, creating a reproducible build (so long as you aren’t running apt upgrade in your container’s config file) and laying out how to glue your different services together, but is there a massive advantage of one over the other?

                            1. 28

                              I can’t speak for the OP, but for myself there are three reasons:

                              1. Docker for Mac is just so damn easy. I don’t have to think about a VM or anything else. It Just Works. I know Nix works natively on Mac (I’ve never tried Guix), but while I do development on a Mac, I’m almost always targeting Linux, so that’s the platform that matters.

                              2. The consumers of my images don’t use Nix or Guix, they use Docker. I use Docker for CI (GitHub Actions) and to ship software. In both cases, Docker requires no additional effort on my part or on the part of my users. In some cases I literally can’t use Nix. For example, if I need to run something on a cluster controlled by another organization there is literally no chance they’re going to install Nix for me, but they already have Docker (or Podman) available.

                              3. This is minor, I’m sure I could get over it, but I’ve written a Nix config before and I found the language completely inscrutable. The Dockerfile “language”, while technically inferior, is incredibly simple and leverages shell commands I already know.

                              1. 15

                                I am not a nix fan, quite the opposite, I hate it with a passion, but I will point out that you can generate OCI images (docker/podman) from nix. Basically you can use it as a Dockerfile replacement. So you don’t need nix deployed in production, although you do need it for development.
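
                                For the curious, the Nix-to-OCI route usually goes through nixpkgs’ dockerTools; a minimal sketch, where the image name, tag, and packaged program are all placeholders:

```nix
# default.nix — build an OCI image containing just hello and its closure
{ pkgs ? import <nixpkgs> {} }:
pkgs.dockerTools.buildLayeredImage {
  name = "hello-image";   # placeholder image name
  tag = "latest";
  contents = [ pkgs.hello ];
  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
```

                                Running nix-build on this produces a tarball that docker load (or podman load) can import, so consumers never have to know Nix was involved.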

                                1. 8

                                  As someone who is about to jump into NixOS, I’d love to read more about why you hate nix.

                                  1. 19

                                    I’m not the previous commenter but I will share my opinion. I’ve given nix two solid tries, but both times walked away. I love declarative configuration and really wanted it to work for me, but it doesn’t.

                                    1. The nix language is inscrutable (to use the term from a comment above). I know a half dozen languages pretty well and still found it awkward to use.
                                    2. In order to make package configs declarative, the config options need to be ported to the nix language. This inevitably means they’ll be out of date, or maybe missing a config option you want to set.
                                    3. The docs could be much better, but this is typical. You generally resort to looking at the package configs in the source repo.
                                    4. Nix packages, because of the design of the system, have no connection to real package versions. This is the killer for me, since the rest of the world works on these version numbers. If I want to upgrade from v1.0 to v1.1 there is no direct correlation in nix except for a SHA. How do you find that out? Look at the source repo again.
                                    1. 4

                                      This speaks to my experience with Nix too. I want to like it. I get why it’s cool. I also think the language is inscrutable (for Xooglers, the best analogy is borgcfg) and the thing I want most is to define my /etc files in their native tongue under version control and for it all to work out rather than depend on Nix rendering the same files. I could even live with Nix-the-language if that were the case.

                                      1. 3

                                        I also think the language is inscrutable (for Xooglers, the best analogy is borgcfg)

                                        As a former Google SRE, I completely agree—GCL has a lot of quirks. On the other hand, nothing outside Google compares, and I miss it dearly. Abstracting complex configuration outside the Google ecosystem just sucks.

                                        Yes, open tools exist that try to solve this problem. But only gcl2db can load a config file into an interactive interface where you can navigate the entire hierarchy of values, with traces describing every file:line that contributed to the value at a given path. When GCL does something weird, gcl2db will tell you exactly what happened.

                                      2. 2

                                        Thanks for the reply. I’m actually not a huge fan of DSLs, so this might be swaying me away from setting up NixOS. I have a VM set up with it, and tbh the thought of trawling through nix docs to figure out the magical phrase to do what I want does not sound like much fun. I’ll stick with Arch for now.

                                        1. 6

                                          If you want the nix features but a general-purpose language, guix is very similar but uses Scheme for configuration.

                                          1. 1

                                            I would love to use Guix, but the lack of nonfree packages is a killer, as getting Steam running is a must. There’s no precedent for it being used in the game jam communities I participate in, whereas Nix has a sizable following.

                                            1. 2

                                              So use Ubuntu as the host OS for Guix if you need Steam to work. Guix runs well on many OSes.

                                      3. 10

                                        Sorry for the very late reply. The problem I have with nixos is that it’s anti-abstraction in the sense that I elaborated on here. Instead it’s just the ultimate wrapper.

                                        To me, the point of a distribution is to provide an algebra of packages that’s invariant in changes of state. Or to reverse this idea, an instance of a distribution is anything with a morphism to the category of packages.

                                        Nix (and nixos) is the ultimate antithesis of this idea. It’s not a morphism, it’s a homomorphism. The structure is algebraic, but it’s concrete, not abstract.

                                        People claim that “declarative” configuration is good, and it’s hard to attack such a belief, but people don’t really agree on what it really means. In Haskell it means that expressions have referential transparency, which is a good thing, but in other contexts when I hear people talk about declarative stuff I immediately shiver expecting the inevitable pain. You can “declare” anything if you are precise enough, and that’s what nix does, it’s very precise, but what matters is not the declarations, but the interactions and in nix interaction means copying sha256 hashes in an esoteric programming language. This is painful and as far away from abstraction as you can get.

                                        Also notice that I said packages. Nix doesn’t have packages at all. It’s a glorified build system wrapper for source code. Binaries only come as a side effect, and there are no first class packages. The separation between pre-build artefacts and post-build artefacts is what can enable the algebraic properties of package managers to exist, and nix renounces this phase distinction with prejudice.

                                        To come to another point, I don’t like how Debian (or you other favorite distribution) chooses options and dependencies for building their packages, but the fact that it’s just One Way is far more important to me than a spurious dependency. Nix, on the other hand, encourages pets. Just customize the build options that you want to get what you want! What I want is a standard environment, customizability is a nightmare, an anti-feature.

                                        When I buy a book, I want to go to a book store and ask for the book I want. With nix I have to go to a printing press and provide instructions for printing the book I want. This is insanity. This is not progress. People say this is good because I can print my book into virgin red papyrus. I say it is bad exactly for the same reason. Also, I don’t want all my prints to be dated January 1, 1970.

                                    2. 8

                                      For me personally, I never chose Docker; it was chosen for me by my employer. I could maybe theoretically replace it with podman because it’s compatible with the same image format, which Guix (which is much better designed overall) is not. (But I don’t use the desktop docker stuff at all so I don’t really care that much; mostly I’d like to switch off docker-compose, which I have no idea whether podman can replace.)

                                      1. 3

                                      FWIW Podman does have podman-compose functionality, but it works differently: it uses the k8s format under the hood, so in that sense some people prefer it.

                                      2. 2

                                      This quite nicely sums it up for me 😄, and more eloquently than I could put it.

                                        1. 2

                                          If you’re targeting Linux why aren’t you using a platform that supports running & building Linux software natively like Windows or even Linux?

                                          1. 12

                                          … to call WSL ‘native’, compared to running containers etc. via VMs on non-Linux OSes, is a bit weird.

                                            1. 11

                                              I enjoy using a Mac, and it’s close enough that it’s almost never a problem. I was a Linux user for ~15 years and I just got tired of things only sorta-kinda working. Your experiences certainly might be different, but I find using a Mac to be an almost entirely painless experience. It also plays quite nicely with my iPhone. Windows isn’t a consideration, every time I sit down in front of a Windows machine I end up miserable (again, YMMV, I know lots of people who use Windows productively).

                                              1. 3

                                                Because “targeting Linux” really just means “running on a Linux server, somewhere” for many people and they’re not writing specifically Linux code - I spend all day writing Go on a mac that will eventually be run on a Linux box but there’s absolutely nothing Linux specific about it - why would I need Linux to do that?

                                                1. 2

                                                  WSL2-based containers run a lightweight Linux install on top of Hyper-V. Docker for Mac runs a lightweight Linux install on top of xhyve. I guess you could argue that this is different because Hyper-V is a type-1 hypervisor, whereas xhyve is a type-2 hypervisor using the hypervisor framework that macOS provides, but I’m not sure that either really counts as more ‘native’.

                                                  If your development is not Linux-specific, then XNU provides a more complete and compliant POSIX system than WSL1, which are the native kernel POSIX interfaces for macOS and Windows, respectively.

                                              2. 9

                                                Prod runs containers, not Nix, and the goal is to run the exact same build artifacts in Dev that will eventually run in Prod.

                                                1. 8

                                                  Lots of people distribute dockerfiles and docker-compose configurations. Podman and podman-compose can consume those mostly unchanged. I already understand docker. So I can both use things other people make and roll new things without using my novelty budget for building and running things in a container, which is basically a solved problem from my perspective.

                                                  Nix or Guix are new to me and would therefore consume my novelty budget, and no one has ever articulated how using my limited novelty budget that way would improve things for me (at least not in any way that has resonated with me).

                                                  Anyone else’s answer is likely to vary, of course. But that’s why I continue to choose dockerfiles and docker-compose files, whether it’s with docker or podman, rather than Nix or Guix.
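
                                                  To illustrate the “mostly unchanged” point: a typical compose file like this (service names and images are made up) runs under both docker compose up and, in most cases, podman-compose up as-is:

```yaml
# docker-compose.yml
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  cache:
    image: redis:7
```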

                                                  1. 5

                                                    Not mentioned in other comments, but you also get process/resource isolation by default on docker/podman. Sure, you can configure service networking, cgroups, and namespaces on nix yourself, just like on any other system, and set up the relevant network proxying. But getting that prepackaged and on by default is very handy.

                                                    1. 2

                                                      You can get a good way there without much fuss with using the Declarative NixOS containers feature (which uses systemd-nspawn under the hood).
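
                                                      A minimal sketch of that feature in a NixOS configuration (the nginx service is just an illustrative choice):

```nix
# configuration.nix — an isolated systemd-nspawn container, declared in Nix
containers.webserver = {
  autoStart = true;
  privateNetwork = true;            # gets its own network namespace
  hostAddress = "192.168.100.10";
  localAddress = "192.168.100.11";
  config = { config, pkgs, ... }: {
    services.nginx.enable = true;
    networking.firewall.allowedTCPPorts = [ 80 ];
  };
};
```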

                                                    2. 4

                                                      I’m not very familiar with Nix, but I feel like a Nix-based option could do for you what a single container could do, giving you the reproducibility of environment. What I don’t see how to do is something comparable to creating a stack of containers, such as you get from Docker Compose or Docker Swarm. And that’s considerably simpler than the kinds of auto-provisioning and wiring up that systems like Kubernetes give you. Perhaps that’s what Nix Flakes are about?

                                                      That said I am definitely feeling like Docker for reproducible developer environments is very heavy, especially on Mac. We spend a significant amount of time rebuilding containers due to code changes. Nix would probably be a better solution for this, since there’s not really an entire virtual machine and assorted filesystem layering technology in between us and the code we’re trying to run.

                                                      1. 3

                                                        Is Nix a container system…? I thought it was a package manager?

                                                        1. 3

                                                          It’s not, but I understand the question as: “you can run a well-defined nix configuration which includes your app, or a container with your app; they’re both reproducible, so why choose one over the other?”

                                                        2. 1

                                                          It’s possible to generate Docker images using Nix, at least, so you could use Nix for that if you wanted (and users won’t know that it’s Nix).

                                                          1. 1

                                                            These aren’t mutually exclusive. I run a few Nix VMs for self-hosting various services, and a number of those services are docker images provided by the upstream project that I use Nix to provision, configure, and run. Configuring Nix to run an image with hash XXXX from Docker registry YYYY and such-and-such environment variables doesn’t look all that different from configuring it to run a non-containerized piece of software.

                                                          1. 6

                                                            I expected a “get off my lawn” old-person rant, but I was pleasantly surprised to find a balanced, rational discussion about content on the web. 10/10, would click again.

                                                            I entirely agree that content should be displayed as content. Honestly, it’s so much easier to do, I’m really not sure why anyone uses JavaScript to display a blog or whatever.

                                                            1. 10

                                                              I can’t believe a company like SalesForce doesn’t eye licenses like a hawk. That’s really weird, and poor form.

                                                              1. 19

                                                                It doesn’t surprise me at all. The only places I have worked that cared about the licences of dependencies were banks. Everywhere else, using a library has been entirely at the programmer’s discretion, and the programmer usually does not care.

                                                                This is how OpenWRT was born.

                                                                1. 3

                                                                  Maybe it’s a “software industry” thing? All three telecommunications businesses I’ve worked for have been very stringent about licensing and choosing the right licenses for our code and imported libraries, etc.

                                                                  1. 7

                                                                    I think that mindset of:

                                                                    It is available on the internet, so it must be free to use

                                                                    is quite popular outside of the software industry as well. Unfortunately, people are quite hard to educate about intellectual property.

                                                                    1. 2

                                                                      More of a company size thing. At HP unusual license approvals had to come from the legal dept. And that’s for a project which has MIT, Apache2 and a few others pre-approved. I’m sure there were other projects which needed confirmation of everything.

                                                                    2. 1

                                                                      Google cares very much about licenses.

                                                                    3. 10

                                                                      I once told a room of OSS policy wonks that my Big Tech Co had no one in charge of licensing, or knowing what we use, or checking for compliance. They were flabbergasted, as though this were not the norm. I have worked at many sizes of company; it was always the norm. You want a dependency, you add it to the Gemfile or whatever and push, the end.

                                                                      1. 3

                                                                        In my experience unless an engineer makes the company lawyers aware of the risk here they won’t even know to think about it. I make a point of raising it everywhere I work and institute some amount of review on my teams for licensing. But it’s not even on the radar of most company lawyers.

                                                                        1. 1

                                                                          I worked at a company that had a policy, but there was no formal enforcement mechanism. Devs were supposed to check, but this didn’t happen consistently. As a practical matter, though, it really wasn’t a problem. Just before I left the lawyers started asking questions and I actually built the initial version of an enforcement system. As it turned out, basically all of our dependencies were Apache, BSD, or MIT licensed (IIRC).

                                                                        2. 2

                                                                          Keep in mind licensing isn’t the only part though.

                                                                          However, adding monkey patching to Go is not a reasonable goal while respecting anything close to “best practices.”

                                                                          I think if you start out with something unreasonable, such as working against the very programming language you use, why would you stop to think about its license?

                                                                          If a company like Linksys didn’t care about licensing why would a company like SalesForce?

                                                                        1. 9

                                                                          When I have to implement something tricky I often write out everything that needs to happen, in plain English, as “useless” comments and then write the code that implements that line of English below it. Once every line of English has some code below it, I should be done. I run it, or run the tests, or whatever, and once I’m convinced that it works, I delete the comments (or turn them into “useful” comments, if that seems warranted).
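
                                                                          Mid-way through, that workflow might look like this; the task (parsing a tiny key=value config format) is an invented example, with each line of English sitting above the code that implements it:

```python
def parse_config(text):
    # Start with an empty dict to hold the results.
    result = {}
    # Go through the input one line at a time.
    for line in text.splitlines():
        # Skip blank lines and comment lines starting with '#'.
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue
        # Split on the first '=' only, so values may contain '='.
        key, _, value = stripped.partition("=")
        # Store the trimmed key and value.
        result[key.strip()] = value.strip()
    return result

print(parse_config("a = 1\n# comment\nb = x=y"))  # {'a': '1', 'b': 'x=y'}
```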

                                                                          1. 1

                                                                            I do this sometimes also, although not as often as I used to. I’ve always wanted to be able to leave them in though, and also a first-class way to have comments linked to a “block” of code (a group of statements/expressions, not a semantic block).

                                                                          1. 8

                                                                            Why do we need new GUI implementations? From an end user’s standpoint, they result in app UX that’s incompatible with other apps and often missing features like accessibility or platform integration. (Viz: everyone’s complaints about the UX of Electron apps like Slack.) From a developer’s standpoint, a new framework is yet another complex new API to learn, and likely means adding hacks to support platform-specific features.

                                                                            (Obviously if you’re building your own OS you’ll need a new GUI framework. But otherwise, just let me use UIKit or AppKit, kthxbye.)

                                                                            1. 9

                                                                              My understanding is that “cross-platform desktop apps” are an unsolved problem. It’s not like we have N equally good alternatives. If you want to deliver a high-quality, UI-intensive, cross-platform desktop app which is core to your business, your choices are:

                                                                              • use Qt with either unsafe C++ or dynamically typed Python
                                                                              • build your own UI framework on top of Skia
                                                                              • compromise on high quality and pick electron/Swing/JavaFX/sciter
                                                                              • compromise on cross-platform and use mac stuff on mac, win stuff on win, and ¯\_(ツ)_/¯ on Linux
                                                                              1. 2

                                                                                Flutter basically chose your 2nd option, but then built a generic widget framework on top of it. It has its shortcomings, but it’s certainly an interesting project in this space.

                                                                                1. 1

                                                                                  Flutter is another Swing/JavaFX/Electron/Sciter, but with a less popular programming language.

                                                                                2. 1

                                                                                  As a developer it’s pretty obvious I should choose #3. As a user, anything other than #4 makes it pretty obvious the developers value my experience significantly less than their time.

                                                                                3. 6

                                                                                  It would be nice to have a production-quality choice for a cross-platform GUI besides Electron or maybe Qt.

                                                                                  1. 5

                                                                                    Both serve to demonstrate that there is no such thing as cross-platform GUI.

                                                                                  2. 2

                                                                                    I suspect the major reason is a unified cross-platform development experience. I think the mobile space has shown that it’s too costly to maintain separate code bases for iOS and Android. React Native has proven that many (most?) apps can be written once and deployed on both.

                                                                                    1. 1

                                                                                      I don’t agree with that.

                                                                                      Providing separate native mobile applications is not a big problem, if done right. It means isolating core business logic to a layer that can be treated almost like a line-by-line translation (we use Kotlin and Swift), and keep UI layer similar where it can be, but embrace platform native solutions where appropriate.

                                                                                      I think the allure of react-native is more than just the cross platform part. It’s about NodeJS - it attracts developers from a web/Node background.

                                                                                      1. 3

                                                                                        When I worked at a major music streaming company, we wrote cross-platform libraries for the non-UI bits, and built native UIs on top of that.

                                                                                        So most code was shared, but users didn’t have to suffer with non-native UI.

                                                                                    2. 2

                                                                                      My experience with cross-platform applications makes me think a common backend with different frontends (assuming you can glue them with a common interface; UIKit is radically different from Win32) is the better alternative; but it is economically unviable (why spend money twice or thrice when Electron exists and lets you use cheap webdevs?) as well as being poison for devs (why do I need to develop the same thing thrice? what about Linux? clearly I could reimplement all this myself) who don’t really “get” platforms or UX.

                                                                                      1. 1

                                                                                        Check out the retrospective on the xi-editor project, which tried to use native GUIs and found that they’re overrated for some kinds of applications. This is what led to the Druid project.

                                                                                        1. 6

                                                                                          That’s a pretty… opinionated, I guess? retrospective. I mean:

                                                                                          When doing some performance work on xi, I found to my great disappointment that performance of these so-called “native” UI toolkits was often pretty poor, even for what you’d think of as the relatively simple task of displaying a screenful of text. A large part of the problem is that these toolkits were generally made at a time when software rendering was a reasonable approach to getting pixels on screen. These days, I consider GPU acceleration to be essentially required for good GUI performance.

                                                                                          On the one hand, that sounds plausible.

                                                                                          On the other hand, considering that, between macOS and Windows, we have text editors and IDEs that have hundreds of millions of happy users, which are very efficiently used to write software worth billions of dollars, every day, maybe these “so-called ‘native’ UI toolkits” aren’t that bad. Or, okay, maybe they’re not too good, but one developer’s failed attempt to use them to write an editor up to their standard is probably not enough of a sample to say it’s high time we replaced them with something better.

                                                                                          The author then goes on to discover that their own high-performance implementation doesn’t have any of the benefits of native toolkits. Well, yeah, like virtually all engineering problems, this one ends with a trade-off, too. Beating the performance of 30-year-old GUI toolkits with a small toolkit written today is probably not that complicated. But beating the reliability of a 30-year-old toolkit, even without feature parity, is a multi-year project. 30 years of bugfixes is no mean feat.

                                                                                      1. 1

                                                                                        Global warming should always cause the Earth’s rotation to slow down due to shift of mass from poles to equator. Whatever is causing the speedup, I don’t think it is global warming.

                                                                                        1. 3

                                                                                          I was curious and googled it. I found a Forbes article which links to this research article. I don’t quite comprehend the language in the research article, but Forbes’ dumbed-down version is that as glaciers melt on and near the poles, there’s less weight on the ground, which makes the ground “rebound” and so the earth circularizes. No doubt the effect you describe also exists, mass trapped in the form of polar ice melts and moves towards the equator, but (according to my understanding of Forbes’ explanation of the paper) that effect is smaller than the counteracting effect from the rising ground.

                                                                                          EDIT: Here’s a non-Forbes source, although it doesn’t contain an explanation of the mechanism: https://phys.org/news/2021-01-earth-faster.html

                                                                                          They [planetary scientists] also have begun wondering if global warming might push the Earth to spin faster as the snow caps and high-altitude snows begin disappearing.

                                                                                          1. 1

                                                                                            In case you’re interested, Hudson Bay is one of the most well-known examples of the “rebound” effect… https://earthobservatory.nasa.gov/images/147405/rebounding-in-hudson-bay.

                                                                                          2. 2

                                                                                            I was also curious about this, as that matches my intuition as well.

                                                                                            This article is about the shift in the axis of rotation, not the speed, but it outlines some unintuitive effects of climate change on the Earth’s distribution of mass:

                                                                                            https://climate.nasa.gov/news/2805/scientists-id-three-causes-of-earths-spin-axis-drift/

                                                                                            To summarize:

                                                                                            • The Earth is not round; the poles are flattened, because the weight of the glaciers squishes them down. When they melt, the earth beneath them rises, and the planet becomes more round.
                                                                                            • Most of the melting glaciers are in Greenland, which is 45° from the North Pole, so the mass redistribution is not as clear cut as it seems.
                                                                                            • The mantle might be doing convection stuff that I cannot begin to understand. Cursory research implies this is a recent theory and maybe not totally accepted or understood. (Certainly not by me. I have no idea here.)

                                                                                            Note that the article doesn’t claim any of these affect rotation speed, but they’re interesting factors for my mental model of the changing Earth. Other sources I found cite the first reason (elongating Earth) as the primary driver for the speedup of the Earth’s rotation.

                                                                                            Another possible factor I found is that high-altitude snow and ice is melting, distributing mass closer to the center of the Earth (which, you know, angular momentum or something). It seems incredible to me that that would have a measurable effect – and I didn’t look for evidence that it does – but it’s another interesting thing that I would not have guessed.

                                                                                            So yeah; it’s complicated. Some aspects might slow the Earth in isolation; some might speed it up. Taken together it seems like the consensus is a net speedup.

                                                                                            Obligatory caveat: I probably spent more time writing this comment than actually researching this topic, so this information comes with no warranty express or implied.

                                                                                            (I accidentally replied to mort’s sibling comment, which was posted about the same time; I deleted it and re-posted it here as I had not actually read it at the time I wrote this. Perils of composing in a separate editor and pasting it in.)

                                                                                            1. 1

                                                                                              I have to think that moving mass from the poles to the equator is a 6,000-mile journey away from the axis of rotation, which should overwhelm elevation changes and isostatic rebound. Also, the Hudson Bay area is still rebounding slowly from glaciation, maybe a few feet over ten thousand years. The rocks in Greenland haven’t had time to rebound from the last two decades of melting.

                                                                                          1. 24

                                                                                            I agree with most of what’s said in the article. However, it misses the forest for the trees a bit, at least considering the introduction. Programs that take seconds to load or display a list are not just ignoring cache optimizations. They’re using slow languages (or language implementations, for the pedants out there) like cpython where even the simplest operations require a dictionary lookup, or using layers and layers of abstractions like electron, or making http requests for the most trivial things (I suspect it’s what makes slack slow; I know it’s what makes GCP’s web UI absolutely terrible). A lot of bad architectural choices, too.

                                                                                            Cache optimizations can be important but only as the last step. There’s a lot to be fixed before that, imho.

                                                                                            1. 16

                                                                                              Even beyond that, I think there are more baseline things going on: Most developers don’t even benchmark or profile. In my experience the most egregious performance problems I’ve seen have been straight-up bugs, and they don’t get caught because nobody’s testing. And the profiler basically never agrees with what I would have guessed the problem was. I don’t disagree with the author’s overall point, but it’s rare to come across a program that’s slow enough to be a problem that doesn’t have much lower hanging fruit than locality issues.

                                                                                              1. 3

                                                                                                I agree so much! I’d even say that profiling is one half of the problem (statistical profiling, that is, like perf). The other half is tracing, which nowadays can be done with very convenient tools like Tracy or the chrome trace visualizer (“catapult”) if you instrument your code a bit so it can spit out json traces. These give insights into where time is actually spent.
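                                                                                                The catapult trace format mentioned above is just JSON; here is a minimal sketch of emitting “complete” events in it (the `traced` helper is hypothetical, not any particular library’s API), which chrome://tracing or Perfetto can load:

```python
import json
import time

events = []  # accumulates events in the Chrome trace ("catapult") format

def traced(name, fn, *args):
    """Run fn(*args) and record a 'complete' ("ph": "X") event for it."""
    start = time.perf_counter()
    result = fn(*args)
    duration_us = (time.perf_counter() - start) * 1e6
    # Timestamps and durations are in microseconds in this format.
    events.append({"name": name, "ph": "X", "pid": 1, "tid": 1,
                   "ts": start * 1e6, "dur": duration_us})
    return result

total = traced("sum", sum, range(100_000))
trace_json = json.dumps({"traceEvents": events})
```

                                                                                                Writing `trace_json` to a file and opening it in the trace viewer shows exactly where the time went.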

                                                                                                1. 1

                                                                                                  Absolutely. Most developers only benchmark if there’s a serious problem, and most users are so inured to bad response times that they just take whatever bad experience they receive and try to use the app regardless. Most of the time it’s some stupid thing the devs did that they didn’t realize and didn’t bother checking for (oops, looks like we’re instantiating this object on every loop iteration, look at that.)
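                                                                                                  To illustrate that (with hypothetical `slow`/`fast` functions, not from any real codebase), the standard-library profiler makes exactly this kind of per-iteration bug visible:

```python
import cProfile
import io
import pstats
import re

def slow(lines):
    count = 0
    for line in lines:
        # re.compile is called on every iteration; CPython caches compiled
        # patterns, but the lookup/call overhead still shows in a profile.
        if re.compile(r"\d+").search(line):
            count += 1
    return count

def fast(lines):
    pattern = re.compile(r"\d+")  # compiled once, outside the loop
    return sum(1 for line in lines if pattern.search(line))

lines = ["abc", "a1b", "xyz"] * 10_000
assert slow(lines) == fast(lines)

# Profile the slow version; the repeated compile calls appear in the report.
profiler = cProfile.Profile()
profiler.enable()
slow(lines)
profiler.disable()
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats()
report = out.getvalue()
```

                                                                                                  The profile attributes the wasted time directly to the call inside the loop, which is usually faster than guessing.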

                                                                                                2. 9

                                                                                                  Programs that take seconds to load or display a list are not just ignoring cache optimizations.

                                                                                                  That’s right. I hammered on the cache example because it’s easy to show an example of what a massive difference it can make, but I did not mean to imply that it’s the only reason. Basically, any time we lose track of what the computer must do, we risk introducing slowness. Now, I don’t mean that having layers of abstractions or using dictionary are inherently bad (they will likely have a performance cost, but it may be reasonable to reach another objective), but we should make these choices intentionally rather than going by rote, by peer pressure, by habit, etc.

                                                                                                  1. 5

                                                                                                    The article implies the programmer has access to low level details like cache memory layout, but if you are programming in Python, Lua, Ruby, Perl, or similar, the programmer doesn’t have such access (and for those languages, the trade off is developer ease). I’m not even sure you get to such details in Java (last time I worked in Java, it was only a year old).

                                                                                                    The article also makes the mistake that “the world is x86”—at work, we still use SPARC based machines. I’m sure they too have cache, and maybe the same applies to them, but micro-optimizations are quite difficult across different architectures (and even across the same family but different generations).

                                                                                                    1. 6

                                                                                                      The article implies the programmer has access to low level details like cache memory layout, but if you are programming in Python, Lua, Ruby, Perl, or similar, the programmer doesn’t have such access

                                                                                                      The level of control that a programmer has is reduced in favor of other tradeoffs, as you said, but there’s still some amount of control. Often, it’s found in those languages’ best practices. For example, in Erlang one should prefer to use binaries for text rather than strings, because binaries are a contiguous sequence of bytes while strings are linked lists of characters. Another example, in Python it’s preferable to accumulate small substrings in a list and then use the join method rather than using concatenation (full += sub).
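                                                                                                      As a rough sketch of the join idiom (exact timings vary; recent CPython can sometimes grow a string in place, so the gap depends on the version):

```python
import timeit

def concat(parts):
    full = ""
    for sub in parts:
        full += sub  # may copy the accumulated string on each append
    return full

def join(parts):
    return "".join(parts)  # single pass, one allocation for the result

parts = ["ab"] * 5_000
assert concat(parts) == join(parts)

# Rough comparison; join avoids repeatedly copying the accumulator.
t_concat = timeit.timeit(lambda: concat(parts), number=100)
t_join = timeit.timeit(lambda: join(parts), number=100)
```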

                                                                                                      The article also makes the mistake that “the world is x86”—at work, we still use SPARC based machines. I’m sure they too have cache, and maybe the same applies to them, but micro-optimizations are quite difficult across different architectures (and even across the same family but different generations).

                                                                                                      I don’t personally have that view, but I realize that it wasn’t made very clear in the text, my apologies. Basically what I want myself and other programmers to be mindful of is mechanical sympathy — to not lose track of the actual hardware that the program is going to run on.

                                                                                                      1. 4

                                                                                                        I know a fun Python example. Check this yes implementation:

                                                                                                        def yes(s):
                                                                                                          p = print  # bind the builtin to a local name once
                                                                                                          while True:
                                                                                                            p(s)  # local lookup instead of a global/builtin lookup
                                                                                                        
                                                                                                        yes("y")
                                                                                                        

                                                                                                        This hot-loop will perform significantly better than the simpler print(s) because of the way variable lookups work in Python. It first checks the local scope, then the global scope, and then the built-ins scope before finally raising a NameError exception if it still isn’t found. By adding a reference to the print function to the local scope here, we reduce the number of hash-table lookups by 2 for each iteration!

                                                                                                        I’ve never actually seen this done in real Python code, understandably. It’s counter-intuitive and ugly. And if you care this much about performance then Python might not be the right choice in the first place. The dynamism of Python (any name can be reassigned, at any time, even by another thread) is sometimes useful but it makes all these lookups necessary. It’s just one of the design decisions that makes it difficult to write a high-performance implementation of Python.

                                                                                                        1. 3

                                                                                                          That’s not how scoping works in Python.

                                                                                                          The Python parser statically determines the scope of a name (where possible.) If you look at the bytecode for your function (using dis.dis) you will see either a LOAD_GLOBAL, LOAD_FAST, LOAD_DEREF, or LOAD_NAME, corresponding to global, local, closure, or unknown scope. The last bytecode (LOAD_NAME) is the only situation in which multiple scopes are checked, and these are relatively rare to see in practice.

                                                                                                          The transformation from LOAD_GLOBAL to LOAD_FAST is not uncommon, and you see it in the standard library: e.g., https://github.com/python/cpython/blob/main/Lib/json/encoder.py#L259

                                                                                                          I don’t know what current measurements of the performance improvement look like, after LOAD_GLOBAL optimisations in Python 3.9, which reported 40% improvement: https://bugs.python.org/issue26219 (It may be the case that the global-to-local transformation is no longer considered a meaningful speed-up.)

                                                                                                          Note that the transformation from global-to-local scope, while likely innocuous, is a semantic change. If builtins.print or the global print is modified in some other execution unit (e.g., another thread,) the function will not reflect this (as global lookups can be considered late-bound, which is often desirable.)
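                                                                                                          Both points are easy to check (a small sketch against CPython; `dis.get_instructions` exists since 3.4, and exact opcode sequences shift between versions):

```python
import builtins
import dis

def uses_global(s):
    print(s)  # name resolved at call time: LOAD_GLOBAL

def uses_local(s, p=print):
    p(s)  # default argument bound once, at def time: a local (fast) load

ops_global = {i.opname for i in dis.get_instructions(uses_global)}
ops_local = {i.opname for i in dis.get_instructions(uses_local)}
assert "LOAD_GLOBAL" in ops_global
assert "LOAD_GLOBAL" not in ops_local

# The semantic difference: rebinding builtins.print is visible to the
# late-bound version but not to the one that captured print at def time.
captured = []
original = builtins.print
builtins.print = lambda *args, **kw: captured.append(args)
try:
    uses_global("hi")  # goes through the patched print
    uses_local("hi")   # still calls the original print
finally:
    builtins.print = original
assert captured == [("hi",)]
```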

                                                                                                          1. 8

                                                                                                            I think this small point speaks more broadly to the dissatisfaction many of us have with the “software is slow” mindset. The criticisms seem very shallow.

                                                                                                            Complaining about slow software or slow languages is an easy criticism to make from the outside, especially considering that the biggest risk many projects face is failure to complete or failure to capture critical requirements.

                                                                                                            Given a known, fixed problem with decades of computer science research behind it, it’s much easier to focus on performance—whether micro-optimisations or architectural and algorithmic improvements. Given three separate, completed implementations of the same problem, it’s easy to pick out which is the fastest and also happens to have satisfied just the right business requirements to succeed with users.

                                                                                                            I think the commenters who suggest that performance and performance-regression testing should be integrated into the software development practice from the beginning are on the right track. (Right now, I think the industry is still struggling with getting basic correctness testing and documentation integrated into software development practice.)

                                                                                                            But the example above shows something important. Making everything static or precluding a number of dynamic semantics would definitely give languages like Python a better chance at being faster. But these semantics are—ultimately—useful, and it may be difficult to predict exactly when and where they are critical to satisfying requirements.

                                                                                                            It may well be the case that some languages and systems err too heavily on the side of allowing functionality that reduces the aforementioned risks. (It’s definitely the case that Python is more dynamic in design than many users make use of in practice!)

                                                                                                            1. 2

                                                                                                              Interesting! I was unaware that the parser (!?) did that optimization. I suppose it isn’t difficult to craft code that forces LOAD_NAME every time (say, by reading a string from stdin and passing it to exec) but I find it totally plausible that that rarely happens in non-pathological code.

                                                                                                              Hm. For a lark, I decided to try it:

                                                                                                              >>> def yes(s):
                                                                                                              ...  exec("p = print")
                                                                                                              ...  p(s)
                                                                                                              ... 
                                                                                                              >>> dis.dis(yes)
                                                                                                                2           0 LOAD_GLOBAL              0 (exec)
                                                                                                                            2 LOAD_CONST               1 ('p = print')
                                                                                                                            4 CALL_FUNCTION            1
                                                                                                                            6 POP_TOP
                                                                                                              
                                                                                                                3           8 LOAD_GLOBAL              1 (p)
                                                                                                                           10 LOAD_FAST                0 (s)
                                                                                                                           12 CALL_FUNCTION            1
                                                                                                                           14 POP_TOP
                                                                                                                           16 LOAD_CONST               0 (None)
                                                                                                                           18 RETURN_VALUE
                                                                                                              >>> yes("y")
                                                                                                              Traceback (most recent call last):
                                                                                                                File "<stdin>", line 1, in <module>
                                                                                                                File "<stdin>", line 3, in yes
                                                                                                              NameError: name 'p' is not defined
                                                                                                              
                                                                                                        2. 5

                                                                                                          and for those languages, the trade off is developer ease

                                                                                                          I heard Jonathan Blow make this point on a podcast and it stuck with me:

                                                                                                          We’re trading off performance for developer ease, but is it really that much easier? It’s not like “well, we’re programming in a visual language and just snapping bits together in a GUI, and it’s slow, but it’s so easy we can make stuff really quickly.” Like Python is easier than Rust, but is it that much easier? In both cases, it’s a text based OO language. One just lets you ignore types and memory lifetimes. But Python is still pretty complicated.

                                                                                                          Blow is probably a little overblown (ha), but I do think we need to ask ourselves how much convenience we’re really buying by slowing down our software by factors of 100x or more. Maybe we should be more demanding for our slow downs and expect something that trades more back for it.

                                                                                                          1. 2

                                                                                                            Like Python is easier than Rust, but is it that much easier?

                                                                                                            I don’t want to start a fight about types but, speaking for myself, Python became much more attractive when they added type annotations, for this reason. Modern Python feels quite productive, to me, so the trade-off is more tolerable.

                                                                                                            1. 1

                                                                                                              It depends upon the task. Are you manipulating or parsing text? Sure, C will be faster in execution, but in development?

                                                                                                              At work, I was told to look into SIP, and I started writing a prototype (or proof-of-concept if you will) in Lua (using LPeg to parse SIP messages). That “proof-of-concept” went into production (and is still in production six years later) because it was “fast enough” for use, and it’s been easy to modify over the years. And if we can ever switch to using x86 on the servers [1], we could easily use LuaJIT.

                                                                                                              [1] For reasons, we have to use SPARC in production, and LuaJIT does not support that architecture.

                                                                                                        3. 7

                                                                                                          The trick about cache optimizations is that they can be a case where, sure, individually you’re shaving nanoseconds off, but sometimes those are alarmingly common in the program flow and worth doing before any higher-level fixes.

                                                                                                          To wit: I worked on a CAD system implemented in Java, and the “small optimization” of switching to a pooled-allocation strategy for vectors instead of relying on the normal GC meant the difference between an unusable application and a fluidly interactive one, simply because the operation I fixed was so core to everything that was being done.

                                                                                                          Optimizing cache hits for something like mouse move math can totally be worth it as a first step, if you know your workload and what code is in the “hot” path (see also sibling comments talking about profiling).

                                                                                                          1. 6

                                                                                                            They’re using slow languages (or language implementations, for the pedantics out there) like cpython where even the simplest operations require a dictionary lookup

                                                                                                            I take issue with statements like this, because the majority of code in most programs is not being executed in a tight loop on large enough data to matter. The overall speed of a program has more to do with how it was architected than with how well the language it’s written in scores on microbenchmarks.

                                                                                                            Besides, Python’s performance cost isn’t just an oversight. It’s a tradeoff that provides benefits elsewhere in flexibility and extensibility. Problems like serialization are trivial because of meta-programming and reflection. Complex string manipulation code is simple because the GC tracks references for you and manages the cleanup. Building many types of tools is simpler because you can easily hook into stuff at runtime. Fixing an exception in a Python script is a far more pleasant experience than fixing a segfault in a C program that hasn’t been built with DWARF symbols.
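As a toy illustration of that flexibility (my own sketch, not from the comment): one reflective function can serialize any simple object, with no per-class boilerplate:

```python
import json

class Point:
    def __init__(self, x: int, y: int):
        self.x = x
        self.y = y

def to_json(obj) -> str:
    # vars() reflects over the instance's attributes at runtime,
    # so a single generic function handles any plain object.
    return json.dumps(vars(obj), sort_keys=True)

print(to_json(Point(1, 2)))  # {"x": 1, "y": 2}
```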

                                                                                                            Granted, modern compiled languages like Rust/Go/Zig are much better at things like providing nice error messages and helpful backtraces, but you’re paying a small cost for keeping a backtrace around in the first place. Should that be thrown out in favor of more speed? Depending on the context, yes! But a lot of code is just glue code that benefits more from useful error reporting than faster runtime.

                                                                                                            For me, the choice in language usually comes down to how quickly I can get a working program with limited bugs built. For many things (up to and including interactive GUIs) this ends up being Python, largely because of the incredible library support, but I might choose Rust instead if I was concerned about multithreading correctness, or Go if I wanted strong green-thread support (Python’s async is kinda meh). If I happen to pick a “fast” language, that’s a nice bonus, but it’s rarely a significant factor in that decision making process. I can just call out to a fast language for the slow parts.

                                                                                                            That’s not to say I wouldn’t have mechanical sympathy and try to keep data structures flat and simple from the get go, but no matter which language I pick, I’d still expect to go back with a profiler and do some performance tuning later once I have a better sense of a real-world workload.

                                                                                                            1. 4

                                                                                                              To add to what you say: Until you’ve exhausted the space of algorithmic improvements, they’re going to trump any microoptimisation that you try. Storing your data in a contiguous array may be more efficient (for search, anyway - wait until you need to insert something in the middle), but no matter how fast you make your linear scan over a million entries, if you can reframe your algorithm so that you only need to look at five of them to answer your query then a fairly simple data structure built out of Python dictionaries will outperform your hand-optimised OpenCL code scanning the entire array.
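A toy version of that point (my own sketch, not from the comment): once a hash index lets you touch one entry instead of a million, no amount of cache-friendly scanning catches up.

```python
# Build a million-entry dataset and a one-time hash index over it.
n = 1_000_000
records = [(i, f"name-{i}") for i in range(n)]
index = dict(records)

def linear_scan(key):
    # O(n): may touch every tuple, however flat and contiguous.
    for k, v in records:
        if k == key:
            return v

def dict_lookup(key):
    # O(1) expected: a handful of hash probes, in plain Python.
    return index[key]

assert linear_scan(n - 1) == dict_lookup(n - 1) == f"name-{n - 1}"
```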

                                                                                                              The kind of microoptimisation that the article’s talking about makes sense once you’ve exhausted algorithmic improvements, need to squeeze the last bit of performance out of the system, and are certain that the requirements aren’t going to change for a while. The last bit is really important because it doesn’t matter how fast your program runs if it doesn’t solve the problem that the user actually has. grep, which the article uses as an example, is a great demonstration here. Implementations of grep have been carefully optimised but they suffer from the fact that requirements changed over time. Grep used to just search ASCII text files for strings. Then it needed to do regular expression matching. Then it needed to support unicode and do unicode canonicalisation. The bottlenecks when doing a unicode regex match over a UTF-8 file are completely different to the ones doing fixed-string matching over an ASCII text file. If you’d carefully optimised a grep implementation for fixed-string matching on ASCII, you’d really struggle to make it fast doing unicode regex matches over arbitrary unicode encodings.

                                                                                                              1. 1

                                                                                                                The kind of microoptimisation that the article’s talking about makes sense once you’ve exhausted algorithmic improvements, need to squeeze the last bit of performance out of the system, and are certain that the requirements aren’t going to change for a while.

                                                                                                                To be fair, I think the article also speaks of the kind of algorithmic improvements that you mention.

                                                                                                              2. 3

                                                                                                                Maybe it’s no coincidence that Django and Rails both seem to aim at 100 concurrent requests, though. Both use a lot of language magic (runtime reflection/metaprogramming/metaclasses), afaik. You start with a slow dynamic language, and pile up more work to do at runtime (in this same slow language). In this sense, I’d argue that the design is slow in many different ways, including architecturally.

                                                                                                                Complex string manipulation code is simple because the GC tracks references for you

                                                                                                                No modern language has a problem with that (deliberately ignoring C). Refcounted/GC’d strings are table stakes.

                                                                                                                I personally dislike Go’s design a lot, but it’s clearly designed in a way that performance will be much better than python with enough dynamic features to get you reflection-based deserialization.

                                                                                                              3. 1

                                                                                                                All the times I had an urge to fire up a profiler, the problem was either an inefficient algorithm (worse big-O) or repeated database fetches (inefficient cache usage). Never have I found that performance was bad because of slow abstractions. Of course, this might be because the ecosystem around the software I work with (Python web services) has a lot of experience crafting good, fast abstractions. Of course, you can find new people writing Python that don’t use them, which results in bad performance, but that is quickly learned away. What is important, if you want to write performant Python code, is to use as little “pure Python” as possible. Python is a great glue language, and it works best when it is used that way.

                                                                                                                1. 1

                                                                                                                  Never have I found that performance was bad because of slow abstractions.

                                                                                                                  I have. There was the time when fgets() was the culprit, and another time when checking the limit of a string of hex digits was the culprit. The most surprising result I’ve had from profiling is a poorly written or poorly compiled library.

                                                                                                                  Looking back on my experiences, I would have to say I’ve been surprised by a profile result about half the time.

                                                                                                                2. 1

                                                                                                                  As a pedantic out here, I wanted to say that I appreciate you :)

                                                                                                                1. 11

                                                                                                                  While this does take a stab at refuting some criticisms of Perl I don’t think it achieves its goals.

                                                                                                                  One of the core issues with Perl is that it’s such a flexible language. This is what its adherents (myself included!) love. But it’s also a net negative when trying to use Perl for large projects where interpersonal communication with regards to coding standards and what’s “idiomatic” is important. What if your resident rock star just can’t stand Moose, and insists you use their own home-grown object system? With Perl, it’s entirely plausible whatever they’ve cooked up will in fact do the job - at least for this project.

                                                                                                                  I return to my oft-repeated observation: Perl hackers revel in TMTOWTDI, Python hackers fret over what’s idiomatic. It’s a culture thing, there’s nothing wrong with it, but time has shown that bondage and discipline languages have an edge for large projects, and maybe for knowledge diffusion in general.

                                                                                                                  1. 5

                                                                                                                    Similar arguments have been made about why Lisps never became as widespread as many people feel they should be.

                                                                                                                    Edit: typo

                                                                                                                    1. 3

                                                                                                      Yep, I agree. I guess I’d like perl criticism to be channelled into “the language is too big / too many ways to do things” (a la C++) and “the language is too flexible/powerful” (a la lisp/scheme). Rather than “line noise, write only, yadda yadda”.

                                                                                                                      A corollary of that, is that places which manage to make good on C++ or lisps could also make good on perl with similar external disciplines (style guides etc).

                                                                                                                      I agree the trend is more towards “simple” + “one way to do it” (I like golang very much).

                                                                                                                    1. 7

                                                                                                                      So, who is this for? Are there a lot of Emacs users who want a GUI to help them string together Elisp commands? No shade, I just don’t understand what I’m seeing here…

                                                                                                                      1. 1

                                                                                                        A good question: there are many users of Nyxt who do not know Lisp. We expect this percentage of users to increase over time.

                                                                                                                      1. 5

                                                                                                                        It looks like Julia never added explicit interfaces. That’s a bummer. I know there was some discussion years ago, but I moved off of projects where it made sense as a language choice, so I lost track. I always felt that making interfaces explicit would help with documentation (at a minimum) since, at least at the time, it was common to see an informal interface referred to in documentation, but never fully documented itself. This made it trial-and-error to implement.

                                                                                                                        1. 6

                                                                                                          With dynamic dispatch/typing, you really don’t need an explicit language construct for interfaces. In a language like Java, it’s completely dependent on the user to define that relationship. When you have a function like push!(collection::AbstractArray, item::T), your “interface” is a loose contract. That being said, libraries like Traits.jl / WhereTraits.jl implement macros to assist with any strict interfaces.

                                                                                                                          1. 5

                                                                                                                            Yeah, I understand they’re not necessary, but I prefer them because they force a kind of documentation. Otherwise there’s no way for me to know that a particular type implements an interface without inspecting all its method implementations.
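For comparison, here’s a Python (not Julia) sketch of the kind of thing being asked for: `typing.Protocol` writes the interface down in one place, as documentation, while keeping the conformance check structural.

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Pushable(Protocol):
    """Explicit, documented contract: implementers must provide push()."""
    def push(self, item: object) -> None: ...

class Stack:
    def __init__(self) -> None:
        self._items: list[object] = []

    def push(self, item: object) -> None:
        self._items.append(item)

# Structural check: Stack never mentions Pushable, yet satisfies it,
# and a reader can discover the required methods from one definition.
assert isinstance(Stack(), Pushable)
```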

                                                                                                                            1. 2

                                                                                                                              Absolutely, in an ideal world you’re within slapping distance of people who don’t document their code. :)

                                                                                                                        1. 38

                                                                                                                          To me, this really drives home the need for language projects to treat dependency and build tooling as first-class citizens and integrate good, complete tools into their releases. Leaving these things to “the community” (or a quasi official organization like PyPA) just creates a mess (see: Go).

                                                                                                                          1. 9

                                                                                                            100% agree. I recently adopted a Python codebase and have delved into the ecosystem headfirst from a high precipice, only to find that it’s improved drastically since the last time I wrote an app in Python (2005), but it still feels like it’s in disarray relative to the polish of the Rust ecosystem and the organized chaos of the Ruby and JVM ecosystems in which I’ve swum for the last decade. I’ve invested considerable time in gluing together solutions and updating tools to work on Python 3.x.

                                                                                                                            The article under-examines Poetry, which I find to meet my needs almost perfectly and have thus adopted despite some frustrating problems with PyTorch packages as dependencies (although PyTorch ~just fixed that).

                                                                                                                            1. 5

                                                                                                              I also think Poetry isn’t being considered enough. The article gives the impression that the author doesn’t have a lot of hands-on experience with Poetry but is curious about it. I’d recommend further exploring that curiosity. I understand that it’s hard to cover everything in a short article like this. If you’ve got an existing project with a working setup, a lot of the points make sense and there’s no need to hurry up and change your setup. But I wouldn’t really call it a fair assessment of “The State of Python Packaging in 2021”.

                                                                                                              From my point of view it’s clear that pyproject.toml is the way forward and is growing in popularity, especially considering that it’s also required for specifying the build system with modern setuptools going forward.

                                                                                                              As for the claim that setup.cfg requires an empty setup() in a setup.py, that’s a half-truth at best. It’s true that PEP 517 purposely defers editable installs to a later standard in order to reduce the complexity of the PEP. But in practice the setup.py is not required if you use setuptools version 40.9 or greater, released in spring of 2019. This is documented in the setuptools developer guide: if a setup.py is missing, setuptools emulates a dummy file with an empty setup() for you. If you build your project with a PEP 517/518 frontend you don’t need the setup.py. Having a static setup.cfg is a massive improvement for the ecosystem as a whole, since we can actually start resolving dependencies statically without running code; this benefit for the ecosystem as a whole should not be downplayed.
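A minimal sketch of what that static metadata looks like (the package name and dependencies here are hypothetical):

```ini
# setup.cfg -- all metadata is static, so tools can resolve
# dependencies without executing any code. With setuptools >= 40.9
# and a PEP 517/518 build frontend, no setup.py is needed at all.
[metadata]
name = example-pkg
version = 1.0.0

[options]
packages = find:
install_requires =
    requests>=2.0
```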

                                                                                                              I get the feeling that the author wants to wait for a pipe-dream future where everything is perfectly specified and standardised before adopting any of the new standards. I see this as completely fine and valid if you’re working on your own project, especially if you’ve already got existing working code. That said, in my opinion, I wouldn’t recommend it as the approach for everyone. I see it as necessary to start using the new standards on new projects so that we can move forward; if we’re always clinging to the old way of doing things, progress will be hampered.

                                                                                                              I get the impression that the author is very knowledgeable and has plenty of experience in the area, and I see the article as reflecting the author’s opinion, which I respect but don’t fully agree with. I would love to have a chat with the author, given the opportunity, and hear more about their opinions. I’m also looking forward to reading the 2022 edition next year. It’s also easy for me to contest some of the points here, but it’s not completely fair without a reply from the original author, where they’re given a chance to elaborate and defend their choices.

                                                                                                                              Full disclosure: I’m currently writing a book on the subject and I’ve researched the strides in Python Packaging quite heavily in recent time.

                                                                                                                            2. 3

                                                                                                                              just creates a mess (see: Go).

                                                                                                                              It’s fair to say that packaging is a mess in Python but why exactly is packaging in Go a mess? Since 1.13 we have Go Modules which solves packaging very elegantly, at least in my opinion. What I especially like is that no central index service is required, to publish a package just tag a public git repo (there are also other ways to do that).

                                                                                                                              1. 8

                                                                                                                                Yeah, Go is fine now, but in the past, when the maintainers tried to have “the community” solve the packaging problem it was a mess. There were a bunch of incompatible tools (Glide, dep, “go get”, and so many more) and none of them seemed to gain real traction. Prior to Go modules the Go situation looked similar to the current Python situation. To their credit, the Go developers realized their mistake and corrected it pretty quickly (a couple years, versus going on a couple decades for Python, so far).

                                                                                                                                1. 1

                                                                                                                                  Thank you for the explanation.

                                                                                                                                  Prior to Go modules the Go situation looked similar to the current Python situation.

                                                                                                                                  Yes, I agree with you that the situation was similar before Modules were a thing. I was fed up with the existing solutions around that time and had written my own dependency management tool as well.

                                                                                                                                2. 4

                                                                                                                                  It’s fair to say that packaging is a mess in Python but why exactly is packaging in Go a mess?

                                                                                                                                  Not the original poster, but I think it’s because modules weren’t there from the start, and this allowed dep, glide, and others to pop up and further fragment dependency management.

                                                                                                                              1. 6

                                                                                                                                I don’t get it, isn’t this just a simple, reusable CLA? Also, I actually think there’s value in allowing the project to switch licenses, as long as the new license is still philosophically compatible with the old license. For example, what if a court invalidates an important part of a popular license? Presumably projects would want to switch, or “upgrade” to a newer version of the license.

                                                                                                                                1. 6

                                                                                                                  No, it’s not. A CLA is an arbitrary agreement, often reassigning ownership of your change to the company entirely. A DCO just says “I agree it’s under the current license.”

                                                                                                                                  If you want to be able to switch licenses, that’s fine, but it should be part of your license itself IMO (like GPLv3) and not in a contributor agreement.

                                                                                                                                  1. 3

                                                                                                                                    Yeah, but if your CLA just said “I agree it’s under the current license.”, wouldn’t it be the same thing? I don’t know, maybe it’s just that “Developer Certificate of Origin” seems like a weird name for what this is doing. It’s not a certificate (in the usual sense), and it has nothing to do with the origin.

                                                                                                                                    1. 2

                                                                                                                                      It is certifying that the change came from a good origin (you, or someone who can approve of its release).

                                                                                                                                      The problem with CLAs is that it could say anything and when you want to submit a small change you’re going to need legal review of the CLA you are signing (or just go in blind..)

                                                                                                                                      1. 2

                                                                                                                                        So the DCO is just a particular implementation of a CLA that is known in advance (in the same way that the GPL v3 is a particular implementation of a software license, which, generically speaking, could say literally anything). A CLA could say anything, including exactly what a DCO says. A DCO literally binds the contributor to allowing the code to be released under the project’s license. It also certifies that the contributor CAN enter into that agreement. These are two common functions of a CLA.

                                                                                                                                        1. 2

                                                                                                                                          So yeah, from a legal, English, and purely technical sense you’re 100% correct, I agree, “a DCO is just a CLA” - and I’ll readily admit that from that viewpoint my statements were wrong.

                                                                                                                          But I would argue that in a world where 95%+ of CLAs are arbitrary legal agreements written up by lawyers at companies, and in a world where I’d bet almost every developer you asked “What is a CLA?” would respond with “some arbitrary legal agreement that is not open source”, it makes sense to try to draw a distinction between “a CLA” in the modern predominant usage of the term and “the DCO”. That is the viewpoint I wrote this from, and responded to you from.

                                                                                                                                          I do understand in retrospect, though, that if you do have a very nuanced view of software law / licensing that the statement the title is making would at best be a confusing one, at worst be an outlandish one (because “a DCO is just a CLA” from a purely technical view, but not from a practical lay developer’s view.) In that case, please accept my apologies and substitute “CLAs” with “95%+ of CLAs” when reading into my writing.

                                                                                                                                          1. 1

                                                                                                                                            I appreciate you humoring me, I honestly wasn’t trying to be obnoxious. It makes more sense to me now! Thank you for the explanation!

                                                                                                                                            1. 4

                                                                                                                              The conclusion above is incorrect. The DCO is not a license agreement, and it was not designed to address the same legal problems as CLAs. Plugging a CLA-shaped hole in your process with a DCO is fine if and only if you can live with the gaps. Which probably means you’ve got a copyleft project with no foundation or company home, using a kernel-style e-mail-based workflow.

                                                                                                                                              Read the DCO. https://developercertificate.org/. Read a short license like MIT or BSD. Look for overlap. You won’t find much.

                                                                                                                                              Then read Apache’s Individual CLA, a de facto “standard” applied to projects in many other contexts. Compare again. Do the homework.

                                                                                                                                              1. 2

                                                                                                                                                I never said the DCO was a license agreement (at least a Software License Agreement, like MIT or BSD), I said it was a particular case of a CLA, a Contributor License Agreement, because it binds the contributor in certain ways (they agree to have their code licensed in a certain way). I think you misunderstood me.

                                                                                                                                                1. 3

                                                                                                                                                  I referred to slimsag’s statement that “a DCO is just a CLA”. It is not.

                                                                                                                                                  CLA stands for Contributor License Agreement.

                                                                                                                                                  1. 1

                                                                                                                                                    Yes, it is, otherwise it wouldn’t be useful. A CLA binds the contributor in certain ways to protect the project. So does the DCO. What, exactly, causes the DCO to fall outside the set of possible CLAs?

                                                                                                                                                    1. 5

                                                                                                                                                      You’re trying to work backward from observed behavior, a lot of which is based on misconceptions repeated in slimsag’s blog post, instead of working forward from terms. Don’t take my word for it. Read the DCO. Read, say, Apache’s individual contributor license agreement. Compare.

                                                                                                                                                      As for where the DCO belongs, see my comment above. The DCO is fit for purpose in a pretty small number of projects that happen to resemble Linux in licensing situation and workflow. The DCO was written for the kernel. For why, see https://en.wikipedia.org/wiki/SCO%E2%80%93Linux_disputes, or dig up the kernel mailing list messages on DCO 1.0 from back in ’04 or whenever it was.

                                                                                                                                                      For more details, see https://writing.kemitchell.com/2018/01/06/CLAs-Are-Not-a-Sham.html

                                                                                                                                                      1. 2

                                                                                                                                                        I’ve read them both. They seem like different implementations of the same concept. They both attest that the contributor has or will do various things, and that the contributor possesses certain legal rights. Kind of like how the GPL and MIT licenses are quite different, but they’re both obviously software licenses. In fact, clause (a) of the DCO and the first sentence of clause 5 of the AICL are basically identical. Again, the DCO is just a particular example of a CLA.

                                                                                                                                                        1. 6

                                                                                                                                                          Where is the present-tense language granting a license in the DCO? The condition requiring notice of license terms be given with copies? Any copyleft rule? Where’s the warranty disclaimer?

                                                                                                                                                          You can’t just compare general impressions of legal texts. These are functional documents. What “features” does the DCO provide? How about the Apache ICLA? Compare those feature sets. What’s the diff?

                                                                                                                                                          CLAs almost always address the question of whether a contributor has the right to license code. The DCO tries to do that, too. But CLAs go beyond the question of ability to license and grant licenses. Hence “license agreement”.

                                                                                                                                                          In other words: CLAs don’t just say “I can license this”. They say “I can license this and I do license it, under these terms”. CLAs document that act of licensing, usually by having the contributor electronically sign something, yielding a record the project steward can store long term.

                                                                                                                                                          The DCO, or more specifically the Signed-Off-By convention, was also designed to create documentation. But that documentation covers where code came from, through all the various patch reviewers and Linus’s lieutenants who handle a kernel patch. Not the act of licensing. There’s only one set of terms kernel code can be licensed under: GPL, and specifically GPLv2, or some “GPLv2-compatible” permissive license.

                                                                                                                                                          As for actually granting that license, it’s all implied. The words “submit” and “sign-off” are nowhere defined or explained in the DCO. They have particular, specific meaning in the context of kernel development. But most open source projects aren’t developed like Linux or Git. Most hackers have never read or even heard of --signoff, and the git-commit man page is not statutory law.

                                                                                                                                                          The problems SCO made for Linus weren’t questions about whether contributors chose GPLv2. They have to choose GPLv2. The problems were SCO’s claims that particular Linux source code, much of it likely brought in by Linus’ own patches, came from the UNIX they bought, rather than from Linus’ original work or other open-licensed releases. The kernel devs didn’t have evidence on hand to document where a lot of Linux code came from. SCO filled some of those gaps with claims that the code was theirs.

                                                                                                                                                          1. 1

                                                                                                                                                            This is interesting context, and I enjoyed reading it. I still think it’s accurate to say that the DCO is a particular kind of CLA.

                                                                                                                                                            1. 2

                                                                                                                                                              Suit yourself. Call it a “CLA”, but it is not a License Agreement.

                                                                                                                                1. 5

                                                                                                                                  This seems fun, and maybe a good tool for building proofs of concept. But I hardly see it being useful for large projects. Or have I become old and grumpy?

                                                                                                                                  1. 13

                                                                                                                                    As a stranger on the internet, I can be the one to tell you that you are old and grumpy.

                                                                                                                                    Ruby is definitely unusable without syntax highlighting… (Sadists excepted) Java is definitely unusable without code completion… (Sadists excepted) Whatever comes next will probably be unusable without this thing or something like it.

                                                                                                                                    1. 9

                                                                                                                                      I’m confused… Ruby has one of the best syntaxes to read without highlighting. Not as good as forth, but definitely above-average

                                                                                                                                      1. 3

                                                                                                                                        Well, this is the internet. Good luck trying to make sense of every take.

                                                                                                                                        1. 2

                                                                                                                                          I used to think this way. Then I learned Python and now I no longer do.

                                                                                                                                          When I learned Ruby I was coming from Perl, so the Perl syntactic sugar (Which the Ruby community now seems to be rightly fleeing from in abject terror) made the transition much easier for me.

                                                                                                                                          I guess this is my wind-baggy way of saying that relative programming language readability is a highly subjective thing, so I would caution anyone against making absolute statements on this topic.

                                                                                                                                          For instance, many programmers not used to the syntax find FORTH to be an unreadable morass of words and punctuation, whereas folks who love it inherently grok its stack based nature and find it eminently readable.

                                                                                                                                          1. 1

                                                                                                                                            Oh, sure, I wasn’t trying to make a statement about general readability, but about syntax highlighting.

                                                                                                                                            For example, forth is basically king of being the same with and without highlighting because it’s just a stream of words. What would you even highlight? That doesn’t mean the code is readable to you, only that adding colour does the least of any syntax possible, really.

                                                                                                                                            Ruby has sigils for everything important and very few commonly-used keywords, so it comes pretty close also here. Sure you can highlight the few words (class, def, do, end, if) that are in common use, you could highlight the kinds of vars but they already have sigils anyway. Everything else is a method call.
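                                                                                                                                            To illustrate the sigil point, here’s a minimal, made-up sketch (the `Greeter` class is purely illustrative): the “kind” of most Ruby names is already visible from their spelling, before any colour is applied.

```ruby
$greeting = "hello"          # global variable: $ sigil

class Greeter                # class/constant: capitalized
  def initialize(name)
    @name = name             # instance variable: @ sigil
  end

  def greet
    # bare lowercase words are locals or method calls;
    # interpolation makes the string structure explicit
    "#{$greeting}, #{@name}"
  end
end

puts Greeter.new("world").greet  # prints "hello, world"
```

A highlighter mostly has the handful of keywords (`class`, `def`, `end`) and literals left to colour; the sigils have already done the disambiguation.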

                                                                                                                                            Basically I’m saying that highlighting shines when there are a lot of different kinds of syntax, because it helps you visually tell them apart. A language with a lot of common keywords, or uncommon kinds of literal expressions, or many built-in operators (which are effectively keywords), that kind of thing.

                                                                                                                                            Which is not to say no one uses syntax highlighting in ruby of course, some people find that just highlighting comments and string literals makes highlighting worth it in any syntax family, I just felt it was a weird top example for “syntax highlighting helps here”.

                                                                                                                                            1. 3

                                                                                                                                              Thank you for the clarification; I understand more fully now.

                                                                                                                                              Unfortunately, while I can see where you’re coming from in the general case, I must respectfully disagree at least for myself. I’m partially blind, and syntax highlighting saves my bacon all the time no matter what programming language I’m using :)

                                                                                                                                              I do agree that Ruby perhaps has visual cues that other programming languages lack.

                                                                                                                                              1. 1

                                                                                                                                                I’m partially blind, and syntax highlighting saves my bacon all the time no matter what programming language I’m using :)

                                                                                                                                                If you don’t mind me asking - have you tried any Lisps, and if so, how was your experience with those? I’m curious as to whether the relative lack of syntax is an advantage or a disadvantage from an accessibility perspective.

                                                                                                                                                1. 1

                                                                                                                                                  Don’t mind you asking at all.

                                                                                                                                                  So, first off I Am Not A LISP Hacker, so my response will be limited to the years I ran and hacked emacs (I was an inveterate elisp twiddler. I wasted WAY too much time on it which is why I migrated back to Vim and now Vim+VSCode :)

                                                                                                                                                  It was a disadvantage. Super smart parens matching helped, but having very clear visual disambiguation between blocks and other code flow altering constructs like loops and conditionals is incredibly helpful for me.

                                                                                                                                                  It’s also one of the reasons I favor Python versus any other language where braces denote blocks rather than indentation.

                                                                                                                                                  In Python, I can literally draw a vertical line down from the construct and discern the boundaries of the code it affects. That’s a huge win for me.

                                                                                                                                                  Note that this won’t keep me from eventually learning Scheme, which I’d love to do. I’m super impressed by the Racket community :)

                                                                                                                                              2. 1

                                                                                                                                                For example, forth is basically king of being the same with and without highlighting because it’s just a stream of words. What would you even highlight? That doesn’t mean the code is readable to you, only that adding colour does the least of any syntax possible, really.

                                                                                                                                                You could use stack effect comments to highlight the arguments to a word.

                                                                                                                                                : squared ( n -- n*n )
                                                                                                                                                     dup * ;
                                                                                                                                                 3 squared .  \ prints 9
                                                                                                                                                

                                                                                                                                                For example, if squared is selected then the 3 should be highlighted. There’s also Chuck Moore’s ColorForth which uses color as part of the syntax.

                                                                                                                                          2. 6

                                                                                                                                            Masochists (people that love pain on themselves), not sadists (people that love inflicting pain on others).

                                                                                                                                            1. 2

                                                                                                                                              Ah, thank you for the correction.

                                                                                                                                              I did once have a coworker who started writing Ruby in Hungarian notation so that they could code without any syntax highlighting. Does that count?

                                                                                                                                              1. 4

                                                                                                                                                That counts as both ;)

                                                                                                                                              2. 2

                                                                                                                                                Go to source is probably the only reason I use IDEs. Syntax highlighting does nothing for me. I could code entirely in monochrome and it wouldn’t affect the outcome in the slightest.

                                                                                                                                                On the other hand, you’re right. Tools create languages that depend on those tools. Intellij is infamous for that.

                                                                                                                                              3. 6

                                                                                                                                                You’re old and grumpy :) But seriously, the fact that it’s restricted to Github Codespaces right now limits its usefulness for a bunch of us.

                                                                                                                                                However, I think this kind of guided assistance is going to be huge as the rough edges are polished away.

                                                                                                                                                Will the grizzled veterans coding exclusively with M-x butterflies and flipping magnetic cores with their teeth benefit? Probably not, but they don’t represent the masses of people laboring in the code mines every day either :)

                                                                                                                                                1. 4

                                                                                                                                                  I don’t do those things, I use languages with rich type information along with an IDE that basically writes the code for me already. I just don’t understand who would use these kinds of snippets regularly other than people building example apps or PoCs. The vast majority of code I write on a daily basis calls into internal APIs that are part of the product I work on, and those won’t be in the snippet catalog this thing uses.

                                                                                                                                                  1. 4

                                                                                                                                                    I don’t doubt it but I would also posit that there are vast groups of people churning out Java/.Net/PHP/Python code every day who would benefit enormously from an AI saying:

                                                                                                                                                    Hey, I see you have 5 nested for loops here. Why don’t we re-write this as a nested list comprehension. See? MUCH more readable now!

                                                                                                                                                    1. 4

                                                                                                                                                      The vast majority of code I write on a daily basis calls into internal APIs that are part of the product I work on, those won’t be in the snippet catalog this thing uses.

                                                                                                                                                      Well, not yet. Not until they come up with a way to ingest and train based on private, internal codebases. I can’t see any reason to think that won’t be coming.

                                                                                                                                                      1. 2

                                                                                                                                                        Oh sure, I agree that’s potentially (very) useful, even for me! I guess maybe the problem is that the examples I’ve seen (and admittedly I haven’t looked at it very hard) seem to be more like conventional “snippets”, whereas what you’re describing feels more like an AST-based lint that we have for certain languages and in certain IDEs already (though they could absolutely be smarter).

                                                                                                                                                        1. 2

                                                                                                                                                          Visual Studio (the full IDE) has something like this at the moment and it’s honestly terrible. It always suggests inverting if statements in ways that break the logic. Another one, which I haven’t taken the time to figure out how to disable, ‘highlights’ with a little grey line at the side of the IDE (where breakpoints would be) and suggests changes such as condensing your try/catch blocks onto one line instead of keeping them nice and readable.

                                                                                                                                                          Could be great in the future if it could get to what you suggested!

                                                                                                                                                        2. 3

                                                                                                                                                          Given that GH already has an enterprise offering, I can’t see a reason why they can’t enable the copilot feature and perform some transfer learning on a private codebase.

                                                                                                                                                          1. 1

                                                                                                                                                            Is your code in GitHub? All my employer’s code that I work on is in our GitHub org, some repos public, some private. That seems like the use case here. Yeah, if your code isn’t in GitHub, this GitHub tool is probably not for you.

                                                                                                                                                            I’d love to see what this looks like trained on a GitHub-wide MIT licensed corpus, then a tiny per-org transfer learning layer on top, with just our code.

                                                                                                                                                            1. 1

                                                                                                                                                              Yeah, although, to me, the more interesting use-case is a CI tool that attempts to detect duplicate code / effort across the organization. Not sure how often I’d need / want it to write a bunch of boilerplate for me.

                                                                                                                                                        3. 1

                                                                                                                                                          it feels like a niftier autocomplete/intellisense. kind of like how gmail provides suggestions for completing sentences. I don’t think it’s world-changing, but I can imagine it being useful when slogging through writing basic code structures. of course you could do the same thing with macros in your IDE but this doesn’t require any configuration.

                                                                                                                                                        1. 6

                                                                                                                                                          I know this doesn’t completely ban them, but I think a web without images and video is much worse off for it. I remember clearly how the web was in 1995, and the wealth of visual imagery we have now seemed like far-future fantasy.

                                                                                                                                                          I’d love for there to be fewer (to no) ads and lighter page weights and more easily accessible information for more people, but I absolutely do not hanker after Times New Roman on a white background with no images; the ’90s can keep that. And little to no JS on forms, so we have to round trip every bit of validation? No thanks.

                                                                                                                                                          The motivation is totally valid but this just seems way over the top. Sure, let’s come up with proper standards for a progressive web, and let’s use them. The web will be better off for it. But let’s not throw the baby out with the bathwater. Apart from anything else it comes across as overly zealous and naïve, hence reducing the strength of its argument.

                                                                                                                                                          1. 1

                                                                                                                                                            If all validation is done client side, then you’re opening up your server to be abused by criminals who will bypass your client-side validation. And there are plenty of standards for the web, even the “progressive” web. It’s just that everybody has a different set of features they want, and wants to exclude all else. I know this because I was involved in the Gemini community for a time and everybody wants to extend Gemini, just in a different way.

                                                                                                                                                            1. 6

                                                                                                                                                              I don’t think the argument was that client side validation should replace server side validation, just that the addition of client side validation (where it makes sense) greatly improves the UX.

                                                                                                                                                              1. 3

                                                                                                                                                                100%. I dig purity too but I like it served with a thick slice of pragmatism :-)

                                                                                                                                                                1. 2

                                                                                                                                                                  +1, try submitting a fiddly form that only has server-side validation over a slow/unreliable network. client-side navigation and validation can make a site bearable to use in these circumstances.

                                                                                                                                                            1. 22

                                                                                                                                                              I don’t generally think that going “backward” along the timeline is the answer. I don’t think the web was strictly “better” in the 90s. It was different, for sure, but there were a ton of things that sucked. And I don’t think the web today is strictly “worse”. There are a ton of things that suck, but there are also a lot of things that are awesome.

                                                                                                                                                              For example, I want images and video. I also want JavaScript and web apps. What I don’t want is a simple blog post that has to show a loading spinner while it downloads 50MB of static assets and then autoplays a video ad. But that has nothing to do with the available technologies, people also created obnoxious, unusable websites in the 90s. And taking away JavaScript and video is throwing the baby out with the bath water.

                                                                                                                                                              1. 3

                                                                                                                                                                This is a good description of how I feel as well, but I’m not sure which incentives would help us get there. On the one hand technical measures like ranking these sites higher in search results might help, but the obnoxious ads situation is perhaps a matter of finding a compensation model for online content which works for both sides.

                                                                                                                                                                1. 5

                                                                                                                                                                  I don’t think we can expect businesses to incentivize this sort of stuff. There isn’t any money in this.

                                                                                                                                                              1. 11

                                                                                                                                                                My number one worry with encrypted email is that I will lose access to it permanently.

                                                                                                                                                                My number two worry with encrypted email is that nobody uses it so there’s no point.

                                                                                                                                                                My number three worry is traffic analysis, but it’s a long way behind one and two.

                                                                                                                                                                1. 8

                                                                                                                                                                  I’ve created so many public keys over the years because some thing (like the Ubuntu CoC or “signed” commits in GitHub) requires it and I’ve lost access to literally every one of them, mostly because I use a public key about once every five years. I would literally never set up email encryption because of your reason number one.

                                                                                                                                                                  1. 16

                                                                                                                                                                    What doesn’t help is the relative opaqueness of how gpg and its “keyring” work. With ssh it’s easy: I just have a ~/.ssh/id_ed25519 and ~/.ssh/id_ed25519.pub file, and public keys for verification are in ~/.ssh/authorized_keys. That’s pretty much all there is to it.

                                                                                                                                                                    I’ve never lost access to an ssh key or been confused about how it works. With gpg it’s so much harder.
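
For contrast, here is roughly all the state ssh needs, sketched against a throwaway directory standing in for the real ~/.ssh:

```shell
# A throwaway directory stands in for ~/.ssh in this sketch.
dir=$(mktemp -d)

# Generate an ed25519 keypair: the private key and its .pub sibling.
ssh-keygen -t ed25519 -f "$dir/id_ed25519" -N '' -q

# Granting access is just appending the *public* half, one line per
# key, to the remote account's authorized_keys file.
cat "$dir/id_ed25519.pub" >> "$dir/authorized_keys"

ls "$dir"   # authorized_keys  id_ed25519  id_ed25519.pub
```

Revoking access is the mirror image: delete the matching line from authorized_keys.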

                                                                                                                                                                    1. 5

                                                                                                                                                                      I agree with this so, so much. SSH is almost trivial to understand, and there are even concrete benefits to setting it up and using it!

                                                                                                                                                                      1. 10

                                                                                                                                                                        It’s simple to use for simple cases but a bunch of things are complicated with SSH, for example:

                                                                                                                                                                        • If a key is compromised, how do I revoke access to it from all of the machines that have it in their authorized_keys files?
                                                                                                                                                                        • If I want to have one key per machine, how do I add that key to all of the machines I want it to access?
                                                                                                                                                                        • How do I enforce regular key rollover?

                                                                                                                                                                        You can do these things with PKI but now your key management is both complicated and very different from ‘normal’ SSH.
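
The per-machine half of that revocation is at least mechanical; a hedged sketch (the directory and key lines below are made up for illustration):

```shell
# Sketch: drop one compromised public key from an authorized_keys file.
dir=$(mktemp -d)
printf 'ssh-ed25519 AAAA... old@laptop\nssh-ed25519 BBBB... desk@home\n' \
    > "$dir/authorized_keys"
echo 'ssh-ed25519 AAAA... old@laptop' > "$dir/compromised.pub"

# grep -vF -f keeps every line that does NOT match a line listed in
# compromised.pub, i.e. every key except the compromised one.
grep -vF -f "$dir/compromised.pub" "$dir/authorized_keys" \
    > "$dir/authorized_keys.new"
mv "$dir/authorized_keys.new" "$dir/authorized_keys"
```

The painful part is the fan-out: you still have to repeat this on every host that ever had the key, and keeping that host list complete is exactly the bookkeeping problem.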

                                                                                                                                                                        SSH is also solving a much simpler problem because it only needs to support the online use case. If I connect to a machine with SSH and it rejects my key, I know immediately and it’s my problem. The protocol is interactive and will try any keys I let it, so I can do key rollover by generating a new key, allowing SSH to use both, and eventually replacing the public key on every machine I connect to with the new one. If I send an encrypted email with the wrong key, the recipient gets nonsense.

                                                                                                                                                                        The offline design is the root cause of a lot of the problems with email. It was designed for a world where the network was somewhat ad-hoc and most machines were disconnected. When my father’s company first got Internet email, they ran an internal mail server with a backup MX provided by their ISP. When someone sent them an email, it would go to the ISP; when they dialed in (with a modem, which they did every couple of hours), the backup MX would forward the mail to them and they’d send their outgoing mail, which went via the same multi-hop relaying process. Now, email servers are assumed to be online all of the time, and it would be completely fine to report to the sender’s mail client that the recipient’s mail server is not reachable and ask them to try again later. If you define a protocol around that assumption from the start, it’s completely fine to build in an end-to-end key-exchange protocol, and a huge number of the problems with encrypted email go away.

                                                                                                                                                                      2. 1

                                                                                                                                                                        With SSH, however, it’s trivial to create a new key and get it installed on the new server. Because SSH keys don’t carry a decentralized “web of trust” the way OpenPGP does, there is no historical or technological baggage requiring you to carry your keypair around. I’ve been through so many SSH keys over the years on personal and corporate systems, yet it has never once bothered me.

                                                                                                                                                                      3. 8

                                                                                                                                                                        This is one reason I started signing every email I send years ago, and also why I sign every git commit. It forces my key to be a core part of my workflow.
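
A minimal sketch of that git setup (the key id is a placeholder, and it’s shown on a throwaway repo; with --global the same settings apply everywhere):

```shell
# Throwaway repo so the example doesn't touch your real config.
repo=$(mktemp -d)
git -C "$repo" init -q

# 0xDEADBEEF is a placeholder for your own key id.
git -C "$repo" config user.signingkey 0xDEADBEEF
git -C "$repo" config commit.gpgsign true   # sign every commit by default
git -C "$repo" config tag.gpgsign true      # and annotated tags too

git -C "$repo" config --get commit.gpgsign   # prints: true
```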

                                                                                                                                                                        1. 3

                                                                                                                                                                          Similarly, I also use gpg to encrypt documents, and to encrypt passwords in pass. You’ll also rarely need to interact with it for Arch packages. Can’t beat its ubiquity.
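
Document encryption is one of the friendlier corners of gpg; a sketch using a throwaway homedir and a stand-in passphrase:

```shell
# Isolate the example from your real keyring.
export GNUPGHOME=$(mktemp -d)
chmod 700 "$GNUPGHOME"

dir=$(mktemp -d)
echo 'meeting notes' > "$dir/notes.txt"

# Symmetric (passphrase-only) encryption: no keypair involved.
# 'hunter2' is a placeholder passphrase for the example.
gpg --batch --pinentry-mode loopback --passphrase 'hunter2' \
    --symmetric --output "$dir/notes.txt.gpg" "$dir/notes.txt"

gpg --batch --pinentry-mode loopback --passphrase 'hunter2' \
    --decrypt "$dir/notes.txt.gpg" 2>/dev/null   # prints: meeting notes
```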

                                                                                                                                                                        2. 2

                                                                                                                                                                          If you want, you can make a “master” key that you keep permanently in one place (and at backup locations), and then sign things with subkeys that are themselves signed by the master key. That way you never lose anything if you lose a subkey but not the master key. Not a great solution, but still helpful.
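
With GnuPG ≥ 2.1 that split can be scripted; a hedged sketch against a throwaway homedir (the name, email, and empty passphrase are placeholders for the example):

```shell
# Throwaway homedir so the example doesn't touch your real keyring.
export GNUPGHOME=$(mktemp -d)
chmod 700 "$GNUPGHOME"

# Certify-only primary ("master") key: it only vouches for subkeys,
# so it can live offline.
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Example <me@example.org>' ed25519 cert never

# Fingerprint of the primary key we just created.
FPR=$(gpg --list-keys --with-colons | awk -F: '/^fpr/ {print $10; exit}')

# Day-to-day signing subkey, valid one year; if it is lost or leaks,
# only the subkey is replaced and the identity survives.
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-add-key "$FPR" ed25519 sign 1y
```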