Threads for tendstofortytwo

    1. 1

      This looks great! I wanted something like this recently, ended up using Pug.js and dealing with the syntax.

      Do you have plans for express/fastify middleware?

      1. 2

        You’d have to ask the author :]

        For express, I think this would be a view engine as opposed to a middleware.

    2. 3

      You can also use De Bruijn indexing to avoid all of the bound variable / capture issues.

      I love thinking about the semantics of LC because it all boils down to beta reduction. Beta reduction is what makes an LC expression executable. It also turns LC into an effective state machine - the result of a beta reduction step depends on the result of prior reduction steps (aka term rewriting).
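
      Concretely, that state-machine view can be sketched with a tiny interpreter. This is my own illustrative representation (not from the posted project): de Bruijn-indexed terms with a single beta-reduction step.

```python
# Terms: ("var", i) | ("lam", body) | ("app", fn, arg), with 0-based de Bruijn indices.

def shift(t, d, cutoff=0):
    """Add d to every free variable in t (indices >= cutoff are free here)."""
    if t[0] == "var":
        return ("var", t[1] + d) if t[1] >= cutoff else t
    if t[0] == "lam":
        return ("lam", shift(t[1], d, cutoff + 1))
    return ("app", shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, j, s):
    """Replace variable j in t with term s."""
    if t[0] == "var":
        return s if t[1] == j else t
    if t[0] == "lam":
        return ("lam", subst(t[1], j + 1, shift(s, 1)))
    return ("app", subst(t[1], j, s), subst(t[2], j, s))

def beta(t):
    """One reduction step at the root: (lam body) arg -> body[0 := arg]."""
    if t[0] == "app" and t[1][0] == "lam":
        return shift(subst(t[1][1], 0, shift(t[2], 1)), -1)
    return t

# (lam x. lam y. x) z  ->  lam y. z   -- the constant function returning z
k_applied = ("app", ("lam", ("lam", ("var", 1))), ("var", 5))
print(beta(k_applied))  # ('lam', ('var', 6)): z's index is bumped under the new binder
```

      Capture avoidance is handled entirely by the shift inside subst – exactly the bookkeeping that named variables would need alpha-renaming for.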

      1. 2

        Or, if you want more efficient reduction, pointers/handles between lambdas and their bound variables. Doing this properly is considerably more complicated, though.

      2. 1

        I considered using de Bruijn indexing, but this keeps the lambdas easier to read imo - though now I’m wondering if there’s a way to store variable names in addition to the indexes and switch back from de Bruijn indexing to the regular using-variables kind of abstractions 🤔

        1. 3

          Yea I’m no fan of the readability of De Bruijn indexing, but it’s there because it solves a problem.

          Since DB indexes are just sequential starting from 0 (or does it actually start at 1?), you should be able to store the variable names in an array and use the DB index to retrieve the name. They’re 1 to 1.
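
          That round trip is easy to sketch: keep the original binder name on each lambda node, and resolve an index by walking the stack of enclosing binders. (One nuance: an index is the distance to its binder, not a position in one global array, so the lookup structure is a per-scope stack rather than a flat list.) A minimal sketch, with my own representation:

```python
# Terms: ("var", i) | ("lam", name, body) | ("app", fn, arg)
# Each lambda remembers its original binder name; index i picks the i-th
# enclosing binder, so a stack of names recovers the surface syntax.

def show(t, env=()):
    if t[0] == "var":
        i = t[1]
        return env[i] if i < len(env) else f"?free{i}"
    if t[0] == "lam":
        name, body = t[1], t[2]
        return f"(λ{name}. {show(body, (name,) + env)})"
    return f"({show(t[1], env)} {show(t[2], env)})"

# λx. λy. x y   encoded as   λ. λ. 1 0
term = ("lam", "x", ("lam", "y", ("app", ("var", 1), ("var", 0))))
print(show(term))  # (λx. (λy. (x y)))
```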

    3. 1

      Why is iwd considered superior to wpa_supplicant? Asking cause I use the latter and am wondering if there’s a reason to switch.

      1. 3

        Strictly speaking from a usability standpoint and previous experience, it’s easier to set up the initial connection using iwd because of the TUI (iwctl); with wpa_supplicant I often need to do some file-based configuration before getting started.

        1. 1

          I see - so if both are being used as just a networkmanager backend on a DE, there isn’t much difference?

    4. 6

      I tried Debian GNU/Hurd a few months back, and my experience wasn’t as smooth as OP’s… I ran into an issue where all the packages were signed by a key that was no longer hosted on Debian’s keyservers, making it impossible to run apt without bypassing all signature verification. And even after that, the system was kinda unstable and didn’t boot on QEMU after the first time I got a GUI running.

      My original article about this is here (page 24) if anyone is interested:

    5. 2

      This is helpful, thanks for sharing! I’m just setting up a ZFS pool of my own, and it’s so annoying that there’s no good solution to the “add singular drive to raidz vdev” problem (the 2021 patch seems to not have merged?).

    6. 3

      inb4 why not rust

      1. 5

        I think the easiest explanation might be “Rust already has a HashMap”:

        I think it would be cool to see this implementation ported to Rust (and other languages!) just to see what sort of mechanisms those languages have to make the programming or interface nicer.

        1. 2

          A C++ version would be nice just to show how much simpler / safer the code is, with std::string and std::unique_ptr. Maybe I’ll write it once I get to a real keyboard…

    7. 44

      it’s a bit sad and funny at the same time that the “there should be only one obvious way to do things” language is the one that has this problem.

      thanks for the article! as a primarily-not-python dev the guidelines in the other linked article line up with what I’ve mostly discovered through chance and accident:

      1. 42


        After being a Python dev for several years, I’ve come to the conclusion: don’t use Python.

        1. 8

          I also moved away from Python in part because packaging was a pain. The bad news is, when the Python team reads reports like yours, they just spin it away and say there’s nothing to see here:

          1. 9

            The OP really hammers home my feeling that whenever the “answer” is virtual environments, now you have 30 different usability problems instead.

            It really rubs me the wrong way that in that blog post, the answer given to smart people like those on Changelog who say “I do pip install” is “don’t use pip install”, with the implication that they’re dummies for doing so. If smart people are doing the wrong thing, you’re supplying the wrong thing. It shouldn’t be that easy for smart people to hold it wrong.

          2. 8

            Yeah, Brett in particular is one person involved in the project that I despise. The few times I’ve interacted with him over various Python GitHub issues have all ended unpleasantly, with an air of “I’m always right, and oh, I’m also the mod here, so issue locked after I post the last reply”

          3. 1

            What are you using instead?

            1. 12

              Mostly Go. I still use Python as an AWK replacement for throwaway work, but for anything past that I just use Go. And on the frontend I use JS, of course.

            2. 6

              At $DAYJOB? Kotlin + SpringBoot. Privately? Rust / PowerShell. I still use Python for all of the common scripting needs, but I don’t build applications with it anymore.

              1. 1

                That’s quite a hard change, from Python to Kotlin + Spring Boot…

                Kotlin is much more complex (it has MANY language features) and Spring Boot is quite a beast in the framework world.

                I don’t expect it to be an easy switch for anyone… did you learn on the job, or did you use special resources that helped you a lot?

                1. 2

                  did you learn on the job, or did you use special resources that helped you a lot?

                  Like most things, I teach myself and binge on whatever knowledge I can find. The hard thing about Spring Boot, to me, is the legacy of things: there are 3–4 different ways to accomplish DI and configuration, depending on when a given piece of documentation was written, as trends changed.

                  I don’t find Kotlin particularly challenging coming from Python, but I have prior professional experience with .NET langs (PowerShell, C#, F#) and Rust, which has a much stronger type story.

        2. 4

          Yep, I was a professional Python developer for a decade and between the terrible performance (with virtually no good options for optimization), the package management issues, and the abysmal typing story I absolutely can’t recommend it to people.

          Unless you’re doing data science, just use Go instead.

        3. 3

          I speedran that over the past year at work. I went from Python 101, to operationalizing some proof-of-concept scripts from my director and running them in Kubernetes, all the way back to “I’d rather not do this in Python.”

          Go, while initially being hard to understand, has been a comparative breath of fresh air.

        4. 3

          Your tldr is absolutely the right way to handle Python packaging.

          I’m not on the core team, and god knows I’m comfortable criticizing their decisions, but I genuinely don’t see why this is a problem.

        5. 2

          For my world (large web services) I feel weird using a language whose runtime doesn’t utilize multiple cores at this point. Which means no Python, no JavaScript, no Ruby… Fortunately we have good options: the JVM languages (Java, Kotlin, Clojure), Go, Rust, the .NET languages (mostly C#), the BEAM languages (Erlang and Elixir).

      2. 12

        Glad it could help you.

        The community is buzzing with attempts to fix those issues this year, so I’m hoping those posts will become obsolete one day.

        Flask’s author is attempting something interesting with rye:

        Trio’s author is drafting a spec for the equivalent of wheels, but for the whole python interpreter:

        Not advocating for using them right now, but the fact is that bootstrapping Python is finally acknowledged as one major cause of packaging issues and a priority to solve.

        1. 2

          I don’t find that this article effectively supports many of the points made in the summary article. That might be because it is structured differently, so I failed to match up the sections. I also think that homebrew, poetry, anaconda, etc are too dissimilar to make such broad statements.

          My takeaways were:

          1. Don’t use the latest feature release (I’d go further and apply this to most software if you want stability).
          2. Don’t mix and match different distributions.
          3. Be explicit about what you’re using, and use environments to make shortcuts like python3 (over an absolute path) unambiguous.

          Your proposed setup is a viable way to accomplish this, but if you apply that advice to a different distribution like conda, most of the anaconda failure modes you described go away. I grant that there are many more packages available in pip than in conda, but there is enough coverage that the difference seems like an advanced use case or a viable tradeoff. I don’t know about the other solutions you mentioned, like homebrew, because I’ve never used them.

          Pip doesn’t represent the compiled dependencies in its dependency graph deeply enough for me to trust it, and given multiple viable solutions, I don’t see any support for (2) or (3).

          (3) and (6) are a rebuttal of (A) and (B). If the user is hitting issues with their path, it’s because they put things there for convenience. Is this saying anything more than they have an invalid workflow and should just inconvenience themselves by always doing it the long way?

          At least for pip and conda, (4), which amounts to “be consistent”, is largely a rebuttal of (E) and (F). A user got themselves into hot water because they were inconsistent, and your solution is to tell them to be consistent.

          Both of these rebuttal-style pieces of advice are too surface-level, since they don’t acknowledge any motivations that might have gotten the user into that state, and just ask the user not to make mistakes.

          From my takeaways above, I wonder if a better approach might be a standalone tool that finds and points out violations. Are there multiple environments on the path? Give a warning. Are there packages outside of a venv? Warning. Etc.

          You also reference new solutions, but I don’t see how a new solution could address the core issue you’re pointing out, which is being inconsistent about which package manager you’re using.

          Quoted for reference below:

          1. Don’t install the latest major version of Python
          2. Use only the installer on Windows and Mac, or official repositories on Linux.
          3. Never install or run anything outside of a virtual environment
          4. Limit yourself to the basics: “pip” and “venv”
          5. If you run a command, use “-m”
          6. When creating a virtual environment, be explicit about which Python you use

          A. The user PATH is wrong.
          B. The user doesn’t know what Python they are using.
          C. Python is broken.
          D. They don’t have the permission to do something.
          E. They installed incompatible things together.
          F. They are missing a dependency.

      3. 8

        I’ve found that this happens in other areas in Python too. You have unittest and doctest. You have os.path, pathlib and glob that have overlapping functionality. You have urllib, urllib2 (and urllib3).

        The whole problem that the article describes is also very complicated: there’s virtualenv, venv, pyenv, pyvenv…you can easily find SO questions with people confused about these things.

        Python was surprisingly difficult to get into coming from another language. The whole “there’s only one way to do it” seems long gone.

        1. 18

          Indeed – Python was created 30 years ago, before Java, around the time the first Game Boy came out.

          It accumulated a lot of things, and it’s a constant battle to balance what to change (to keep the project lean and modern) and what to keep (to maintain compat).

          Whatever move you choose, someone in the community will say the core devs are doing it wrong.

          I read as many posts saying Python is moving too fast as ones saying it’s not adopting “feature x” fast enough, and as many people saying we should get rid of the GIL as saying they are deprecating too many APIs.

          When you are as popular as Python, with a community as diverse, it’s a very hard job.

          1. 2

            When you are as popular as Python, with a community as diverse, it’s a very hard job.

            True. I’m in the “move fast, adapt or die” camp, which works pretty well for Web products with continuous development. It’s not that much work to adopt language ideas this way. But I can see how Python, which is present in plenty of different (and slower) industries, would have trouble disseminating this view in those other camps.

          2. 2

            It accumulated things because the maintainers didn’t like to say “no” often enough. Go is clearly much younger, but it has changed quite a lot less in its decade and a half of existence than Python did in any 15 year span (generics being the most significant change).

        2. 10

          The whole “there’s only one way to do it” seems long gone.

          Even when it was crafted, it was a bit of a tongue-in-cheek dig at Perl that Python never really stood for (the one way). It was ironically aspirational.

        3. 10

          There are default standard packaging tools. I really strongly recommend people use them (and have written up how to), rather than turning to a rickety Jenga tower of homegrown/third-party alternatives, because that is actually where the pain and brokenness comes in.

          The main reasons people never take that advice are:

          1. The approximately fifteen vigintillion posts on the internet claiming that “Python packaging” is horribly broken unless you use the author’s preferred rickety Jenga tower of homegrown/third-party alternatives
          2. People who want a unified high-level project management and development lifecycle tool rather than a package management tool, and try to cobble one together from packaging tools.

          The second one is a problem worth working on, but only if people realize it’s not purely a “packaging” issue. The first one is sadly unsolvable, because all that stuff will stick around forever.

      4. 6

        It’s not the only one that has the problem – npm/JS has it, and so does Ruby. Almost every scripting language in use at any kind of real scale has this problem. It’s somewhat inherent to them, I think, because they require locating and loading a large quantity of files reliably. To do so they have to know where those files are, and “where” is different between projects, because each of the files they need to load may be a different version in each project. And you haven’t even touched the versions of the interpreter those versions work with yet. It’s an exponential explosion of dependency complexity.

        1. 6

          npm seems fine, I think? the default is to be local to your current directory and you can install globally with -g. you can use yarn if you like, but both of them seem to do the right thing and you can’t really shoot yourself in the foot by using one or the other.

          unless there’s something in the JS ecosystem I’ve missed recently?

          1. 3

            The only problem described by the article that the nodejs/npm ecosystem has is the problem of getting the right version of node installed and in the PATH. nvm is the only reliably correct way to juggle multiple versions, and if a person is new to the ecosystem they might not know it, especially if their instinct is to reach for apt/rpm/whatever.

            edit: also, previous to the existence of lockfiles in npm you could end up with a different set of dependencies installed than the application/tool was tested with, which could obviously cause issues. This hasn’t been an issue since npm v7 (and it’s also the problem that yarn was originally created to solve).

            1. 4

              I think nix counts as another way to juggle multiple node versions. But when considering the angle that newcomers may not be aware of nvm, never mind nix!

              1. 6

                The nice thing about Nix is that this knowledge is transferable between projects. At work, we’re using Nix with Clojure, Postgres and NodeJS. At home I’m using it with CHICKEN Scheme and Postgres, and for a couple of freelancing projects I’m using it with Python and Postgres.

                That’s so much easier than “oh, for Node you need nvm, for Ruby rvm, for Clojure/Java you need to set your JAVA_HOME, for Python some arcane venv/poetry/pip setup, for CHICKEN you use chicken-belt” etc. And for heavier projects you’d need some Docker-compose monstrosity. No thank you sir!

          2. 3

            Picking up the local node_modules folder and having a default package.json helps the everyday work a lot.

            However, the compiled-extension ecosystem is very poor in JS.

            -g can break things, which is why they introduced npx, then npm create. But as you just showed with your comment, the paradox of choice strikes in JS as well.

            Also, because imports and namespaces are quite a new thing in the language, you have to use a build system for client-side code, and that is quite the wild west.

            Picking up node_modules in the current directory means you can inject code in pretty much anything by casually dropping a directory in the right place, PHP style.

            All in all, I’d still say it’s better than in Python, but there is a lot to improve.

            1. 1

              A directory’s presence in node_modules doesn’t mean it’s executed, it has to be explicitly imported (or require()ed) by the code that’s running.

          3. 3

            Python virtualenvs are basically also just this. I’ve had more issues with npm in practice, from the same style of env confusion that hits Python (some package scripts’ requirements not evaluating properly), but honestly I’ve found it all roughly as painful.

            Node also has a bunch of people experimenting with different packages or task runners, lots of projects making monorepos that make a bunch of packages… so that sort of stuff is its own hell if you need to go even an inch off the beaten path.

            I think the core difference is that people expect JS to be kinda slow and install a billion packages, whereas in Python it’s pretty common to have projects with only a handful of deps, so the various songs and dances are more noticeable. Also, Python users honestly have higher expectations. “Just install this CLI tool with npm -g” is an ask that a lot of people do not appreciate when it’s pip. Partly it’s the Python language-version issues, but also that people really want to be able to provide binaries that work in “any” environment.

        2. 4

          The biggest difference between Ruby and Python here is, in Ruby there’s only one right way to do it.

          1. 1

            Well, two ways: basically ‘use an rvm gemset’ or ‘use vendor/bundle’. Which you can hybridize to ‘an rvm gemset shared between all your gems/apps on your development machine + shared/vendor/bundle for capistrano-based deploys on the production machines’. And there are some rough edges. But I rarely see complaints about this state of affairs like the ones I regularly see from the Python community. I’ve never used Python for anything large and have never used any of these environment virtualisation tools, so I can’t really pinpoint the difference, but I doubt it’s just a difference in community: it’s not like Ruby doesn’t have its fair share of complaints and drama.

            1. 1

              rvm is a holdover from the pre-Bundler days, but I have literally never seen it used in a production environment. It’s just Bundler, and has been for more than a decade.

      5. 4

        It’s been a hell of a long time since there was one obvious way to do anything in Python, and pretty much every pep adds more features on.

      It’s clear that at least since Python 3 (but in practice before), one obvious way has absolutely not been a goal of Python’s design.

        1. 6

          pretty much every pep adds more features on.

          100%. I miss the Python 2.7 days; it wasn’t perfect, but I could read any code I came across.

          I only use Python for automation. I’m sure the PEPs solve problems for folks, but the language is starting to feel overwhelming in addition to all the other stuff I need to stay up-to-date on.

      6. 2

        Having more than one way to do things is almost inevitable if you didn’t get it right the first time. If time passes after your insufficient solution is official, either people will live with the pain or make a backwards incompatible system. Your ecosystem is now forked. Unless one of the ecosystems gets >90% buy in, it’s cheaper to make a third solution since it splits less of the ecosystem.

    8. 3

      this website is so hard to read… this article has three banner ads and one popup leading to… itself?

    9. 1

      This seems to put a lot of responsibility on one person - a bit scary both as the company (low “bus number”) and as said person (you will be blamed if things go wrong, even if things were beyond your control).

      1. 4

        If you want redundancy, you need to pay for it. Communication and coordination are overhead, and that’s why single-threaded programs still persist today. When we add more people to a project, we can do so for a number of reasons.

        For me, those four areas listed help me answer the question of “what do we hope to gain by adding someone?” – the answer could be resiliency. Often people are added in an effort to speed things up, but they accidentally slow things down.

        you will be blamed if things go wrong

        This is more related to having a culture of trust and openness than to ownership and division of work in my opinion. If the people on the hook can’t ask for help or get it then it’s a systemic failure, not an individual one.

        1. 1

          Yeah, that’s fair. I guess I just don’t expect the same person to necessarily be good at all four things at the same time.

          1. 2

            I agree it’s rare. But as hard as it is to get all those skills in one person, it is harder to get four different people to coordinate their decision-making in an optimal way.

            I think that in the ideal world when they start out they would maybe have four different coaches. Or something like that.

            One other thing it doesn’t touch on is interest and energy. If I love doing one of those and hate another then I might do it poorly. Most “how we work” materials and literature assume attention and energy are infinite and only knowledge might be missing. To do the work they must be ready, willing, and able.

      2. 2

        I agree; this is too much for one person to handle this role for an entire product. It feels healthier to assign this on a per-feature basis so that you can rotate around the responsibilities on the team; smaller features can be led by more junior developers with a healthy level of support from seniors. It’s a great way to nurture growth IME.

    10. 2

      NAT gets a lot of hate (perhaps deservedly) but I found the intended purpose of these first NAT devices quite neat - to hack around a bad prior network configuration without hours of manual effort.

    11. 4
      • Attending a local pride event
      • Giving up on TrueNAS for my NAS operating system, and installing FreeBSD instead
      1. 2

        I’m using XigmaNAS (formerly NAS4Free) and really like it. It’s “boring” in the way you want that sort of thing to be.

        1. 1

          ooh, this looks interesting, thanks! I was going to just use NextCloud because of their mobile app support - how is xigmanas on that front?

          1. 1

            XigmaNAS is the OS, so you’d install Nextcloud in a FreeBSD jail. However – I have no experience with this! I just use NFS, ssh, etc.

            1. 1

              ahh, alright. thanks!

    12. 2

      I don’t know if it helps or is a better experience, but FreeBSD has VSCodium in ports:

      1. 8

        It’s worth noting that the reason VS Code support on *BSD is not great is because Electron only officially supports platforms that Chromium supports and Google refuses to allow patches to support *BSD to be merged into Chromium, even with active maintainers and people willing to pay for and manage CI infrastructure.

      2. 3

        I should mention that in the article for sure! For me, I prefer sticking to OpenBSD (I’m stubborn) – so I need to default to hacks 😛

        1. 3

          do you also own an electric car and heat your house with a coal furnace

          1. 4

            No electric car but my house is heated with propane and wood ;)

            1. 2

              it’s always something isn’t it

        2. 2

          ahh, I was hoping it could help with the port to OpenBSD. if not, that’s fine :)

    13. 24

      Honestly, for me the big thing about Arch isn’t a lack of “stability”, it’s more the number of sharp edges to cut yourself on.

      For example, the author mentioned that the longest they’ve gone without a system update is 9 months. Now, the standard way to update an Arch system is pacman -Syu, but this won’t work if you haven’t updated in 9 months – the package signing keys (?) would have changed and the servers would have stopped keeping old packages, so what you instead want to do is pacman -Sy archlinux-keyring && pacman -Su.

      There’s a page on the ArchWiki telling you about this, but you wouldn’t find it until after you run a system update and it fails with some cryptic errors about gpg. It also doesn’t help that pacman -Sy <packagename> is said to be an unsupported operation on the wiki otherwise, so you wouldn’t think to do it yourself, and might even hesitate if someone on a forum tells you to do it. Any other package manager would just… take care of this.

      It’s little things like this that make me not want to use Arch, and that I think give it a reputation for instability – it seems to break all the time, but that’s not actually instability, that’s just The Arch Way, as you can clearly read halfway down this random wiki page.

      1. 10

        As it happens, they recently added a systemd timer to update the keyring.

        1. 4

          ahh, that’s a good start. Still doesn’t help my system that’s been sitting shut down in a basement for four months, but at least there’s something.

      2. 8

        If you’re worried about sharp edges like that, then yeah, you probably don’t want to deal with Arch. To someone who uses the official install guide, though, it should be pretty clear that they exist. You hand-prepare the system, and then install every package you want from there. It’s quite a bit different from a distro that provides a simple out-of-the-box install. (I’m ignoring the projects that try to simplify the install for Arch here.)

        It’s also a rolling release. Sure, you could go for 9 months without updating, but with a rolling release that’s a long time. You’ll likely be upgrading most of your installed software in a single jump by doing so. That’s not bad, per se, but it would be mildly concerning to me. There’s no clean rollback mechanism in Arch, so this calls for an extra level of caution.

        I think the perception of a lack of stability does come from the Arch way, but from my experience it’s usually down to changes in the upstream software being packaged and clearly nothing that the distro is adding. It seems obvious to me that if you’re pulling in newer versions of software constantly you will have less stability by design. There’s real benefit in distros that take snapshots when it comes to predictability and thus stability.

        I use Arch on exactly one system, my laptop/workstation, and I’m quite happy with it there. I get easy access to updated packages, and through the AUR a wide variety of software not officially packaged. It’s perfect for what I want on this machine and lets me customize my desktop environment exactly how I want. Doing the same with Debian and Ubuntu was much more effort on my part.

        I wouldn’t use Arch on a server, mostly because I don’t want all the package churn.

        1. 4

          It’s also a rolling release. Sure, you could go for 9 months without updating, but with a rolling release that’s a long time. You’ll likely be upgrading most of your installed software in a single jump by doing so. That’s not bad, per se, but it would be mildly concerning to me. There’s no clean rollback mechanism in Arch, so this calls for an extra level of caution.

          That’s actually why I stopped using Arch: I’ve got some devices that I won’t use for 6–12 months, but then I’ll start to use them daily again. And it turns out that if you do that, pacman breaks your whole system, and you’ll have to hunt down package archives and manually untangle the mess.

          1. 3

            I wish there was a happy medium between Arch and Debian. Something relatively up to date but also with an eye for stability, and also minimalist when it comes to the default install.

      3. 4

        I think that’s a bit of an exaggeration, when compared to other Linux distros where upgrades are always that scary thing.

        Also, the keyring stuff… I’m not sure when that was introduced, so my experience might have been from before that?

        I’ve done pretty long jumps on an Arch Linux system on a netbook for my mother, who isn’t really good with computers and technology in general. Just a few buttons on the side in Xfce worked really well, until the web became too demanding for first-gen Intel Atoms. But I updated it quite a bit later, looking for some files or something, and I don’t remember that being a big issue. I do remember being surprised that it wasn’t.

        I actually had many more problems, like a huge amount of them with apt for example.

        Worse, of course, are package managers trying to be smart. If there’s one thing that I would never want to be smart, it’s a package manager. I haven’t seen an instance yet where that didn’t backfire.

        1. 1

          Upgrades have been relatively fear-free for me on both Ubuntu and Fedora, though that may be a recent thing, and due to the fact that my systems don’t stray too far from a “stock” install.

          One thing I will give Arch props for is that it’s super easy to build a lightweight system for low-end devices, as you mentioned. Currently my only Arch device is an Intel Core m5 laptop, because everything else chugs on it.

          1. 1

            Have you tried Alpine? It’s really shaping up to be a decent general purpose distro, but provides snapshot stability. It’s also about as lightweight as you can get.

            1. 1

              I haven’t for that particular device, but in my time using it I couldn’t get it to function right on another netbook I had. Probably user error on account of me being too used to systemd, but I’m not in a rush to try it again either.

      4. 1

        Good to know, thanks. I’m on my first non-test Arch install at the moment and so far I’ve been surprised by the actual lack of anything being worse than on other distros. Everything worked out of the box.

    14. 10

      “Never” excludes a useful practice of embedding the ID’s type.

      Given the UUID 946f7674-2693-4e3b-b44c-92fb88a20f3e, what can you say about it? If it shows up in logs, or in an unexpected part of the UI, where would you begin trying to figure out what it represents? With a bare UUID, you’ve lost the metadata of what entity space that ID references.

      Now, let’s apply two transformations – base62 it for readability, and prefix it with {type}_. You might end up with user_4W5nmi8kGhBq9rdYbLtRvC. This is shorter than the raw UUID, double-clickable, and you know what it identifies.
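The two transformations above can be sketched in a few lines. This is a hypothetical implementation (the function name, alphabet ordering, and resulting string are my own choices, not a standard): strip the dashes, parse the UUID's 128 bits into an integer, then repeatedly divide by 62 to pick characters from a 62-symbol alphabet, and prepend the type prefix.

```rust
// Alphabet choice is arbitrary; any fixed ordering of 0-9, A-Z, a-z works.
const ALPHABET: &[u8] = b"0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";

fn prefixed_id(prefix: &str, uuid: &str) -> String {
    // Strip dashes, then parse the 32 hex digits as one 128-bit integer.
    let hex: String = uuid.chars().filter(|c| *c != '-').collect();
    let mut n = u128::from_str_radix(&hex, 16).expect("valid UUID hex");
    // Repeated div/mod by 62 yields base62 digits, least significant first.
    let mut digits = Vec::new();
    while n > 0 {
        digits.push(ALPHABET[(n % 62) as usize] as char);
        n /= 62;
    }
    digits.reverse();
    format!("{}_{}", prefix, digits.iter().collect::<String>())
}

fn main() {
    let id = prefixed_id("user", "946f7674-2693-4e3b-b44c-92fb88a20f3e");
    // 128 bits fit in at most 22 base62 characters, vs. 36 for the dashed UUID.
    assert!(id.starts_with("user_"));
    assert!(id.len() <= "user_".len() + 22);
    println!("{id}");
}
```

The exact string produced depends on the alphabet ordering, so don't expect it to match any particular vendor's encoding byte-for-byte.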

      1. 2

        I’m not convinced converting it to base 62 makes it more readable. Depending on your native language, it could be argued it makes it less readable.

        In fact, when it comes to UUIDs, I don’t like the fact that hexadecimal values A through F leak through at all. Hexadecimal is a low-level hardware detail leaking through to the user layer.

        One thing all of humanity seems to have in common are the digits 0 through 9. Those, and only those, should be used for unique IDs - in my opinion.

        1. 5

          You’re missing jmillikin’s primary point, which was the utility of embedding type information in the ID.

          As for base62, the point isn’t really readability, rather compactness. Base62 encodes nearly 6 bits per character instead of hex’s 4, so it’s about a third shorter. Decimal is even worse, at barely more than 3 bits per character. This matters a lot in a database that might store billions of these and which is constantly shuffling them around when doing queries.

          Your stance on decimal makes zero sense to me. This isn’t the user layer; database IDs aren’t meant to be human-readable. And a UUID is an uninterpreted string of 128 random bits. It’s not a number, and certainly not one humans would ever want to do arithmetic on. Of course it can be interpreted as a number, but so what?

          (There are legitimate concerns about the ability to transcribe an ID by voice, if it’s something a human might see, like a bank account. That’s why there are encodings like base58 that avoid some easily-confused characters like “l”, without sacrificing much density.)
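For concreteness, the density comparison above can be checked with a one-liner: a 128-bit ID needs ceil(128 / log2(base)) characters in a given base. (The helper name here is my own.)

```rust
// Characters needed to print a 128-bit value in a given base:
// ceil(128 / log2(base)).
fn chars_needed(base: u32) -> u32 {
    (128.0 / (base as f64).log2()).ceil() as u32
}

fn main() {
    assert_eq!(chars_needed(16), 32); // hex: the familiar UUID length, dashes aside
    assert_eq!(chars_needed(10), 39); // decimal is the least dense of the three
    assert_eq!(chars_needed(62), 22); // base62: roughly a third shorter than hex
}
```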

          1. 2

            You’re missing jmillikin’s primary point, which was the utility of embedding type information in the ID.

            I didn’t miss that point, I just didn’t address it. I think the idea has some merit.

            Your stance on decimal makes zero sense to me. This isn’t the user layer; database IDs aren’t meant to be human-readable.

            Wait a minute, the GP said:

            Now, let’s apply two transformations – base62 it for readability

            Surely “readability” refers to humans reading it, right?

            1. 1

              Agreed, “readability” is a strange thing to say about a random bit-string.

        2. 2

          I think the “readability” might come from it just being more dense, and having the “user_” prefix. A higher base is a great way to make things denser.

          Also, digits are definitely not common to all of humanity - there are several numeral systems in everyday use besides Arabic numerals. You might argue that Arabic numerals are “the most common”, but that also holds for the English alphabet.

          1. 2

            I think the “readability” might come from it just being more dense, and having the “user_” prefix. A higher base is a great way to make things denser.

            If density is the goal, applying base 62 seems appropriate. However, the GP said:

            Now, let’s apply two transformations – base62 it for readability

            I do not think using base 62 makes the blob more readable for a lot of humanity. I think sticking to decimal digits would achieve the goal of being more readable.

            Also, digits are definitely not common to all of humanity - there are several numeral systems in everyday use besides Arabic numerals. You might argue that Arabic numerals are “the most common”, but that also holds for the English alphabet.

            Decimal digits are pervasive virtually everywhere; the English alphabet much less so.

            I have co-workers with families in countries around the world, and they can attest to the fact that hexadecimal digits A-F leaking into the user layer makes their lives harder. It’s not clear to me why UI people think exposing everyone to hexadecimal is acceptable when decimal digits are far friendlier to a much wider range of humanity.

            1. 1

              Well, it’s not going to become “readable” in the sense that you’re going to be able to derive meaning from the blob. I considered the base62 more readable because the density makes it so that it interrupts the flow of a sentence less.

              I don’t personally see much difference between reading the English alphabet and Arabic digits, and my native language is Hindi, which uses neither. I suppose if you had a string of decimal digits that was just as short, like user_3290483902, I would consider that equally “readable”, though of course you get a smaller space of valid user IDs this way. Probably not a problem in practice?

              1. 2

                Well, it’s not going to become “readable” in the sense that you’re going to be able to derive meaning from the blob. I considered the base62 more readable because the density makes it so that it interrupts the flow of a sentence less.

                I have to sheepishly admit I probably got too hung up on the word “readability” there and was applying the wrong kind of context to the word.

                I agree that if it’s just an opaque blob of characters not meant to be “directly read,” being shorter does make it more readable because it consumes less display real estate.

                Mea culpa.

                This probably spills over from my annoyance when I occasionally have to type in a bunch of hexadecimal digits (for whatever reason). A numpad (keyboard) or a nice big decimal digit pad (phone) would make it so easy!

                And my (older) eyes also hate, “Wait, is that an 8? Or a B? Is that a zero, or an O?” etc. I’d rather just type in the additional decimal digits! </rant> ;-)

                1. 1

                  Ah, yeah, that’s fair. Having to key hex in like that could definitely get annoying. 😅 Not looking forward to being able to see your last point, but I suppose I’ll get there eventually. :’)

    15. 33

      The problem is that C have practically no checks, so any safety checks put into the competing language will have a runtime cost, which often is unacceptable. This leads to a strategy of only having checks in “safe” mode. Where the “fast” mode is just as “unsafe” as C.

      So, apparently the author hasn’t used Rust. Or at least hasn’t noticed the various benchmarks showing it capable of getting close to C performance, or in some cases outpacing it. Also, because of Rust’s safety, it’s much easier to write working parallelised code than in C, so you can get a lot of improvements that way.

      I’ve written a lot of C over the years (4 years of embedded software PhD), and now that I’ve seen what can be done with Rust, I never want to go back.

      1. 12

        The author notes that he does not consider Rust a C alternative, but rather a C++ alternative:

        Like several others I am writing an alternative to the C language (if you read this blog before then this shouldn’t be news!). My language (C3) is fairly recent, there are others: Zig, Odin, Jai and older languages like eC. Looking at C++ alternatives there are languages like D, Rust, Nim, Crystal, Beef, Carbon and others.

        Now, you could argue that it’s possible to create a C-like language with a borrow checker. I wonder what that would look like.

        1. 20

          C++ is a C alternative. The author dismisses rust without any explanation or justification (I suspect it’s for aesthetic reasons, like “the language is big”). For a lot of targets (non embedded), rust is in fact a valid C alternative, and so is C++.

          1. 8

            For a lot of targets, especially embedded, Rust is an amazing C alternative. Granted, currently it’s primarily ARM Cortex-M that has first-class support, but I find your remark funny, because from my perspective embedded is probably the best application of Rust and its features. Representing HW peripherals as type-safe state machines that won’t compile if you misuse them? Checked. A concurrency framework providing sane interrupt handling with priorities that is data-race- and deadlock-free without any runtime overhead, and that won’t compile if you violate its invariants? Checked. Embedded C is a joke in comparison.
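The "type-safe state machine" idea is usually the typestate pattern. Here is a minimal desktop-Rust sketch with invented names (real embedded HALs model it similarly, but against actual registers): a pin whose mode lives in the type, so calling an output method on an input pin is a compile error rather than a runtime fault.

```rust
use std::marker::PhantomData;

// Zero-sized marker types encoding the pin's mode at compile time.
struct Input;
struct Output;

struct Pin<Mode> {
    level: bool,
    _mode: PhantomData<Mode>,
}

impl Pin<Input> {
    fn new() -> Self {
        Pin { level: false, _mode: PhantomData }
    }
    // Consuming `self` makes the old Input handle unusable after reconfiguration.
    fn into_output(self) -> Pin<Output> {
        Pin { level: self.level, _mode: PhantomData }
    }
    fn is_high(&self) -> bool { self.level }
}

impl Pin<Output> {
    fn set_high(&mut self) { self.level = true; }
}

fn main() {
    let pin = Pin::<Input>::new();
    assert!(!pin.is_high());      // reads are only available in the Input state
    let mut pin = pin.into_output();
    pin.set_high();               // writes are only available in the Output state
    // pin.is_high();             // would not compile: is_high is Input-only
    assert!(pin.level);
}
```

The misuse is ruled out before the program ever runs, with no runtime checks left in the compiled code.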

        2. 13

          Adding a borrow checker to C requires adding generics to C, at which point it would be more C++-like than C-like. The borrow checker operates on types and functions parameterized by lifetimes, so generics are not optional, even if you do not add type generics.

          1. 6

            Also, not adding type generics is going to make the safety gained from the borrow checker a lot less useful, because now instead of writing the unsafe code for a Vec/HashMap/Lock/… once (in a library) and using it a gazillion times, you write it once per type.
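That "write the unsafe code once" point can be sketched with a toy wrapper (all names invented; real Vec/HashMap are far more involved): the single unsafe block is written and audited once, generically, and then reused at every element type.

```rust
// A toy owning cell over a raw pointer, standing in for the kind of
// manually managed storage Vec uses internally.
struct Slot<T> {
    raw: *mut T,
}

impl<T> Slot<T> {
    fn new(value: T) -> Self {
        Slot { raw: Box::into_raw(Box::new(value)) }
    }
    fn get(&self) -> &T {
        unsafe { &*self.raw } // the one unsafe block, audited once
    }
}

impl<T> Drop for Slot<T> {
    fn drop(&mut self) {
        unsafe { drop(Box::from_raw(self.raw)); }
    }
}

fn main() {
    // Without type generics, Slot would have to be rewritten (and
    // re-audited) separately for every one of these element types.
    assert_eq!(*Slot::new(42).get(), 42);
    assert_eq!(Slot::new("hi").get(), &"hi");
}
```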

          2. 3

            Isn’t this more or less what Cyclone is?

            1. 4

              Yes, it is. That is why Cyclone added polymorphic functions, polymorphic data structures, and pointer subtyping to C; check the Cyclone user manual. Not because they are cool features, but because they are required for memory management.

          3. 1

            Even though types and functions are parameterized by lifetimes, they do not affect codegen. So it should be possible to create a “C with borrow checker”.

            1. 1

              I don’t understand how codegen matters here. Clang and rustc share codegen… If C-like codegen (whatever that is) gives C-with-borrow-checker, C-with-borrow-checker is rustc.

              1. 1

                Oops, I guess I should’ve said “lifetimes do not affect monomorphisation”

                1. 1

                  This is still a mysterious position. You seem to think the C++-ness of templates comes from the monomorphisation code generation strategy, but most would say it comes from frontend processing. Monomorphisation is a backend implementation detail and does not affect user complexity, and as for implementation complexity, it is among the simpler features to implement.

                  1. 1

                    The whole point was that generics are not needed for a language to have a borrow checker.

                    If you call a generic function twice with different types, two copies of the function are generated. If you call a generic function twice with different lifetimes, only one copy is generated.

                    The borrow checker is annotation + static analysis; the generated code is the same. The same is not true for generics or templates.
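A quick sketch of that claim in Rust (function names are my own): the lifetime-generic function compiles to a single body, because lifetimes are erased before codegen, while the type-generic function is monomorphised once per concrete type.

```rust
// Generic over a lifetime only: one compiled body, no matter how many
// distinct lifetimes it is called with.
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() > y.len() { x } else { y }
}

// Generic over a type: monomorphised separately for each T it is used at.
fn first<T: Copy>(xs: &[T]) -> T {
    xs[0]
}

fn main() {
    let s = String::from("hello");
    assert_eq!(longest(s.as_str(), "hi"), "hello");
    assert_eq!(first(&[1, 2, 3]), 1);     // instantiates first::<i32>
    assert_eq!(first(&[1.5f64]), 1.5);    // instantiates first::<f64>
}
```

Whether a borrow checker could be formulated without any lifetime parameterization at all is a separate question, which is the point of contention in the reply below.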

                    1. 1

                      If you think this, you should write a paper. Yes, borrow checker is a static analysis. As it currently exists, it is a static analysis formulated to work on generics. As far as I know, no one knows how to do the same without generics.

        3. 5

          There was some recent discussion on Garnet which is an attempt to make a smaller version of Rust.

        4. 5

          This is the correct position on Rust.

          1. 32

            I disagree. To me Rust is a great C replacement, and Rust is incompatible with C++ both technically and philosophically. I’ve used C for two decades. I’ve never liked C++, but really enjoy Rust. I’ve written a C to Rust transpiler and converted projects from C to Rust.

            C programs can be converted to Rust. C and Rust idioms are different, but language features match and you can refactor a C program into a decent Rust program. OTOH C++ programs can’t be adapted to Rust easily, and for large programs it’s daunting. It’s mostly because Rust isn’t really an OOP language (it only has some OOP-like syntax sugar).

            I think people superficially see that both Rust and C++ are “big” and have angle brackets, and conclude they must be the same. But Rust is very different from C++. Rust doesn’t have inheritance, doesn’t have constructors, doesn’t have move constructors, doesn’t use exceptions for error handling. Rust’s iterator is a completely different beast than C++ iterators. Rust’s macros are closer to C++ templates than Rust’s generics. Lots of Rust’s language design decisions are at odds with C++’s.

            Rust is more like an ML language with a C subset, than C++.

            1. 9

              To me Rust is a great C replacement

              When you say “replacement”, what do you mean, exactly? For example, could C++ or Ada be great C replacements?

              I think some of the disagreements about Rust - and the whole trope about Rust being a “big” language - come from different people wanting different things from their “C replacement”. A lot of people - or maybe just a particularly vocal group of people on Internet forums - seem to like C not just because it can be used to write small, fast, native code, but because they enjoy the aesthetic experience of programming in C. For that sort of person, I think Rust is much more like C++ than C.

              Rust is very different from C++. Rust doesn’t have inheritance, doesn’t have constructors, doesn’t have move constructors, doesn’t use exceptions for error handling.

              Modern C++ (for some value of “modern”) doesn’t typically have inheritance or exceptions either. I’ve had the misfortune to program in C++ for a couple of decades now. When I started, it was all OO design - the kind of thing that people make fun of in Enterprise Java - but these days it’s mostly just functions in namespaces. When I first tried Rust, I thought it was just like C++ only I’d find out when I screwed up at compile-time rather than with some weird bug at run-time. I had no trouble with the borrow checker, as it just enforced the same rules that my team already followed in C++.

              I’ve never liked C++ because it’s too complicated. Nobody can remember the whole language and no two teams use the same subset of the language (and that applies to the same team at two different times too, as people leave and join). People who program alone, or only ever work in academia in small teams, might love the technical power it offers, but people who’ve actually worked with it in large teams, or long-lived teams, in industry, tend to have a dimmer view of it. I can see why people who have been scarred by C++ might be put off by Rust’s aesthetic similarity to it.

              1. 2

                I’ve never liked C++ because it’s too complicated. Nobody can remember the whole language and no two teams use the same subset of the language (and that applies to the same team at two different times too, as people leave and join).

                I think that’s a correct observation, but I think that’s because the C++ standard library and the language itself have over three decades of heavy, wide industry use across much of the depth and breadth of software development, generating demands and constraints from every corner of the industry.

                I do not think we have ‘solved’ the problem of a theoretically plausible definition of the minimal but sufficient set of language and standard library features that will be enough for 30+ years of use across everything.

                So all we have right now is C++ as a ‘reference stick’. If a newbie language compares well to that yardstick, we hail it. But is that the right yardstick?

              2. 1

                I definitely don’t use C for an “aesthetic experience” (I do not enjoy aesthetics of the preprocessor, spiral types, tediousness of malloc or header files). I would consider C++ also a C replacement in the same technical sense as Rust (native code with minimal runtime, C ABI, ±same performance), but to me Rust addresses C’s problems better than C++ does.

                Even though C++ is approximately a C superset, and Rust is sort-of a C superset too, Rust and C++ moved away from C in different directions (multi-paradigm mainly-OOP with sugar vs ML with more explicitness and hindsight). Graphical representation:

                Rust <------ C ----> C++

                which is why I consider Rust closer to C than C++.

                1. 2

                  Sorry for the obvious bait, but if you don’t like C, why do you use it? :-). If you’re looking for a non OOP language that can replace C, well, there’s a subset of C++ for that, and it’s mostly better: replace malloc with smart pointers, enjoy the RAII, enjoy auto, foreach loops, having data structures available to you, etc.

                  1. 5

                    In my C days I’ve been jealous of monomorphic std::sort and destructors. C++ has its benefits, but they never felt big enough for me to outweigh all the baggage that C++ brings. C++ still has many of C’s annoyances like headers, preprocessor, wonky build systems, dangerous threading, and pervasive UB. RAII and smart pointers fix some unsafety, but temporaries and implicit magic add new avenues for UAF. So it’s a mixed bag, not a clear improvement.

                    I write libraries, and everyone takes C without asking. But with C++ people have opinions. Some don’t like when C++ is used like “C with classes”. There are conundrums like handling constructor failures given that half of C++ users bans exceptions and the other half dislikes DIY init patterns. I don’t want to keep track of what number the Rule of $x is at now, or what’s the proper way to init a smart pointer in C++$year, and is that still too new or deprecated already.

        5. 1

          Now, you could argue that it’s possible to create a C-like language with a borrow checker. I wonder what that would look like.

          Zig fits in that space, no?

          1. 8

            Zig is definitely more C-like, but it doesn’t have borrow checking. I think Vale is closer. There’s MS Checked-C too.

            But I’m afraid that “C-like” and safe are at odds with each other. I don’t mean it as a cheap shot against C, but if you want the compiler to guarantee safety at compilation time, you need to make the language easy to robustly analyze. This in turn requires a more advanced static type system that can express more things, like ownership, slices, and generic collections. This quickly makes the language look “big” and not C-like.

          2. 7

            I don’t think Zig has borrow checking?

            1. 7

              It doesn’t. As I understand, its current plan for temporal memory safety is quarantine. Quarantine is a good idea, but if it was enough, C would be memory safe too.

              Android shipped malloc with quarantine, and here is what they say about it:

              (Quarantine) is fairly costly in terms of performance and memory footprint, is mostly controlled by runtime options and is disabled by default.

      2. 2

        Ada has made the same claims since 1983, and hasn’t taken over the world. Maybe Rust will do better.

      3. 1

        I think the big point here is that the author is talking in hypotheticals that don’t always pan out in practice. Theoretically, a perfectly written C program will execute faster than a Rust program because it does not have safety checks.

        That being said, in many cases those limited safety checks turn out to be a minuscule cost. Also, as you mentioned, Rust may unlock better threading and other performance gains.

        Lastly, I think it is important to note that software architecture, algorithms, and cache friendliness will often matter much, much more than Rust vs. C vs. other low-level languages.

    16. 4

      I’m very conflicted on this. On the one hand, I use GNOME everyday, and I really, really appreciate the cohesiveness that their way of doing things brings to the table. I’m sure folks love KDE’s infinite customizability or i3’s minimalism, but I really want something that Just Works when I need it to. GNOME has been by far the most “Just Works” for me - the defaults are coherent when you use them with a keyboard + multitouch trackpad, all it takes is a couple keybinds for Guake and an app launcher, along with Touchegg to modify the trackpad gestures a bit, and I’m at full productivity.

      On the other hand, I’m constantly worried that someday those couple keybinds and Touchegg and whatnot will stop aligning with the views of the GNOME developers and then I’ll be stuck with a system that doesn’t work for me. This already happened once before - Flameshot does not provide a good experience on GNOME Wayland since they decided unilaterally that the GNOME screenshot tool ought to be enough for everyone. I switched to the GNOME tool and thankfully it was not a big adjustment this time, but if they break something like Guake I’m not entirely sure what my next steps will be.

      Ironically, I tried to use KDE and got turned away by its lack of customization in certain areas - Yakuake does not let you change the font size with a keybind because Konsole decided that no other terminal needs access to Ctrl +/-. And unlike GNOME with Touchegg, KDE seems to have no support for changing trackpad gestures at all, so I can’t change the defaults to something I prefer. So it’s not even like GNOME is alone in this respect - I guess they just apply the same philosophy everywhere, rather than just to the more “niche” features.

    17. 4

      Really looking forward to this, consistently the only things I use SCSS for have been nesting and variables.

    18. 2

      This is only tangentially related, but during an interview for an internship I asked the prospective employer what a typical day would look like at their company. The interviewer was quite surprised; they said that (paraphrase) “nobody has ever asked us that before”. I’m not sure to this day whether that was positive surprise but I think it was. They gave me a generous offer despite my lack of practical experience or higher ed. qualifications.

      So, that’s probably a good question to ask; but also, asking questions itself, especially if they’re well thought-through, is good for the employer’s image of you, as well as finding out more about the employer!

      1. 2

        This is so interesting to hear, because as someone who currently applies for internships, “what does a typical day as an X at Y look like?” is talked about as one of the standard reverse-questions to ask during an interview.

    19. 1

      Reading - a modification of FreeBSD that continuously checkpoints application state into persistent storage and allows running programs to persist across reboots, be transferred to other machines, and more.

      I have a meeting with one of the authors tomorrow, hoping to get an undergrad research position in their lab. Fingers crossed!