Threads for talideon

    1. 21

      I agree with the author, and would summarize it like this.

      XML is a markup language like HTML, and YAML is a data exchange format like JSON.

      Markup languages are good for (surprise) marking up text. You start with text, and then layer structure and annotations on top of it with markup syntax. The markup syntax is verbose because it gives primacy to plain text, which is the default and requires no ceremony apart from escaping a few characters.

      Markup languages are bad for configuration; data exchange languages are good for configuration. It just so happens that YAML is a bad data exchange format. It has many human-friendly do-what-I-mean features but they backfire when you actually mean something else (e.g. NO for Norway, not false).
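
      To make that concrete, here's a tiny sketch of the "Norway problem" using PyYAML (which follows YAML 1.1's boolean rules); quoting the value is the usual workaround:

      import yaml  # PyYAML: pip install pyyaml

      print(yaml.safe_load("country: NO"))    # {'country': False} - NO is read as a boolean
      print(yaml.safe_load('country: "NO"'))  # {'country': 'NO'}  - quoting keeps it a string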

      1. 15

        How dare you RTFA and summarize it better than I made it. I am deeply offended by this breach of commenting etiquette. 😉

        1. 1

          A little off topic, but: I think your site sets the main text content’s foreground color (something dark), but neglects to set its background color, letting my browser’s personal fallback settings for unspecified styles take over.

          For unstyled sites, I have my fallback colors set to light-on-dark, so this half-enforced styling comes out as dark-on-dark.

          https://cdn.imgchest.com/files/wye3cp2g5w4.png

          1. 2

            I’m just a guest blogger, but I’ll pass it along.

      2. 5

        Markup languages are bad for configuration; data exchange languages are good for configuration

        Mostly agree, but I would go further and say

        • XML/HTML are for documents, JSON is for records / “objects”, and CSV / TSV are for tables.

        However, JSON is a pretty good data exchange language, but it's not good for configuration because the syntax is too fiddly (comments, quoting, commas).

        YAML is definitely not good for data exchange, and it has big flaws as a config language, but there’s no doubt that many people use it successfully as a config language.

        So config languages != interchange formats in my mind. Interchange formats are mostly for two programs to communicate (although being plain text helps humans too, so it’s a bit fuzzy.)

        The space of config languages is very large, AND it blends into programming languages. Whereas JSON is clearly not a programming language (though it was derived from one)

        https://github.com/oilshell/oil/wiki/Survey-of-Config-Languages
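
        (To make the "fiddly" point concrete: strict JSON rejects both comments and trailing commas, which is exactly what bites people hand-editing config files. A quick sketch with Python's stdlib parser:)

        import json

        for snippet in ['{"debug": true, /* why */ }', '{"debug": true,}']:
            try:
                json.loads(snippet)
            except json.JSONDecodeError as e:
                print(f"rejected {snippet!r}: {e}")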

        1. 3

          Yeah that makes sense, config language deserves its own category. I guess I’d say I prefer JSON over XML if those are your only two options (and I find it works well enough in VS Code for example, though they allow comments and trailing commas I think).

      3. 3

        I think YAML is not suitable for data exchange, because of its complexity and shaky security record. Better to stick to JSON if you need text, or CBOR or protobufs etc. if you prefer binary. Good data exchange languages are bad config languages.

        YAML is barely tolerable as a data input or configuration language, but there are better options such as json5. TOML is ugly and confusing but still better than YAML.
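
        For what it's worth, TOML at least parses with nothing but the standard library these days (tomllib, Python 3.11+), comments and all; a minimal sketch:

        import tomllib  # in the standard library since Python 3.11

        config = tomllib.loads("""
        # Unlike JSON, TOML allows comments.
        [server]
        host = "127.0.0.1"
        port = 8080
        """)
        print(config["server"]["port"])  # 8080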

        1. 3

          I wish more things would adopt UCL for configuration. Like YAML, it is a representation of the JSON object model but it also has a number of features that make it more useful as a configuration language:

          • Macros.
          • Include files.
          • Explicit merging rules for includes (replace objects, add properties to objects).
          • Cryptographic signing of includes, so you can use semi-trusted transports for them.
          • Syntactic sugar for units
          1.  

            While I like UCL, it can turn into its own kind of hell. rspamd, for instance, is a fantastic piece of software, but the complex mass of includes and macros can make the configuration hard to reason about. Mind you, this isn’t UCL’s fault, just something it enables.

            1.  

              The macros can be a bit exciting but I like the fact that rspamd doesn’t have any defaults in the program, they’re all visible in the config file directory that’s included with the lowest priority.

    2. 1

      Does Windows come with something like Shortcuts? AFAIK Microsoft haven't shipped any programming-like tools with their OSes since DOS, but perhaps I've missed something?

      1. 2

        Unless something has changed recently, Windows Script Host has been a thing for decades, supporting both VBScript and JScript (MS's JavaScript dialect).

        I mean, it’s not well advertised, but it’s there.

    3. 3

      (This is partially TL;DR)

      Sure it’s not, but it’s as close to one as you can get. Unless you have a reason to store something in a local timezone, UTC is the right choice, but no choice is perfect.

      The complicating issue is that timezones are typically stored as offsets, not as timezone identifiers. This is why you need to be careful with the time: if you're storing a time in UTC, pay attention to the absolute time when converting back to the local timezone. The good thing is that this is still better than not using UTC, because UTC is a stable reference and the timezone database assumes it is one.

      So, store in UTC, but be aware you need to also use the timezone database properly. If you can’t do that, store the datetime twice.

      I need to double-check this, but I'm pretty sure the timezone rules are additive. They wouldn't make sense otherwise. There's a complication when it comes to future datetimes, but storing the revision of the timezone database won't save you: that's an example of when you need to store things twice.

      Also, not all countries do things the way you think when it comes to Summer Time. Sometimes, the nonstandard time is in Winter.

      1. 7

        My rule of thumb is, if you are recording a time in the past, then either use UTC or use the local time and the UTC offset. Don’t use UTC if you need to record the time as it was displayed on the wall.

        If you are storing a time in the future, then store the local time and the primary location. In many cases the tz name will do as a proxy for the location, but that will fail when tz boundaries change. You might also need to store some secondary locations so it is possible to detect when a plan might be disrupted by tz changes. You might need to store an earlier/later flag to disambiguate timestamps that occur when the clocks go back; alternatively use the Japanese style of times like 26:30 for the small hours.
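
        A rough sketch of that rule of thumb in Python, using the standard zoneinfo module (the dates and zones here are just examples):

        from datetime import datetime, timezone
        from zoneinfo import ZoneInfo  # Python 3.9+, backed by the installed tzdb

        # Past event: the instant is what matters, so UTC (or local time + offset) is stable.
        past = datetime(2020, 3, 1, 14, 30, tzinfo=timezone.utc)

        # Future event: people care about the wall-clock time at a place, so keep the local
        # time plus the zone (ideally the location) and convert only when you need the instant.
        meeting = datetime(2030, 6, 1, 9, 0, tzinfo=ZoneInfo("Europe/Dublin"))
        print(meeting.astimezone(timezone.utc))  # recomputed under whatever tzdb rules apply then

        # The earlier/later flag for when the clocks go back maps onto datetime's fold attribute.
        ambiguous = datetime(2021, 10, 31, 1, 30, tzinfo=ZoneInfo("Europe/Dublin"))
        print(ambiguous.utcoffset(), ambiguous.replace(fold=1).utcoffset())  # 1:00:00 0:00:00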

        1. 3

          locations

          Wouldn’t it be more robust to just store the time zone name and the tzdb version in use when the timestamp was made? Then, if a time zone is removed you can look up the offset and ask the user with a list of close timezones suggested.

          I don’t know any libraries that support recovering timezones from coordinates, it sounds more likely to need third party databases. Isn’t there value in just sticking with the tzdb?

          1. 4

            I don’t know any libraries that support recovering timezones from coordinates

            I wrote an application that gets IANA timezone from location names, so roughly the same problem. It’s hard because there is no real database for this, so I use Wikidata; and had to fill a fair amount of timezones in Wikidata in the process.

            Sometimes you need to dig into historical data to find where former district boundaries used to be. You have borderline bad-quality data, e.g. all of France is marked as having a timezone, but overseas departments and territories have their own, so that timezone is really only for metropolitan France; yet some places are not specifically marked as such, only as part of France. Sometimes (especially in the US and Canada) a city uses the timezone of the state/province across the border. Sometimes it's a county. Etc.

            Here is the part that gets the timezone from an OSM id, if you are interested: https://github.com/progval/Limnoria/blob/5357f50bed9a830994faf663416c4b05b21f00b0/plugins/Geography/wikidata.py

          2. 1

            Yes, you need to record the tzdata version to detect problematic changes.

            As I said, although the tz name will often do instead of the actual location (and standards like iCalendar require it), that will fail when tz boundaries change. Boundary changes are likely to happen when DST is abolished in Europe.

      2. 1

        Sometimes, the nonstandard time is in Winter

        Southern Hemisphere says hi!

        1. 6

          Officially, Irish Standard Time is summer time, and they have a negative DST offset in the winter. When tzdata was changed so that Europe/Dublin more accurately reflected Irish law, it was the first zone in the tz database with a negative DST offset. This caused a huge number of problems! As far as I know the change was not supposed to have any user-visible effect.

          1. 2

            Wow, TIL. Thanks for this info.

    4. 3

      Another reminder that normalisation doesn’t mean you can forego validation.

    5. 5

      I really like the way that Smalltalk handles integers, which (I believe) is adopted from Lisp. Integers are stored with a tag in the low bit (or bits on 64-bit platforms). If they overflow, they are promoted to big integer objects on the heap. If you do arithmetic that overflows the fast path in Smalltalk, then your program gets slower. This doesn’t mean that it can’t lead to security vulnerabilities, but they’re a less serious kind (an attacker can force you to allocate a huge amount of memory, they can’t access arrays out of bounds). It makes me sad that JavaScript, coming over twenty years after the techniques that made this fast were invented, just used doubles.

      This is not possible in C because C has to work in situations where heap allocation is impossible. If addition is allowed to allocate memory to create big integer objects, then writing a+b in a signal handler may deadlock if the thread received the signal in the middle of malloc. If you're using C in a situation where memory allocation is always fine, you're probably using the wrong language.

      1. 1

        This is how Python also works.

        There was a proposal to add it to Go, but it never went anywhere because it would break existing code. https://github.com/golang/go/issues/19623
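
        As a quick sketch of what that looks like from the CPython side: arithmetic never overflows, values just quietly switch to a larger heap representation as they grow.

        import sys

        a = 2 ** 62          # still fits in a handful of internal digits
        b = a * a            # quietly promoted to a wider arbitrary-precision representation
        print(b.bit_length())                       # 125; no overflow, no wraparound
        print(sys.getsizeof(a) < sys.getsizeof(b))  # True: the bigger value uses more heap space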

        1. 2

          I’m pretty sure all Python objects are really pointers with their own object headers. “small” integers (-5 to 256, inclusive) are statically allocated and most ways of getting these values will share them. Most, but not all:

          >>> 7 is 6+1
          True
          >>> 7 is int('7')
          True
          >>> 7 is int.from_bytes(b'\x07', 'big')
          False
          

          but in any case they’re still real objects which reside at real locations in memory. Python does use long arithmetic instead of its base-2**30 big integers when it can, but it doesn’t have any other tricks to avoid allocation, as far as I know.

          1. 2

            Accurate, but the parent comment is mainly about the underlying implementation rather than the fact that Python does object interning for the sake of efficiency. The interning Python (and some other languages) does has unintended consequences though: I recall spending a whole hour convincing a developer I used to work with that == and is are not interchangeable. I literally had to show them the C source to show exactly where the interning of small integers was happening before they'd believe me.

            1. 1

              Tricky: I thought using a big enough number would make this obvious:

              >>> (1<<10) is (1<<10)
              

              but this one comes out True…

              Ok, so I’ll try even bigger numbers, but I’ll factor out N first:

              >>> N=10; (1<<N) is (1<<N)
              False
              

              wat (:

              Apparently it’s constant folding, and then sharing the constant:

              >>> dis.dis(lambda: (1<<10) is (1<<10))
                1           0 LOAD_CONST               1 (1024)
                            2 LOAD_CONST               1 (1024)
                            4 IS_OP                    0
                            6 RETURN_VALUE
              
              1. 1

                Yeah, I went through all that, and that’s what was so frustrating about it. 1 << 10 gets interned because it evaluates to a constant, as you noted. Constants are interned in addition to small numbers, hence why I ended up having to point all this out in the Python source code before I was believed.

      2. 1

        It makes me sad that JavaScript, coming over twenty years after the techniques that made this fast were invented, just used doubles.

        Wasn’t JS famously developed during a weekend or something … snark aside, it is a bit sad that so many people just accept the limitations of C instead of looking at other solutions.

        1. 2

          From my vague recollections, it was originally meant to be Scheme with Java syntax, so not knowing how Lisp did numbers was a bit surprising.

        2. 1

          A basic Scheme interpreter is really easy to implement over a couple of days, especially when you’ve a full runtime to lean on for a bunch of the awkward bits. Converting the original LiveScript over to a C-like syntax probably took longer than the original interpreter.

    6. 10

      I particularly appreciate this bit at the end:

      It’s a common mistake to confuse programming with typing. If someone’s just sitting there staring into space, it doesn’t look like they’re doing anything useful. If they’re rattling furiously on a keyboard, though, we assume they’re achieving something. In fact, real programming often happens before the typing, and sometimes instead of it.

      I often describe Go as a language which expects you to arrive at your editor with a well-defined plan already in mind. It’s not a language that encourages or even really supports “exploratory” programming, like you might do in more REPL-oriented languages like Clojure, Ruby, or Python.

      1. 1

        Some may disagree with that description. Wasn’t one of the design goals to make the compiler fast to support quick iteration and experimentation?

        1. 3

          It was a design goal to make the compiler fast, absolutely. But I don’t think the motivation for that goal was to support experimentation, I think it was primarily motivated by pain from the very slow compile times for large e.g. C++ and Java projects (at the time).

    7. 4

      Is this actually specific to Go? I feel like it could be about nearly any language from the 1980s.

      1. 10

        You could write a regex to make this apply to anything with garbage collection and not-too-many features. The author also carelessly (and offensively) misapplies the concept of, and the word, "Tao". Water's flowing may be its wu wei, its lack of doing, its being: the water is flowing downhill, rather than downhill-ness being some innate property of the water. It doesn't have "a Tao". Tao isn't just "work[ing] with the grain", it's understanding the natural order that resulted in the grain so you can take the most effortless next action. I guess it's just advertising copy so it's not that important, but if I were a Taoist I'd be pretty miffed.

        1. 3

          Don't be miffed. Maybe you're already a Taoist dreaming that you're a WilhelmVonWeiner.

      2. 2

        But then you get only 1% of the clicks…

      3. 2

        Given Go is essentially a reskin of Algol 68…

        1. 2

          What isn’t, these days?

    8. 1

      The Advent of Computing podcast recently had an episode covering the LGP-30, which had a bit-serial architecture. It’s the machine made famous by The Story of Mel.

    9. 17

      Not to be insensitive to anyone, but who is this?

      1. 13

        Thank you for saying this, because I had the same question.

        1. 7

          If I had to post just one thing, I think this essay explains it best:

          https://hackingcapitalism.io/why/

      2. 12

        If you’re reading someone’s (essentially) obit and don’t know who they are, you can safely just keep going without this kind of inquiry. Suffice to say those who posted and are discussing know, and the question is (intentionally or not) insensitive.

        1. 15

          I’d ask you to look at it differently: if asked politely, a genuine inquiry is a good way to honor somebody.

          We all die two deaths: when we cease living, and when we are no longer remembered.

          If a stranger asks in passing about the subject of a public mourning, I believe it is a chance to postpone that second death just a little bit longer.

          1. 1

            Well said. The two deaths give me something macabre to think about today 😳

      3. 2

        Here are some of the various projects/repos she had. There are some useful projects, some whimsical projects, and other neat stuff in there.

      4. 3

        Someone in the tech community. Super easy to Google.

        1. 14

          I googled and I still don’t understand the notoriety.

          1. 24

            She mostly rose to prominence in the golang and k8s communities. I will not do a full eulogy; this is not the place, nor am I well placed for that.

            But despite whatever disagreements we may have had with her, Nòva was a genuinely nice person to have in our community, pushing forward on interesting fronts, saying things that needed to be said, and in general someone a lot of us appreciated having in our communities.

            1. 3

              Thanks for the note!

        2. 19

          No, they are not. Now, Googling her just results in a lot of people expressing sadness over her passing. I do not want to intrude on anyone’s grief here, but I’d never heard of her before and I’ve not been able to find any solid info about her life or work, just as @4ad below says.

          @colindean above said she did the best talk of FOSDEM 2023. I was at that event, but I’d never heard of her, nobody mentioned this talk, and Colin does not link it so I don’t know for sure what talk it was.

          I’ve googled that too – this is way more work than I should have to do, frankly – and I think it might be this one, but the blurb is fairly content-free and tells me nothing useful.

          But the talk is also mentioned here and that gives a little context.

          Apparently she ran Hachyderm. I hadn’t heard of that, either. Apparently it is a Mastodon instance – I see little to distinguish these, TBH, and while I’m on Mastodon/Fediverse, I find it little to no use. But David Calvert said:

          mostly known for hosting the tech industry at hachyderm.io since Musk bought the bird site.

          That’s more info than anywhere else in this thread.

          Hachyderm is apparently

          a safe space, LGBTQIA+ and BLM, primarily comprised of tech industry professionals world wide powered by Mastodon and ActivityPub.

          1. 25

            She mightn’t have been well-known amongst journos, but she was well known amongst developers and ops/infrastructure engineers.

            Searching for her on Google and DDG, she’s famous enough to warrant a sidebar on Google, and her website is above the fold on both. DDG gives better results, and also includes a link to her ORA author’s profile above the fold. She’s definitely very Googlable.

            Hachyderm was a side-gig. The blurb also wasn’t content-free: the talk was about how she managed to scale out Hachyderm, which started as a personal Mastodon instance, in the face of a massive influx of users.

            The Hachyderm talk she gave at FOSDEM earlier this year was on the main track, and the room was wedged, which is unusual for a non-keynote talk in Janson. She also gave a talk about the Aurae low-level container runtime on the Rust track.

            We should all remember that our lack of awareness of somebody does not mean that they’re not well known.

            1. 3

              I guess Cunningham’s Law was right after all.

    10. 3

      Is it my imagination or are the Python people simply stuck in the realm of single-processor computing? Go has been able to do multi-core for basically its whole existence. Same thing with Rust, Nim, C, and C#. I just don't see why Python is such a popular language. Perhaps it's just good marketing.

      1. 36

        The thing to remember is Python’s age and heritage. It’s older than Java, and comes from the Unix tradition where if you wanted to do more than one thing at a time you forked a separate process for it.

        So that’s what Python did, and then Java happened. And Java did threading (because it was originally designed to run in environments that couldn’t support true process-based multitasking), and Java marketing hype pushed threading as the one true way to do things, and suddenly threading was a thing people were demanding.

        So Python added threading. But, at that stage of its history, one of Python's big draws was the ease with which it could wrap around existing popular C libraries. And basically none of those wrappers were thread-safe, so Python had to make a tough decision.

        The decision was to preserve as much backwards compatibility as possible, via the GIL. Which, at the time, seemed a reasonable tradeoff: the GIL primarily hurts you if your workload is pure Python and CPU-bound, and at that time most of what people wanted threading for was IO-bound workloads like network daemons and services. Plus, most people didn’t have true multi-processor/multi-core hardware, and even if they did their operating system might not be able to use it (anyone remember how long it took to land Linux SMP, for example?).

        Now, decades later, we all have multi-core/multi-processor computers in our pockets, and so it looks like a less reasonable tradeoff, but that’s solely because of the benefit of hindsight.

        Also, you mention Go, Rust, Nim and C# as examples of languages which somehow magically had the foresight to get it right… you might want to check when those languages first appeared, and compare to when Python did (and when Python had to make its threading choices).
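
        (And, for what it's worth, the old Unix-flavoured escape hatch still works: CPU-bound work can be spread across processes, each with its own interpreter and its own GIL. A minimal sketch:)

        from multiprocessing import Pool

        def square(n):
            return n * n

        if __name__ == "__main__":
            # Each worker is a separate process, so the GIL doesn't serialise the work.
            with Pool(processes=4) as pool:
                print(pool.map(square, range(10)))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]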

      2. 15

        You realise that Python is much older than Go, Rust, and Nim, right?

        The GIL isn't an issue inherent to Python, but to the CPython C API. It exists to make it easier for C extensions that are not guaranteed to be threadsafe out of the box to be treated as threadsafe.

        Go partly solved this issue by creating a largely isolated ecosystem. When you do need to reach to the outside world, the inevitable use of CGo is painful. Same for C#: .NET is its own isolated ecosystem. Rust has a complicated type system to cope with this, and the unsafe escape hatch where you really need to know what you’re doing.

        Python's "marketing" is that it's a pleasant and approachable language. The real shame is that Python didn't start out with a C API more like that of Lua or Tcl, but Lua is a younger language, and only got its current API in 4.0, when it was much less popular and breaking compatibility was thus less of an issue. I can't say much about Tcl historically, except to note the current state of its C API.

        The gap in your knowledge is a lack of historical perspective. They're not "stuck in the realm of single-processor computing", it's simply that nobody particularly wants to break the C API, and a lot of effort has been put into avoiding doing so in the past, but things are finally starting to give.

      3. 11

        What bar needs to be cleared for a programming language to deserve existing? Code written in Python has been used to earn Nobel Prizes in the sciences.

        1. 1

          So has code written in FORTRAN77, but that doesn’t mean that I’d want to inflict the experience of writing F77 code on someone.

        2. 1

          Code written in Python has been used to earn Nobel Prizes in the sciences.

          That’s a pretty bad argument. Code written in Python has also been used to commit atrocities.

          1. 1

            I’d like to know more about these atrocities.

      4. 8

        Many people have great responses, but I just wanted to mention another aspect. When did you first own a computer that could execute two threads simultaneously?

        I think I first had a hyper-threading processor before I had multiple cores. That would have been the Pentium 4, released in 2002. That would put it about a third of the way into Python's lifetime up until now.

        Another thing that I think could be important is that threads were cheap on Windows and processes were expensive. On Unix, a thread was not that much less expensive than a process. I remember the feeling of "threads are for Windows where they can't handle many processes and can't spawn them quickly".

        POSIX threads were also only standardized in 1995. Deciding on a cross-platform implementation of threading, when Windows didn't support POSIX threads natively, would have been tough to do. Especially for a feature that wasn't super useful for performance.

        1. 1

          Another thing that I think could be important is that threads were cheap on Windows and processes were expensive. On Unix, a thread was not that much less expensive than a process. I remember the feeling of "threads are for Windows where they can't handle many processes and can't spawn them quickly".

          IIRC, they’re still expensive and require something like 1MB of memory per process at minimum.

          1. 3

            It’s much less than that. They typically require something like 2-8 MiB of address space for the stack, but that stack is lazily committed so you’re not consuming real memory unless you actually use all of the stack space. Beyond that, you typically have a few KiBs of kernel state for the thread (including register save area, which can be quite large with AVX or SVE), userspace thread structures, and userspace TLS. These rarely add up to more than a few pages.

      5. 3

        Perhaps it’s a nice pairing of syntax and semantics, on top of a mostly-sane standard library, that’s been around since 1991.

    11. 27

      While I don’t disagree with the general message here, this sentence stuck in my throat:

      “The conceptual model of HTTP is also simple, in that you can basically view it as an RPC system.”

      Ugh. That’s exactly what HTTP is not, conceptually or otherwise.

      1. 8

        I don't want to say that HTTP is a good RPC system or that it has all of the features of a proper one, but at a high level I think what a lot of people want out of their RPC is 'make fixed call, get results and/or status'. HTTP requests with JSON payloads work pretty well for this and they don't require, say, a connection setup phase the way an SMTP-like or IMAP-like protocol might have. At a low level HTTP can have keepalives and connection reuse and so on, but I think that tends to be either hidden away in the HTTP library or optional.

        (I’m the author of the original, linked-to entry.)
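
        Something like this is the shape I mean, with one fixed call out and a status code plus a JSON result back (the endpoint and payload here are made up, purely to illustrate):

        import json
        from urllib import request

        # Hypothetical endpoint: the point is just "fixed call in, status + result out".
        req = request.Request(
            "https://api.example.com/rpc/get_user",
            data=json.dumps({"id": 42}).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with request.urlopen(req) as resp:
            print(resp.status, json.load(resp))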

      2. 6

        In what way is HTTP — or, at least, HTTP POST — not RPC?

        REST certainly isn’t RPC, sure, but REST is an abstraction layer above HTTP.

        1. 38

          The words are all messed up at this point, but RPC traditionally meant mechanisms like “procedure naming” and “marshal arguments and return values”. It’s designed to make remote procedure calls as convenient as local ones with respect to particular programming languages.

          So I’d say HTTP isn’t RPC because it’s not designed for, or convenient for, that purpose.

          You can make it do such things, but it takes extra work, and you also lose things like HTTP proxy support (middleboxes).

          If HTTP were RPC, then you wouldn’t have additional protocols like JSON-RPC tunneled inside HTTP. JSON-RPC specifies how your arguments and return values are marshaled.
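
          For instance, a JSON-RPC 2.0 request body carries its own calling convention (method name, params, call id); HTTP, or any other transport, just carries the bytes:

          import json

          # The JSON-RPC 2.0 envelope marshals the procedure name and arguments itself.
          request_body = json.dumps({
              "jsonrpc": "2.0",
              "method": "subtract",
              "params": [42, 23],
              "id": 1,
          })
          print(request_body)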

          If HTTP were RPC, you wouldn’t have to say “HTTP POST”. Middleboxes understand HTTP verbs; they often don’t understand your custom protocol inside HTTP. The network isn’t just about the end points.

          Likewise, TCP/IP isn’t a protocol for hypertext, although you can use it that way. You have another layer with more semantics.

          1. 7

            Ah, okay, I’m definitely using a weaker definition of RPC than what you’re describing here.

          2. 2

            I get the reluctance but I still think it’s somewhat semantic from the server’s perspective:

            • HTTP verbs are cache controls, which is not unusual for RPCs (otherwise this would be done in request metadata/headers/etc.) – middleboxes are generally aligned with this.
            • HTTP endpoints (or “resources”) are encoded procedure names

            To me, HTTP is a somewhat specialized kind of RPC (with specified caching and method encoding). We can of course build other kinds of RPCs on top of any RPC too.

            We can draw the line elsewhere, but at the very least HTTP is very close to being an RPC. :)

          3. 2

            I suppose the idea of "HTTP is RPC" takes a minimalist definition of both terms – RPC as in "it invokes procedures remotely", and HTTP as in "a well-supported presentation layer".

        2. 2

          Agreed. HTTP is definitely an RPC system. It even has verbs!

          1. 4

            I mean Whois is definitely an RPC system: it has a verb! Sure, it’s implicit, but still!

            1. 1

              I think you need to allow arbitrary verbs to be RPC :P

        3. 1

          In that HTTP is all about manipulating a large set of resources (nouns) with a very small set of fixed commands (verbs), and there are no remote procedures at a conceptual level at all.

          1. 2

            Right. As I mentioned in a sibling comment, I think I’m using a weaker definition of RPC than what you’re describing here. For me — and I think for most developers in this day and age — RPC is anything which is synchronous and follows request-response semantics. By this definition, all HTTP operations qualify. I understand this is not the original meaning.

            1. 2

              Fair enough. Thanks for the qualification and clarification 👍

    12. 8

      Generics do take away probably 30% of my complaints with go. No sum types is still a big pain for me, they’re just how I naturally model things. I’ve now used Go at work for 2 jobs in a row, and I legitimately don’t like it (in a “meh” way), but I am also completely productive in it. So I’ve really taken the stance that, minus some outlandish edge cases, language isn’t all that important.

      Like, writing a for loop vs using map feels really annoying at first, but the difference ends up being pretty superficial in the long run. I do not buy at all that switching to Haskell would drastically change my daily life as a programmer. And I literally love functional programming.

      One other thing I’d really like to see in Go is explicit interface conformance, i.e. type MyThing struct implements MyInterface. I know that defeats the whole purpose of implicit interfaces, which has other benefits (though they are lost on me), but at least make it optional. I always want to just look at something and see what it implements.

      1. 9

        One other thing I’d really like to see in Go is explicit interface conformance, i.e. type MyThing struct implements MyInterface.

        There’s a hack to do it: either before or after the methods, add

        var _ Iface = (*MyType)(nil)
        

        If you write that first, the compiler will actually give you the steps, though just one at a time (e.g. missing method X, method X should have signature so and so, missing method Y, method Y should have signature so and so)

        The “var _ Iface” should make it reasonably easy to find as well.

      2. 4

        The lack of sum types is probably Go's original sin. It left the language with nil, a clumsy way of approximating enumerations, and an error type that's gradually mutated into effectively a kind of checked exception that you couldn't even check until 1.16, and now it's as heavyweight as escape continuations anyway, and less convenient.

        1. 1

          Always having a zero value interacts poorly with sum types. What’s the zero value of int | string?

          1. 2

            Ha! Fun question. You can’t let the zero value be nil, or nil-ish – that would defeat the purpose of the sum type in the first place. And I suspect it would be infeasible to prevent the construction of zero-value sum types altogether – such a constraint would require an enormous number of language changes. I guess the only remaining option is for the zero value of a sum type to be the zero value of the first type in its type list. So the zero value of int | string would be int(0). That’s not particularly satisfying!

            1. 1

              Yeah, it’s hard to retrofit onto Go as it exists now. I think you can sort of squint and imagine a version of Go where it has undefined and you need to be able to statically show that something is not undefined before you’re allowed to read from it. So var x union{string, int}; f(x) would be illegal at compile time because x is undefined. But it starts to look more like “not Go” the more you think it through.

              1. 2

                Go pretty much already has the answer, in the interface extension to type sets for generic bounds: a sum type in Go would likely be relaxing the language to allow interface { int | string } to be used as a regular type (currently it’s only allowed for trait bounds), and adding support for type refinement & exhaustiveness to type switches.

                This means the zero value would be the nil interface.

                Unless Go specifically removed nil interfaces & default values from such a type, but I don’t see that happening given how far it would cascade through the language. Not unless the team decided to completely overhaul default values in Go 2 anyway. Which I’d hope but would not hold my breath for.

          2. 1

            There are a whole bunch of different types where a zero value makes little sense, and we don't even have to be talking about sum types. The fact that they interact poorly by default isn't even necessarily a bad thing, and if a language designer deemed it useful, they could just allow a constructor to be nominated as the default one.

      3. 2

        Generics solve your last problem because interfaces are type constraints so you can use a function with an empty body to assert to the compiler which interfaces you intend a type to implement.

    13. 3

      Golang will soon be as bloated as any other language that makes bold claims from the start and then spends its later years backsliding to try and draw in its critics.

      1. 16

        I certainly have that concern too. However, from what I’ve seen, the Go team continues to be judicious and cautious in its choice of what to include. The only language change to speak of has been the addition of generics in 1.18. All other changes are in the standard library and the tooling, which as far as I can tell just keep getting better. It doesn’t have the same design-by-crowd/committee feeling as Python has had in recent years.

      2. 12

        This is an odd criticism of Go in general and this coroutine proposal specifically, I think. It’s proposing a standard library package which happens to lean on runtime support for performance reasons only. Go has an HTTP server in its standard library. Being a batteries-included language seems to be out of favour at present, but this would seem to be an example of Go (quite correctly, imho) swimming against the tide and including batteries anyway.

        I’m not sure what they’ve backslid on. People bring up generics a lot, but prior to the successful generics design, the Go FAQ had said:

        Generics may well be added at some point. We don’t feel an urgency for them, although we understand some programmers do.

        for what, a decade?

        Full disclosure, I don’t think Go is a good language. But I feel like they’ve been remarkably consistent on scope and feature creep.

        1. 3

          Yeah, it’s kind of silly for Go’s critics to claim that Go is a bloated language when the only real piece of bloat that has been added has been generics and its critics positively screamed for that for over a decade.

          1. 6

            The critics who are concerned that Go is becoming bloated, and the critics who screamed for generics, are different people.

            (A lot of social phenomena make a lot more sense when you consider that what looks like one group from the outside is actually several groups, each with different thoughts and motivations.)

      3. 5

        I remember when Go’s critics were arguing that its simplicity claims were easy to make as it was a new language, but “give it 10 years and it will be as bloated as Java/Python/C++/etc”, well it has been 14 years since Go debuted and it remains pretty bloat-free (generics is the only real bit of bloat added to the language, and that was positively demanded by Go’s critics). It’s nice to see that people are still making this same argument 14 years later. :)

      4. 3

        This is hardly bloat. It’s quite close to Lua’s asymmetric coroutines and Python’s (synchronous) yield statement, which are both solid designs with minimal overhead.
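
        For reference, the Python shape being compared to (a generator suspends at each yield and the caller resumes it, which is the asymmetric-coroutine pattern):

        def fib():
            # Execution pauses at each yield and resumes when the caller asks for the next value.
            a, b = 0, 1
            while True:
                yield a
                a, b = b, a + b

        gen = fib()
        print([next(gen) for _ in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]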

      5. 3

        This is a new package in the standard library, not a language feature.

      6. 1

        Contrariwise, it seems they are doing an admirable (if quite slow) job of growing their language. I look forward to the addition of macros, inheritance, gradual typing, and unrestricted compile-time computation.

        1. 2

          I don't know — Go goes against almost everything in that absolutely great talk, a core tenet of which is that one should be able to write a library that seamlessly extends the language. Go has real trouble here, with an over-reliance on built-ins.

          It's no accident that Guy Steele worked on Java and later on Scheme (among others). Scheme is a more niche language, and while the above core tenet definitely applies well to that language, I would highlight Java over Go in this gradual-growth regard, given its longer life and extensive use; if anything, that's its core strategy: last-mover's advantage. The team is really good at evaluating which language features are deemed "successful" after they've been tried out by more experimental PLs.

          1. 7

            A core tenet of which is that one should be able to write a library that can seamlessly extend a language. Go has real trouble around it with an over reliance on built-ins.

            The coroutine library described in the article is a library.

          2. 2

            It’s no accident that Guy Steele worked on Java and later on Scheme (among others).

            I think you have this backwards. He worked on scheme in the 70s and Java was created in the 90s. Not exactly sure when Steele got involved in Java though. I know his big contribution was generics, but I imagine you have to do some work before you get invited to do something so major.

            1. 7

              His contribution was the Java Language Specification, among other things. He was one of the original team of 4 (James Gosling, Guy Steele, Bill Joy, and maybe Arthur van Hoff, the namesake of Java’s AWT package a la “Arthur Wants To”). He’s still at Sun/Oracle Labs (his office was right next to mine).

              He’s also known for Lambda The Ultimate (LTU), a series of papers that he published in the 70s, I think. (There’s a website by that name, inspired by those papers, and focused on PL topics.) And a bunch of other things he’s been involved with over the years at and around MIT. Including square dancing.

              1. 4

                Until recently, when there were some significant breakthroughs, his name was still on the paper defining how to convert a float to a string precisely (minimal-length, round-trip-safe conversion).

                To be fair, this technique was probably known beforehand, and the co-author, Dybvig iirc, probably did more work than him. But still.

                Oh, this algorithm is everywhere. Like nearly every libc.

              2. 1

                He was one of the original team of 4 (James Gosling, Guy Steele, Bill Joy, and maybe Arthur van Hoff, the namesake of Java’s AWT package a la “Arthur Wants To”).

                Ah! That’s interesting! Wikipedia only lists Gosling as the designer. I double checked the wiki page for Java before posting, but I guess it doesn’t tell the whole story. Thank you for the clarification!

            2. 2

              Oh, didn’t realize Scheme is that old of a language! He has made quite some contributions, having been also on the ECMAScript committee.

              1. 2

                Yeah, he also was one of the original authors of emacs. Pretty crazy career when you think about it!

          3. 1

            I guess the sarcasm didn’t quite come through…

            That said, I’m actually somewhat curious—not really knowing either language particularly well—in what respects java is more expressive and orthogonal than go. Both added generics after their initial release, are garbage-collected (though java is much better at it), have some semblance of first-class functions, have some semblance of generic interfaces, and have some cordoned-off low-level primitives. Both are sufficiently expressive and capable that most of their infrastructure is written in itself (contrast with, say, python); hotspot happens to be written in c++, but this seems more a consequence of historical factors, and istr cliff click said that if he were writing hotspot today, he would write it in java. Java has inheritance and exceptions, where go does not; are there other major differences?

            1. 3

              Go has goroutines which Java is soon to get. Go has value types and value/pointer distinction, which Java is maybe getting at some point

              The biggest difference is that Java (in a typical implementation) is a fairly dynamic language, while Go is a fairly static one. Dynamically loading classes into your process is the Java way. Java is open-world, Go is closed-world.

              I feel the last property is actually the defining distinction, as language per se doesn’t matter that much. I do expect Go & Java to converge to more-or-less the same language with two different surface syntaxes.

              1. 2

                Is Java actually going to get this soon? I feel like I’ve been hearing it’s just around the corner for like 5-10 years.

                1. 1

                  I don’t follow too closely, but my understanding is that green threads (the current reincarnation of) are fairly recent (work started 2017), and are almost there (you can already use them, they are implemented, but not yet stabilized).

                  Work on value types I think started in 2014, and it seems to me that there’s no end in sight.

                  1. 2

                    The green threads work started a lot earlier than 2017. Not sure when it got staffed in earnest, but the R&D had been going on for a while before I left in 2015.

                2. 1

                  I think it goes in this year. I’m not a java guy, but I think this is the final JEP.

                  https://openjdk.org/jeps/444

              2. 1

                Interesting. Where can I read more about the Java equivalent of goroutines?

                1. 3

                  It’s called the Loom project. It is already available in preview for a few Java versions, so you can also play with it. The cool thing about it is that it uses the same APIs as the existing Thread library, so in many cases you can just change a single line to make use of it.

        2. 1

          I can’t tell if the sarcasm here is “you think Go is becoming bloated very quickly” or “Go is never going to add these features (and that’s a bad thing)”…

    14. 1

      One thing that I think a lot of people forget about is directly booting a kernel with your bolted-on executable. They tend to be very small and very effective. They're just a little bit hard to make.

      1. 2

        Historical note: this is basically what all DOS games did, IIRC.

        1. 1

          Is this referring to that DOS/4GW thing that games seemed to use? Or something else entirely?

          1. 3

            Kind of the opposite. That was what was called a "DOS extender", which basically was a combination of an extended init that gets the CPU into 32-bit protected mode (which could coordinate with other popular software that might interfere with doing so the easy way), a small runtime to call into v86 mode so that you could still use DOS for file access and call the VGA BIOS, and some library code to talk to it. DOS/4G was a standalone one that was pretty expensive, so wasn't very common, but DOS/4GW was the same product bundled in with Watcom C++ and was everywhere.

      2. 1

        I’m failing to understand what this has to do with the article? The submission is about how Firecracker is designed for the kind of workload that might work for, but is a very poor fit for the dev environment use case.

        1. 1

          I think the main thought that I had was around micro VMs. It might have been a bit tangential in the sense that it's not a build agent, but I am betting that a micro VM would be another valid use case for a very fast build server.

      3. 1

        You mean unikernels?

    15. 2

      Good! I’ve loathed the .egg format for so very long. One day, hopefully not so far in the future, setuptools and everything associated with it will, with any luck, shrivel down to something much smaller and much more sane. Preferably, it’ll just go away completely.

    16. 8

      was once rejected from a job specifically because I mentioned Erlang and the founder said he thought I was more of a computer scientist than an engineer

      That’s interesting, because there’s a fair amount of Erlang used in industry — it was created for telephone switching systems, not as an academic exercise. CouchDB is mostly written in it. Is Kafka in Erlang or am I misremembering?

      As to your main point, I’m not a Lisper, and to me the quotes you gave tend to reflect my feelings: stuff that once made Lisp special is widely available in other languages, the cons cell is a pretty crude data structure with terrible performance, and while macros are nice if not overused, they’re not worth the tradeoff of making the language syntax so primitive. But I don’t speak from a position of any great experience, having only toyed with Lisp.

      1. 9

        Lisp has very little to do with cons cells.

        1. 2

          Can you elaborate? Aren’t lists the primary data structure, in addition to the representation of code? And much of the Lisp code I’ve seen makes use of the ability to efficiently replace or reuse the tail portion of a list. That seems to practically mandate the use of linked lists — you can implement lists as vectors but that would make those clever recursive algorithms do an insane amount of copying, right?

          1. 8

            Aren’t lists the primary data structure, in addition to the representation of code? And much of the Lisp code I’ve seen makes use of the ability to efficiently replace or reuse the tail portion of a list

            No. Most lisp code uses structures and arrays where appropriate, same as any other language. I’m not sure what lisp code you’ve been looking at, so I can’t attest to that. The primordial LISP had no other data structures, it is true, but that has very little to do with what we would recognise as lisp today.

            1. 6

              I think it stems mostly from how Lisp is taught (if it’s taught at all). I recall back in college when taking a class on Lisp it was all about the lists; no other data structure was mentioned at all.

          2. 6

            That’s a popular misconception, but in reality Common Lisp, Scheme, and Clojure have arrays/vectors, hashtables, structures, objects/classes, and a whole type system.

            I don’t know what Lisp code you’ve looked at, but in real projects, like StumpWM or the Nyxt browser or practically any other project, lists typically don’t play a big role.

            Unfortunately, every half-assed toy language using s-expressions gets called “a Lisp”, so there’s a lot of misinformation out there.

          3. 3

            Clojure and Fennel and possibly some other things don't use linked lists as the primary data structure. Both use some kind of array, afaik (I've never properly learned Clojure, alas). How this actually works under the hood in terms of homoiconic representation I am not qualified to describe, but in practice you do code generation stuff via macros anyway, which work basically the same as always.

            As I said above, this is a divisive issue for some people, but I'd still call them both Lisps.

        2. 1

          Depends a bit on the person’s perspective. I’ve seen some people get absolutely vitriolic at Clojure and Fennel for ditching linked lists as the primary structure. I personally agree with you, but apparently it makes enough of a difference for some people that it’s a hill worth dying on.

      2. 6

        You might be thinking of RabbitMQ. Kafka is on the JVM.

      3. 3

        I don't think Kafka is, but CouchDB certainly is and, famously, WhatsApp. It's still not so common but not unheard of, especially now in the age of Kubernetes, although Elixir seems reasonably popular. Either way, I don't think most people know much about its history; they just sort of bucketize it as a functional language and apply whatever biases they have about those.

        I never actually wrote much Erlang – I only even mentioned it in that interview because the founder mentioned belonging to some Erlang group on his LinkedIn. It turned out to have been something from his first startup, which failed in a bad way, and I think he overcorrected with regard to his attitude toward FP. He was a jerk in any case

        edit: looks like you might be thinking of RabbitMQ? https://en.wikipedia.org/wiki/RabbitMQ

      4. 3

        Is Kafka in Erlang or am I misremembering?

        Kafka is a JVM project. It’s written in Java and Scala.

      5. 1

        It’s quite possible you’re thinking of Riak, which was implemented in Erlang, though the two are very different beasts.

    17. 7

      I would like to have this back.

      1. 7

        Very soon. WebAuthn & Passkeys

        1. 5

          I think you completely misunderstand my comment.

          I want this back, because WebAuthn is far too complex and adds the problem that the website and the backend have to implement the authentication. With KEYGEN, everything about the keys is handled by the browser. The authentication check can then be done by the httpd.

          Yes, I know there are some issues with the UI and other issues on the implementation side. But none of this is conceptual, and it can be improved.

          To your other comment about storing the credential on a separate device: what stops a browser from doing the same thing with keys generated by KEYGEN?

          1. 4

            Nothing, in fact browsers support that (smartcards)

        2. 3

          All I've seen from the WebAuthn world has made it seem like an excellent way to lock yourself into either Google's or Apple's ecosystem as the two companies demand to control all your online accounts. Where does the person who uses Android on their phone, macOS on their laptop and Linux on their desktop fit into this brave new world?

          1. 3

            You can store credentials on a device that speaks USB or Bluetooth or NFC. No need to store the material on your computing device.

            1. 4

              If only! At $WORK, we use WebAuthn for SSO, and we use YubiKeys as the second factor. We explicitly set “usb” as the only allowed transport because we require people to use their YubiKeys to generate their WebAuthn token. However, neither Chrome nor Safari respect this and will instead try to get the user to register a passkey instead, which naturally won’t work. And the token registration UIs in both are actively hostile to using any methods other than passkeys. Firefox is at least better in this regard, but possibly only because its WebAuthn support is less extensive.

            2. 3

              Right, but that's not what any of the big players are making, even if it's technically possible.

              And I’m not going to be bringing around a dedicated Bluetooth or USB key device. And I doubt my iPhone would support it even if I did.

              The whole “let’s get rid of passwords” WebAuthN thing seems like a huge lock-in opportunity for the huge companies and nothing more IMO.

              1. 2

                I get your scepticism, but imho it doesn’t sound too justified to me.

                WebAuthn: You can already buy a Yubikey that does NFC and works for iPhone.

                PassKeys I don’t have experience with, but I know there are open implementations that will help avoid lock-in.

                1. 2

                  Alright but I’m not going to be using Yubikeys. So how do I sync my passkeys between my phone and desktop, so that I can log in to any account without the involvement of the other device?

          2. 3

            Nothing in WebAuthn adds a dependency on anything other than your computer. On Windows, the private keys are stored in the TPM, on macOS they’re stored in the Secure Element, and on Android devices they’re stored in whatever the platform provides (a TrustZone enclave in the worst case, a separate hardware root of trust in the best case). All of these are defaults and, as far as I’m aware, all support using an external U2F device as well. At no point does Apple or Google have access to any my WebAuthn private keys. On other platforms, it’s up to the platform how it stores the keys, but I believe the TPM is pretty well supported on Linux.

            1. 1

              Aaaaand by what mechanism are the keys synced between my iPhone and my Linux desktop?

              I need to be able to create an account on my Linux desktop (which doesn’t have a TPM, by the way) and then log in to that account with my iPhone (without the involvement of the desktop at the time of login). I also need to be able to create an account on my iPhone and then log in to that account with my desktop (without the involvement on my phone at the time of login). This is no problem using passwords and password managers. My understanding is that it’s impossible with WebAuthn.

              1. 2

                Aaaaand by what mechanism are the keys synced between my iPhone and my Linux desktop?

                They aren’t. By design, there is no mechanism to remove the keys from secure storage. If there were, an OS compromise could exfiltrate all of your keys instantly. You create a key on one device and use that to authorise the next device. Alternatively, you use a U2F device and move it between the two machines (I believe iOS support U2F devices over NFC, your Linux machine definitely supports them over USB).

                my Linux desktop (which doesn’t have a TPM, by the way)

                Are you sure? Most vaguely recent motherboards have one (at least an FTPM in the CPU). Without one, there’s no good way of protecting LUKS keys, so that might be something to look for in your next upgrade.

                1. 2

                  You seem very interested in pushing this U2F device thing. I don’t know how many times I need to say I’m uninterested.

                  And if I can’t create an account on one device and then log in on another device without the first device involved, this is not something for me. What do I do if I make an account on my phone, happen to not log in to it on anything other than my phone, but then my phone breaks and I need to log in on my desktop? Is that just … not supported anymore?

                  And why should I think Apple’s implementation will even allow me to authorize my Linux machine? Is that something which falls naturally out of the standard or has Apple publicly committed to it or are you just hoping they’ll be nice?

                  … TPM …

                  Are you sure? Most vaguely recent motherboards have one

                  I just know my (rarely used) Windows install doesn’t let me upgrade to 11 due to missing TPM. Also, older hardware is a thing.

                  I also don’t have a need for LUKS.

                  1. 2

                    You seem very interested in pushing this U2F device thing. I don’t know how many times I need to say I’m uninterested.

                    You need to store keys somewhere. You have three choices:

                    • In software, where anything that can compromise that software layer can exfiltrate them. Check the number of CVEs in the Linux kernel that would allow an attacker to do this before you think it’s a good idea (not particularly singling out Linux here; any kernel that is millions of lines of C is going to be compromised).
                    • In some hardware tied to the device (TPM, Secure Element, whatever). This is convenient for the device and gives you some security in that an OS compromise lets an attacker launch an online attack but not exfiltrate keys (these things often do some rate limiting too). The downside is that it’s tied to the device.
                    • In some external hardware that you can move between devices. The standard for these to interface with computers is called U2F.

                    And if I can’t create an account on one device and then log in on another device without the first device involved, this is not something for me. What do I do if I make an account on my phone, happen to not log in to it on anything other than my phone, but then my phone breaks and I need to log in on my desktop? Is that just … not supported anymore?

                    That’s what WebAuthn recovery codes are for. Store them somewhere safe and offline.

                    And why should I think Apple’s implementation will even allow me to authorize my Linux machine?

                    I have no idea what this even means. Apple, Google, Microsoft, and Mozilla implement the client portion of WebAuthn. They have no control over which other devices any WebAuthn provider lets you use, just as a recommendation to use a strong password in Safari has no impact if you reset the password in Chrome or Edge.

                    You seem to think WebAuthn is something completely different from what it actually is. I can’t really help unless you explain what you think it is, so that I can understand how you get to the claims you’re making.

                    I just know my (rarely used) Windows install doesn’t let me upgrade to 11 due to missing TPM. Also, older hardware is a thing.

                    I believe Windows 11 requires a TPM 2.0 implementation. TPM 1.x is fine for these uses and is 14 years old at this point.

                    I also don’t have a need for LUKS.

                    You place a lot of faith in your physical security.

                    1. 2

                      I have tried to read up on WebAuthn, actually, and have never found out how they intend transfer of identities between devices to work. It leads me to believe that you’re either supposed to have one device (the phone) be the device which authenticates (similar to how 2FA systems work today), or to sync keys using some mechanism that’s not standardised. But it sounds like you believe there’s another mechanism; can you explain, or link to some documentation on, how that’s supposed to work?

                      1. 2

                        Nothing stops you from having multiple keys with a single account, IIRC. You could have one device initially authorize you on another system and then make a new key for the other device.

                      2. 2

                        You haven’t read that because it is out of scope for WebAuthn. WebAuthn provides a mechanism for permitting a remote device to attest to its user’s identity. It is up to the implementer of the server-side part to provide a mechanism (beyond recovery codes) to add a second device. The normal way of doing this is to use one device to enrol another. For example, you try to log in on your computer, it shows a 2-3 digit number, then you log in on your phone and approve, now both devices are authorised.
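
                        To make the shape of that flow concrete, here’s a minimal sketch in Python of the server-side bookkeeping involved. Everything in it is hypothetical (the names, the in-memory store, the two-digit code); none of it comes from the WebAuthn spec, it’s just the sort of application logic a service layers on top of it.

                        import secrets

                        # Hypothetical in-memory store of devices waiting to be enrolled.
                        # A real service would use a database and expire entries quickly.
                        pending_enrolments = {}

                        def start_enrolment(new_device_session):
                            # Called when a not-yet-authorised device asks to be added to an account.
                            code = f"{secrets.randbelow(100):02d}"  # the short number shown on the new device
                            pending_enrolments[code] = new_device_session
                            return code

                        def approve_enrolment(code, authenticated_user):
                            # Called from an already-enrolled device after a successful WebAuthn login.
                            session = pending_enrolments.pop(code, None)
                            if session is None:
                                raise KeyError("unknown or expired code")
                            # From here the service would run an ordinary WebAuthn registration
                            # ceremony for the new device, adding a second credential to the account.
                            return authenticated_user, session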

                        If your objection to WebAuthn is that the higher-level flows that people build on top of it have problems then you should direct your criticisms there.

                  2. 1

                    Windows 11 requires TPM 2.0, so it’s possible to have a TPM without W11 supporting it.

      2. 2

        Honestly, I’m not sure it would be that usable by modern standards. I don’t think anything other than RSA was widely supported, and even then it was limited to 2048-bit key sizes, etc. It would need a lot of modernisation. I wonder if the Web Crypto API can provide any suitable alternatives? I’m not sure if it has facilities for local key storage.

        1. 2

          The limit to RSA and 2048-bit keys is just an implementation limit. Of course this should be improved. The charming part of this is that the website doesn’t have to interact with the key. Yes, I know there are some issues with TLS client auth, but with auth being optional this can be improved.

    18. 3

      This is a real pity. Hopefully the idea will come back in an adjusted form as it solves a real problem.

    19. 14

      Unlike Python, in Go, errors are just values

      Exceptions are values in Python. I’m not sure where the notion that they’re not could come from. This is an issue of flow control. Go has one kind of escape continuation, which is triggered by the return statement, whereas Python has two: the one triggered by return and the one triggered by raise. However, both of these handle values.
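
      For what it’s worth, this is easy to demonstrate in plain Python; nothing in the following is specific to any library:

      # An exception is an ordinary object: you can construct it, store it,
      # pass it around, and inspect it without ever raising it.
      err = ValueError("bad input")
      print(isinstance(err, Exception))  # True
      print(err.args)                    # ('bad input',)

      # raise and return are just two different ways of letting a value escape.
      def parse(s):
          if not s.isdigit():
              raise ValueError(f"not a number: {s!r}")
          return int(s)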

      1. 4

        I think what’s traditionally intended by “errors are values in Go” is related to the way they are handled, not produced. In languages where error escaping is done with exceptions, in this case Python, they are usually handled by type, not by value.

        try:
            raise NameError('HiThere') # raise by value
        except NameError: # handle by type
            print('An exception flew by!') 
            raise
        
        1. 3

          When you use the likes of fmt.Errorf() you’re minting new objects, just as you do when calling an exception type’s constructor. The difference is that with exceptions you have basic pattern matching on the type (because that’s what an exception handler does), allowing you to discriminate between them, which you can’t do with the likes of fmt.Errorf().

          1. 3

            OK. I’m not sure if you disagree with me, I just tried to explain how I understood the “errors are values” concept in Go.

      2. 3

        What about panic?

        1. 1

          A panic is more akin to a Unix signal.

          1. 9

            panic unwinds the stack until a recover — this is not the same as a signal.

      3. 3

        “Errors are just values” links to a blog post that explains what that means. https://go.dev/blog/errors-are-values

        1. 3

          I know. What I disagree with is the ‘Unlike Python’ bit.

          1. 6

            I think you’re looking too closely at the specifics, rather than the gist of it: errors are handled using the same language features as other values, unlike exceptions, which are handled using dedicated language constructs. Python doesn’t do return Exception even if it could.
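
            To illustrate the point, here’s a small sketch of what that would look like; it’s perfectly legal Python, just not what anyone idiomatically writes:

            def parse_port(s):
                # Go-style: hand the error back as an ordinary return value.
                if not s.isdigit():
                    return None, ValueError(f"invalid port: {s!r}")
                return int(s), None

            port, err = parse_port("80a")
            if err is not None:  # handled with a plain if, no except in sight
                print(f"failed: {err}")

            It works, but it goes against the grain of the language, which is exactly the distinction the slogan is pointing at.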

            1. 1

              This is not accurate. Go tuples are an error-specific language feature in the same way as Python’s except. You can’t use tuples as a general construct.

              1. 2

                Go doesn’t have tuples, but it does have multiple return values. Often the last return value is an error, but that’s not a language feature or requirement or anything, it’s just a convention. So I think the OP is accurate.

              2. 2

                There’s a strong convention to use it like that, but Go lets you use it with any type you want. You could use it to return e.g. (x,y) coordinates if you wanted.

                Go dropped the ball on having special-case multiple-value return instead of just single-value return with tuples and destructuring, but having “simple” non-generalizable features is sort of their thing.

                But even when used with errors, if err != nil is definitely spiritually closer to if (ret != -1) than try/catch.

              3. 2

                Go doesn’t have first class tuples, but its return tuples are not specific to error handling. You can return two ints; or an int, a bool, and a float; or whatever else.

                1. 1

                  Sure, by its nature this is true, because it’s a tuple. But with non-error types you have multiple ways to return them, whereas errors are always returned using the tuple special-case. Tuples exist to return errors.

                  1. 1

                    I’m not sure what you’re saying. As a convention, people return errors as the last type in a tuple, but it’s just a convention. You can return them through global values (like C errno) or out value pointers instead if you wanted to. I have a couple of helper functions that take error pointers and add context to the thing they point at. And people return other things in tuples, like bools or pairs of ints. It’s a historical question whether multiple return was intended for errors, but I do know that before Go 1.0, the error type was a regular type in the os package and it was only promoted to a built-in when Roger Peppe (who doesn’t work at Google AFAIK) proposed doing so.

                    1. 1

                      Out of curiosity, I dug up the original public introduction from 2009. To my surprise, Pike made it entirely through that presentation without ever mentioning error handling, as far as I can find.

                      So, I hit the wayback machine. The very first introduction of function tuples on the website uses error returns as the use-case. This slide deck from Pike’s Go course also introduces multiple returns with error handling.

                      I don’t think it’s fair to say this is just a convention, it is how the designer of the language chose to introduce the feature to people for the first time.

                      The distinction worth making here is not that “errors are values”; that is uninteresting, and it’s true in Python. The distinction is non-local returns vs multivariate functions.

                      1. 2

                        You’re describing multiple return values as “function tuples”. I don’t think this is really accurate, as those return values are always discrete. Go doesn’t really have a concept of a tuple.

                        The thing that “errors are values” tries to communicate isn’t any detail about the specific implementation of the error type, but rather that errors are not fundamentally different than other types like ints or structs or whatever, and that error handling can and should use the same language constructs as normal programming.

          2. 3

            In Python, exceptions are values, but they’re not just values; they’re values that interact with the exception-handling mechanism, which does things to and with them (unlike return, which doesn’t care what kind of value you give it, and doesn’t modify that value).
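
            You can see that interaction directly: raising does things to the object (it picks up a traceback, and a __context__ if raised inside another handler), whereas returning it leaves it untouched:

            err = RuntimeError("boom")
            print(err.__traceback__)  # None: nothing attached yet

            try:
                raise err  # now the exception machinery gets involved
            except RuntimeError as caught:
                print(caught is err)                 # True: it's the same value...
                print(caught.__traceback__ is None)  # False: ...but raise attached a traceback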

    20. 6

      I thought this was an interesting compare-and-contrast of Python package management with Node and C#. Being a relative newbie to Python packaging, I’m never 100% sure if something is confusing because I haven’t invested the time to learn it yet, or if it’s confusing because it’s actually more complex than it needs to be. The author echoes some frustrations I’ve personally felt but couldn’t quite articulate myself.

      The biggest surprise to me was that the author argued against virtualenvs! I’ve always assumed that venvs were a necessary evil, and that the way to address their rough edges was to use a tool that manages them for you (like pipenv or Poetry). This blog article is the first place I’ve heard of PDM or PEP 582 — my inclination is to go with the crowd and stick with Poetry (or any more-popular tool that displaces it), but I wish luck to everyone involved in replacing the venv model with something simpler.

      1. 3

        They currently are a necessary evil, because something like PEP 582 requires a bunch of buy-in that virtual environments don’t. They were a solution to a problem that didn’t require any action from anyone but the user.
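
        For anyone who hasn’t read it, the core of PEP 582 is tiny: instead of creating and activating a virtual environment, the interpreter itself would look for a __pypackages__ directory next to your code. Roughly speaking (this is a simplification of the idea, not how CPython actually behaves):

        import sys
        from pathlib import Path

        # The gist of PEP 582: if ./__pypackages__/X.Y/lib exists, prefer it over
        # the global site-packages. Nothing to create, nothing to activate.
        pkg_dir = Path("__pypackages__") / f"{sys.version_info.major}.{sys.version_info.minor}" / "lib"
        if pkg_dir.is_dir():
            sys.path.insert(0, str(pkg_dir))

        The catch is the buy-in: it only works once interpreters and installers all agree to treat that directory specially.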

        1. 3

          They were a solution to a problem that didn’t require any action from anyone but the user.

          That’s a very good description of what went wrong with Python packaging.

          1. 4

            I don’t disagree! Mind you, virtualenv was a revelation back in the day. It’s just a pity the core team have a history of what is at best benign neglect when it comes to packaging, which led to the setuptools debacle, and how it was left to rot for years after PJE became a motivational speaker. A lot of the problems with the Python ecosystem can be traced to that abandonment.