1. 6

    I don’t know PHP but in many languages, I would refactor this to a table lookup, something like:

       genders = [ 'male', 'female' ];
       return genders[input] || 'unknown'
    

    Table lookups are often overlooked when considering how to implement something, but they tend to be much faster, cleaner and more succinct than the alternatives.

    The out-of-range/error handling would vary greatly by language. Rust has .unwrap_or(); C would need an explicit if condition, at which point this won’t look especially elegant. In many applications, the out-of-range condition would be an assertion or error rather than an “unknown” value. I appreciate that my comment is not really considering anything beyond the initial code snippet in the original article and is not applicable to the later parts.
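
    For concreteness, here’s the same idea as a rough Go sketch (untested; Go simply because it’s what I’d reach for, and determineGender is the function name from the article):

       package main

       import "fmt"

       // determineGender maps the numeric code onto a label and falls back to
       // "unknown" for anything out of range, instead of branching per value.
       func determineGender(input int) string {
           genders := []string{"male", "female"}
           if input < 0 || input >= len(genders) {
               return "unknown"
           }
           return genders[input]
       }

       func main() {
           fmt.Println(determineGender(0)) // male
           fmt.Println(determineGender(7)) // unknown
       }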

    1. 1

      For very small tables (IIRC <=8 or so, last time I tested it) these kinds of lookups usually aren’t really faster than looping over an array/list, and 2 comparisons are faster still. I assume this is due to the overhead of hashing the keys.

      Array lookups like your example would perform a bit better, but I wouldn’t be so sure if they’re faster than a simple ===.

      Not that it really matters in >99.9% of the cases.

      1. 2

        Not faster: true. But clearer.

        1. 3

          Yeah, I agree. I would write it like the above as well actually, unless there’s a compelling reason not to. One of the nice things is that you get an iterable array (or map) of all values for free, which is often useful in various cases (like validation for example, or listing all options).

    1. 4

      Good story, shame about the spelling mistake in the title…

      1. 1

        Agreed. At first I thought it might be a British spelling (although I’d never seen it that way anywhere), but confirmed it’s just a misspelling. It was interesting to read about his “origin” story with computing, although I feel like it confirms something that I read recently about that generation of programmers trying to recreate their early experience with the way they promote modern computing to new developers. Teaching computer science at the high school level though, it’s clear that hacking away on a Sinclair is not how the next generation of developers is going to experience technology.

        1. 2

          Could be from Kerry. Who knows how you even spell any of that? 🙃 Anyway, I’ve stuck to British spelling ever since moving there, and I’ve noticed I accidentally used “ou” instead of just “u” in a few cases as well.

          I started on the similar MSX (also a Z80 CPU) in the mid/late-90s; a testament to how fun and useful these machines can be even when antiquated, especially for learning. When we (finally) got a Windows 98 machine I stopped programming for a few years as it was so hard to get started back then, compared to the BASIC environment that the MSX shipped with. I only picked it up again after I discovered Linux and BSD.

      1. 9

        Place the if/else cases in a factory object that creates a polymorphic object for each variant. Create the factory in ‘main’ and pass it into your app. That will ensure that the if/else chain occurs only once.

        […]

        My fear is that the if/else/switch chain that the author was asking about is replicated in many more places within the code. Some of those if/else/switch statements might switch on the integer, and others might switch on the string. It’s not inconceivable that you’d find an if/else/switch that used an integer in one case and a string in the next!

        This seems like a weird argument to be honest; the entire point of that determineGender() function is to put the logic in a single place, so it doesn’t need to be repeated.

        Other higher level modules tend to depend on the modules that contain those if/else/switch statements. Those higher level modules, therefore, have transitive dependencies upon the lower level modules. This turns the if/else/switch statements into dependency magnets that reach across large swathes of the system source code, binding the system into a tight monolithic architecture without a flexible component structure.

        So? I don’t really see the problem with this, certainly not for this use case.
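
        For reference, the factory approach being quoted would look roughly like this (a hypothetical, untested Go sketch with invented names). Note that the mapping from raw input to variant still lives in exactly one place, just as it does in determineGender():

           package gender

           // Gender is the polymorphic variant; each implementation carries its own
           // behaviour so callers never switch on the raw code or string again.
           type Gender interface {
               Label() string
           }

           type male struct{}
           type female struct{}
           type unknown struct{}

           func (male) Label() string    { return "male" }
           func (female) Label() string  { return "female" }
           func (unknown) Label() string { return "unknown" }

           // Factory is the single place that still maps raw input onto a variant;
           // per the quoted advice it would be created in main and passed into the app.
           type Factory struct{}

           func (Factory) FromCode(code int) Gender {
               switch code {
               case 0:
                   return male{}
               case 1:
                   return female{}
               default:
                   return unknown{}
               }
           }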

        1. 8

          Dear God, they were talking about GNU Hurd even back then…

          1. 3

            for 30 years they’ve been talking about it as if it’s almost ready.

            1. 3

              I always used to joke by substituting “when hell freezes over” with “when Duke Nukem Forever is ported to GNU Hurd”. I was a little bit sad when Duke Nukem Forever was finally released as it killed my joke :-(

              1. 1

                Imagine if we got net-positive fusion power before GNU Hurd?

                1. 1

                  I don’t think this is quite fair, the GNU project have been mostly describing it as rather complete and usable since 2015. They describe it as an interesting development project, suitable for further development and technical curiosity, rather than necessarily a “production” OS, but the idea that HURD is perpetually nearly ready is fictional.

                  1. 6

                    Hurd has been making progress somewhat more slowly than the baseline requirements for a useful OS have been progressing. It passed the point of an early ‘90s *NIX kernel quite a long time ago (basic filesystem, able to run an X server, small number of drivers) and had quite a lot of nice features. The design of Hurd means that things like containers are supported automatically (anything that’s a global namespace in a traditional *NIX is just a handle that you get from the parent, so creating an isolated namespace is trivial). I still find it an interesting example of worse-is-better that the overwhelming majority of container deployments are on the one contemporary system that doesn’t have native support for containers.

                    1. 1

                      For me the biggest problem with contemporary Hurd is also the one I’m unqualified to fix: the drivers are all Linux 2.2-2.6 era. Given a more modern filesystem and newer drivers it’d be quite liveable.

                2. 2

                  I thought there was a nod to it in the announcement for Linux, but no. In very early email threads, Linus wrote things like “this might be a fun toy to play with, until Hurd is usable, in a year or two.”

                1. 2

                  which allows remote attackers to cause a denial of service (virtual memory allocation, or memory consumption if an overcommit setting is not used)

                  I’m not sure if these kinds of denial-of-service things should really be listed as security bugs in the first place. It has always seemed to me that these are just regular bugs. Often (though not always) critical ones that ought to be fixed as soon as feasible, but not really security bugs IMO. It’s really not in the same class as a remote code execution or the like.

                  1. 3

                    Unless you can deny service in a way that is unique to the app and might raise a security threat against someone else, e.g. ambulance dispatch

                    1. 3

                      Data Availability is a security concern like data confidentiality is.

                      1. 2

                        Typically the condition that makes it a security concern is the ability for some third party to, at will, deny service to other users. If some malicious prankster can effectively turn off your web server, that’s definitely a security concern.

                      1. 33

                        I still think that it’s a big failure if a project as well funded as GTA can’t be bothered to track down minutes of loading time to an sscanf call. That’s not some hiccup or such; that’s a major dent in this game’s experience (speaking of experience). So while the author surely showed us how easy it is to get wrong, I’d expect a company as big as this one to find something that takes not some seconds but minutes.

                        1. 17

                          Agreed; it seems absurd that you even have to point out that there’s a difference between “I’m hacking on this fun toy in my spare time and I made an oopsie that wasted an entire 1.8 seconds” vs “I work on a multimillion dollar game and can’t be bothered to investigate why it’s wasting four extra minutes every time it loads”

                          Edit: this is not a little embarrassing coding failure; it’s a tremendous organizational failure.

                          1. 8

                            I would agree that it is an organizational failure, but I can also imagine why it didn’t get priority. There are always dozens to hundreds of things on the backlog, so there is always something to do. Long loading time isn’t always as pressing as it should be.

                            1. 3

                              Long loading time isn’t always as pressing as it should be.

                              I understand that the backlog can be huge, but a long-loading issue facing all users (and the users being quite vocal about it) should definitely get bumped up in priorities, because it degrades the experience and trust of the people who will decide whether they buy your next title or not.

                              1. 3

                                because it degrades the experience and trust of the people who will decide whether they buy your next title or not

                                I can’t imagine how mad I’d be if I had actually spent money on this game and then found out they had this little regard for how my time was wasted.

                          2. 9

                            A few months ago I played Pathfinder: Kingmaker, and I noticed that as the game progressed the save/load times became longer and longer, to the point of ridiculousness. I checked the save directory and the save games were huge! Some basic inspection showed they were just zip files, and after unzipping one it turned out the game was saving a huge logfile of hundreds of megabytes, containing all the log entries since I started the game.

                            I deleted the logfile, rezipped, and my save/load times were fast again. I later discovered there’s actually a mod to do this automatically.

                            I get that these mistakes happen: that’s okay and we do all make them. It’s the not fixing it that annoys me, and the closed source, which means it’s very hard or impossible to fix it as a user. My Pathfinder: Kingmaker case was actually pretty simple, but I don’t really have the experience (or patience!) to muck around with disassemblers like in the GTA article.
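
                            For anyone curious, the fix really is just “copy the archive, minus the log entry”; roughly this, as an untested Go sketch (the file names here are made up, Kingmaker’s are different):

                               package main

                               import (
                                   "archive/zip"
                                   "io"
                                   "log"
                                   "os"
                               )

                               // stripLog rewrites a save archive without the oversized log entry.
                               func stripLog(src, dst, logName string) error {
                                   in, err := zip.OpenReader(src)
                                   if err != nil {
                                       return err
                                   }
                                   defer in.Close()

                                   out, err := os.Create(dst)
                                   if err != nil {
                                       return err
                                   }
                                   defer out.Close()

                                   w := zip.NewWriter(out)
                                   defer w.Close()

                                   for _, f := range in.File {
                                       if f.Name == logName {
                                           continue // skip the huge logfile
                                       }
                                       r, err := f.Open()
                                       if err != nil {
                                           return err
                                       }
                                       fw, err := w.Create(f.Name)
                                       if err == nil {
                                           _, err = io.Copy(fw, r)
                                       }
                                       r.Close()
                                       if err != nil {
                                           return err
                                       }
                                   }
                                   return nil
                               }

                               func main() {
                                   if err := stripLog("save.zks", "save-clean.zks", "history.log"); err != nil {
                                       log.Fatal(err)
                                   }
                               }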

                          1. 7

                            It seems very weird to even have a “CPU MHz” graph, as if the hertz mean anything at all - especially since it’s comparing AMD CPUs and Intel CPUs?

                            I have no idea what “Events per second” means. What kind of event? It doesn’t seem like that’s explained anywhere?

                            I don’t know what’s going on with the memory read and write measurements. Obviously the average and minimum speed for a single memory read or write operation is going to be 0ms? Isn’t milliseconds way too coarse grained for that kind of measurement? What is counted as a “memory read operation” or “memory write operation” anyways? Does it measure a read from cache or does the benchmark make sure to actually read from main memory? Wouldn’t memory throughput and memory latency (with separate measurements for read and write) make more sense than “memory operations per second” and “milliseconds per memory operation”?

                            Same with “File I/O”; isn’t latency and throughput more interesting than just ops per second? Is the “operations” the same as what’s measured when we measure IOPS or is it something else? What is the “minimum/maximum/average”? Is the “minimum time for a read operation” just measuring the time it takes to read a page from the page cache (aka just a benchmark of the memory system) or does it make sure the files aren’t in the page cache? And again, clearly milliseconds is way too coarse grained for these measurements given that they’re all at 0?

                            Am I missing something or do most of these benchmarks seem underexplained and not that well thought through? I like the concept, seeing a wide variety of benchmarks on the various VPSes could be interesting, but I don’t really feel like I can conclude anything from the numbers here. Maybe running the Phoronix benchmark suite on the different $5/month VPSes could provide some more useful results.

                            1. 5

                              I’ve flagged this as spam because it’s some vague hand waving to get you to click on the referrer links at the bottom of the page. It looks as if it’s really just there to get referrer kick-backs.

                              1. 3

                                Am I missing something or do most of these benchmarks seem underexplained and not that well thought through?

                                Aren’t you talking about most benchmarks that do the rounds? Benchmark blogs never seem to learn from the earlier criticism. They are often:

                                  • Simplistic or naïve in design; at most they test something the author was interested in.
                                  • Unreproducible even when they include scripts, because they are missing other key information.
                                  • Designed around supporting a specific narrative (e.g. slamming Python for having the GIL).
                                  • Purely designed to drive clicks for a company that’s not so close to any of the products or services being benchmarked that they’ll complain a lot.

                                A better test of VPS usage, especially when it’s a single node, might be to see how many requests per second you can get out of a WordPress instance on it. It’s far from perfect, but that’s a big reason they exist. Ideally, you’d add in some problematic code and see how well that performs. That was actually an idea that Ian Bicking had suggested at PyCon long ago for Python performance comparisons because that’s what is happening when most people need to investigate performance.

                                1. 3

                                  Am I missing something or do most of these benchmarks seem underexplained and not that well thought through? I like the concept, seeing a wide variety of benchmarks on the various VPSes could be interesting, but I don’t really feel like I can conclude anything from the numbers here.

                                  You’re not missing anything.

                                  Another factor is that VPSs can have pretty variable performance, which is why he used three instances and “averaged the results where applicable”. A provider giving consistent performance vs. a provider with large differences seems like an interesting data point. Also n=3 seems pretty low to me.

                                    And things like “Maximum (ms)” for “CPU” (maximum what? The time an “event” took?) could be a single outlier event, and the mean for these kinds of statistics isn’t necessarily all that useful. You really want a distribution of timings.

                                  I did find the scripts he uses on his GitHub though; basically it just runs sysbench.

                                    I agree something like this could be useful, but this is not it. Quite frankly, I’d say it’s borderline blogspam.

                                1. 4

                                  My understanding is that the POSIX shell spec isn’t nearly detailed enough to define a complete shell, so the promise of “if it runs on mrsh, it’s portable” seems dubious.

                                  1. 10

                                    IMO to be strictly POSIX compliant, you’d have to randomly choose between ambiguous interpretations so that you can’t rely on a specific behavior.

                                    1. 2

                                      This would be wonderfully chaotic! I can imagine the docs now:

                                      Feature foo does either thing x or, if you’re unlucky, another slightly different thing y.

                                      1. 4

                                        It is not like this hasn’t ever been done before.

                                        1. 1

                                          Wow that’s simultaneously hilarious and disturbing. I suppose it gets the point across…

                                    2. 6

                                      My understanding (I could very well be wrong!) is that POSIX sh is the “minimum bar” of a shell – it’s quite usable by itself (as long as you don’t need fancy features like, gosh, arrays), and every modern shell strives to be compatible with it. In my experience, anything that’s POSIX-sh-compliant will run properly on dash, bash, mksh, etc.

                                      1. 4

                                        I think that’s more of the result of these shells being tested against each other.

                                        1. 4

                                          In practice I find that the biggest problem is compatibility of shell utilities, rather than the shell itself. The POSIX shell utilities are really bare-bones (and some are outright missing, like stat) and it can be rather tricky to do things sticking to just the POSIX flags. It’s really easy for various extensions to sneak in even when you intend to write a fully compatible script (and arguably, often that’s okay too, depending on what you’re doing and who the intended audience is).

                                          I appreciate that POSIX is intended as a “minimum bar” kind of specification, but sometimes I wish it would be just a little less conservative.

                                        2. 2

                                          Strong https://xkcd.com/1312/ vibes

                                        1. 14

                                          The distinction sounds to me a bit like No True Scotsman of software design. If it works for me, it’s an abstraction. If it doesn’t, it’s magic.

                                          1. 7

                                            Yes, I suspect it would be more constructive to catalogue properties of abstractions that are unhelpful. One mentioned briefly in the text is the “where is that [code] anyway” problem; i.e., when I’m trying to see through an abstraction because it isn’t working for me, how do I find the code that implements it?

                                            1. 7

                                              There is some subjectivity involved, and I don’t think this article does a stellar job at explaining it, but I do think there’s something to it.

                                              The biggest question I ask is “how hard is it to debug if the abstraction doesn’t work or does something unexpected?”, which is kind of a different way of phrasing “How easy is it to understand?” Sometimes this is very easy and there is no problem; other times it can be very hard, and this is when you run into trouble.

                                              There are a few factors that influence this: assumptions that the abstraction makes, interactions with other parts of the system, or just being too clever for its own good.

                                              1. 4

                                                Abstractions always work in the service of some greater good, otherwise there’s no way to judge whether they’re appropriate or not. So here’s your test: measure how much time users spend trying to understand, use, debug your abstraction level as opposed to reaching their own goals.

                                                It’s a simple measurement, no Scotsmen required.

                                                1. 3

                                                  This seems to take the argument posited in bad faith.

                                                  Magic often involves twiddling some arcana that has little correspondence to the task at hand. Whereas an abstraction should let you peel it away when necessary, and signal to the reader that something more is going on.

                                                  Smart pointers in C++ fall firmly on the abstraction side because they always let you get a raw pointer out while doing what they purport to do. A more magical version of them would likely not allow raw pointer access after they’ve been constructed, forcing users into casts and state tweaking to access them.

                                                  It’s not the best example of magic, mind you. But magic typically requires multiple lines of comments, whereas abstractions flow better due to less conceptual context switching.

                                                1. 12

                                                  Related story from last month: LibreSSL languishes on Linux

                                                  I think this is a good move; Void is a fairly small distro, and the less work it creates for itself, so to speak, the better. And OpenSSL seems to have sufficiently cleaned up their act to make LibreSSL less advantageous than it used to be. Arguably, OpenSSL is even the better choice now.

                                                  1. 2

                                                    And OpenSSL seems to have sufficiently cleaned up their act to make LibreSSL less advantageous than it used to be.

                                                    I’ve heard this a few times but what specifically changed?

                                                    I haven’t been following too closely, but I just took a quick skim. It seems like the worst stuff is still there, as far as I can tell. For example:

                                                    • Dangerous features like custom allocators that make the life of sanitizers hard.
                                                    • Outdated cryptography that should never be enabled in a secure system: SSL3, MD2, the known-backdoored Dual EC DRBG random generator, etc.
                                                    • Support for CPUs that have never existed, like big-endian x86. Granted, not likely to be a source of production issues, but crazy nonetheless.

                                                    I remember seeing that they’ve cleaned up the worst of the coding style, but what dangerous bits have they gotten rid of, or fixed?

                                                    1. 1

                                                      Support for CPUs that have never existed, like big-endian x86

                                                      Stratus: big endian intensifies

                                                      Numeric values in VOS are always big endian, regardless of the endianness of the underlying hardware platform. On little endian servers with x86 processors, the compilers do a byte swap before reading or writing values to memory to transform the data to or from the native little endian format.

                                                  1. 12

                                                    I typically write user-visible dates as “26 Feb 2021”: this way it’s always clear for pretty much anyone who can speak English (and also many who don’t, since month names tend to be very similar across languages). You could argue that “2021-02-26” is better, but many people aren’t used to it so I don’t think it’s very user-friendly.

                                                     For databases and other technical (non-user-visible) uses there is no discussion: ISO 8601(-ish) dates formatted as “2021-02-25” are the only acceptable format.

                                                    This is also why I often use (thin) spaces for thousands separators by the way: “64 737” instead of “64,737”.
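
                                                     Doing that programmatically is just digit grouping; a rough, untested Go sketch (U+2009 THIN SPACE here; swap in U+202F NARROW NO-BREAK SPACE if it also needs to be non-breaking):

                                                        package main

                                                        import "fmt"

                                                        // thinSep groups the digits of a non-negative integer in threes
                                                        // using U+2009 (thin space), e.g. 64737 -> "64 737".
                                                        func thinSep(n int) string {
                                                            s := fmt.Sprintf("%d", n)
                                                            out := ""
                                                            for i := 0; i < len(s); i++ {
                                                                if i > 0 && (len(s)-i)%3 == 0 {
                                                                    out += "\u2009"
                                                                }
                                                                out += string(s[i])
                                                            }
                                                            return out
                                                        }

                                                        func main() {
                                                            fmt.Println(thinSep(64737)) // 64 737
                                                        }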

                                                    1. 17

                                                       I’m using ISO8601 whenever I have to sign something. Never had any complaints or problems.

                                                      1. 1

                                                        Same, but I wonder if that’s because I’m in the US where everyone expects the month to come before the day anyway. I’m moving the year to the beginning but otherwise it is a normal human-readable date. Does it go over as well in countries that use DD/MM dates normally?

                                                        1. 1

                                                          Always has for me! (Australia.) I think people here are more sensitive to different date formats, given we use the less popular of the two and everyone’s online.

                                                      2. 7

                                                        I do the same when communicating professionally. We use English mostly and I tend to refer to dates as “26 Feb 2021 12:00 CET (UTC+1)” because spending a couple of seconds writing usually saves hours of confusion when scheduling meetings etc.

                                                        How do you output thin spaces? It’s part of Swedish typographical standard to use for thousands separators but it’s a pain online because of bad support for thin spaces (that also have to be non-breaking).

                                                        1. 7

                                                          I have it set up in XCompose:

                                                          <Multi_key> <space> <space> : " " U202F # NARROW NO-BREAK SPACE
                                                          

                                                          So pressing right alt + space + space will insert U+202F.

                                                          1. 2

                                                            I have this:

                                                            <Multi_key> <space> <t> : " "   U2009    # Thin space
                                                            <Multi_key> <space> <m> : " "   U2003    # Wide (em) space
                                                            

                                                            Is U202F a thin space too?

                                                            1. 2

                                                              I think they render pretty much identical, although this depends on the font. The advantage of the non-breaking space is that it shouldn’t be used as a line-break: https://imgur.com/a/tbVnamc

                                                              Neither is “more correct”, in English at least, and using just a thin space is fine too, but I think the first looks a bit nicer.

                                                          2. 3

                                                            It’s part of Swedish typographical standard to use for thousands separators but it’s a pain online because of bad support for thin spaces (that also have to be non-breaking).

                                                            I thought the thousands separator was just a non-breaking space, not a thin non-breaking space.

                                                            1. 3

                                                              No, it’s a thin one. Source: Typografisk handbok.

                                                              1. 2

                                                                Huh – thanks for the tip. Just made my life a tiny bit harder… :-)

                                                                1. 1

                                                                  You only have to care when typesetting professionally. The web is a lost cause.

                                                                  1. 1

                                                                    It’s kind of been a lost cause ever since the printing press was invented 🙃 I don’t think simplification is necessarily a bad thing though.

                                                                    1. 1

                                                                      OK, so I tried to implement this on my blog, with an idea of adding a filter so that I could use commas as thousands separators in Markdown, then they would be replaced by non-breaking thin spaces during rendering.

                                                                      But while it works in the main matter, it does not work in headings for some reason.

                                                          3. 13

                                                            many people aren’t used to it

                                                            My approach is that they will have to deal with it.

                                                            1. 3

                                                              “26 Feb 2021” is often recommended or even required, to avoid confusion between dd/mm/yyyy and mm/dd/yyyy when dealing with cases across borders.

                                                              1. 2

                                                                For databases and other technical (non-user visible) there is no discussion that ISO 8601(-ish) dates formatted as “2021-02-25” is the only acceptable format.

                                                                Depending on application, unix timestamp may also be the way to go.

                                                                1. 11

                                                                  Unix timestamps are the worst of both worlds - not human readable and affected by leap seconds.

                                                                  Use TAI timestamps instead.

                                                                  1. 3

                                                                    Except usually I want leap seconds in there so my time isn’t wrong?

                                                                    1. 4

                                                                      Then you implement a lookup table with historical leapseconds, and use that to display the current UTC time.

                                                                      If you are ambitious, you can amend the lookup table as soon as a new leapsecond is announced.

                                                                        The point is that from the point of view of your code (or database), there’s only a monotonic increase of seconds. You effectively treat the difference between TAI and UTC as a sort of timezone.

                                                                      Of course, to be really useful, each clock in your system that emits events needs to use TAI (or some other time scale without leap seconds, like GPS time). Then it’s up to the user-facing interface to translate to civil time.
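
                                                                        A rough, untested Go sketch of what I mean (the table is abridged; a real one needs every leap second since 1972, and the convention that TAI seconds share the Unix epoch is just an assumption for illustration):

                                                                           package taitime

                                                                           import "time"

                                                                           // leapSeconds lists when each TAI-UTC offset took effect, expressed as
                                                                           // the Unix time (UTC) of that moment. Abridged to the last few entries.
                                                                           var leapSeconds = []struct {
                                                                               since  int64 // Unix time (UTC) when the offset became effective
                                                                               offset int64 // TAI - UTC, in seconds
                                                                           }{
                                                                               {1341100800, 35}, // 2012-07-01
                                                                               {1435708800, 36}, // 2015-07-01
                                                                               {1483228800, 37}, // 2017-01-01
                                                                           }

                                                                           // UTCFromTAI converts a TAI timestamp (seconds) to civil UTC time,
                                                                           // treating the TAI-UTC difference like a kind of timezone offset.
                                                                           func UTCFromTAI(tai int64) time.Time {
                                                                               offset := int64(10) // TAI-UTC at the start of 1972; only correct with the full table
                                                                               for _, ls := range leapSeconds {
                                                                                   if tai-ls.offset >= ls.since {
                                                                                       offset = ls.offset
                                                                                   }
                                                                               }
                                                                               return time.Unix(tai-offset, 0).UTC()
                                                                           }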

                                                                2. 1

                                                                  “thin spaces” has really made me stop and wonder…

                                                                  1. 4

                                                                    Unicode has spaces for many tastes - http://jkorpela.fi/chars/spaces.html

                                                                    1. 1

                                                                      I like the idea of “taste” in a field so normally strict. I should look for more of that sort of thing…

                                                                1. 4

                                                                  I’ve been trying to find a tactful way to say this for a few weeks now, but couldn’t find anything better than just saying it directly: I think you’re spamming your tree notation way too much.

                                                                  1. 3

                                                                    Could you define that more numerically? Maybe on Lobsters my % of posts about Tree Notation is higher than on HN or Reddit, but that’s because the feedback I get here is often so enlightening and helpful. Have developed some great relationships here stemming from my TN posts. Really it’s been amazing, and as someone who has chosen to go 100% public domain thus cutting off a lot of potential ways to fund my own team and get “paid” code reviewers, the “reviewers” I’ve got here have been invaluable.

                                                                    To me the quality of the discussions generated is an indication that this is not spam. I’ve been around so many things in the early days (Linux, Firefox, git, HN, reddit, bitcoin), and generally just find that in the early days things may seem “spammy” due to their half baked nature but there’s some core early idea being batted around that couldn’t be further from spam.

                                                                    1. 4

                                                                      A big difference between Lobsters and HN/Reddit is that Lobsters is a pretty small site, with a fairly small userbase, and doesn’t see that many stories posted every day. Even after not being on the site for a few days I can usually catch up on stories from the last few days from /newest. Try doing that on HN!

                                                                      So if you post something repeatedly, people will notice as quite a few people will see most submitted stories. This is rather different from HN, where it’s more of a “shouting in to the crowd”-kind of thing, and people who saw your first submission may not see your second, or vice versa.

                                                                      1. 1

                                                                        I’m always trying to raise the bar. I like how this post has -2. That gives me good feedback. But to be honest in my going on 2 decades on social media even my prediction about what people will find interesting is wrong 80% of the time. The stuff about Tree Notation that has generated the most buzz I would never have predicted.

                                                                        I feel like there’s an easy system in place already to determine whether something is providing value to the community or not: upvotes/downvotes.

                                                                  1. 73

                                                                    Apparently I need to remind people that:

                                                                    1. No, it doesn’t matter if a cryptography blog has cartoons on it.
                                                                    2. No, it’s off topic to rehash this meta discussion. Again. You can click the domain to find it on several earlier posts.
                                                                    3. No, it’s not appropriate to flame the author because their blog has cartoons. This has been the final straw in at least one ban so far.
                                                                    4. No, it’s not appropriate to flag the story for mod attention because it feels like you get to punish cartoons. Reread the previous point and consider whether you want mod attention. You can click ‘hide’ below the title if you want to not see the story or any comments to it.
                                                                    1. 4

                                                                      I feel this and I agree with you. However, I have flagged this as spam because this person keeps posting their own blog over and over. Maybe that’s not how that flag is intended to be used? I’m open to criticism about that. In any case I wanted to be clear about my motives for flagging this since it seems like there’s some other thing going on that I didn’t know about until reading this.

                                                                      1. 46

                                                                        A qualitatively good article every two weeks is hardly spam in my opinion.

                                                                        1. 38

                                                                          Spam is low effort, low quality, high volume. This is none of those things.

                                                                          1. 28

                                                                            It’s well written work, yes it’s posted regularly, but that doesn’t make it blog spam. The submissions tend to see engagement, so I think there’s some level of consensus there amongst the community.

                                                                            1. 3

                                                                               there’s a previous thread somewhere discussing if self-posting should be permitted or not; there are definitely arguments for both sides. some users routinely post their own site and it’s just really uninteresting stuff (including this post), but I don’t think that’s spam— it’s just content I’m not interested in personally.

                                                                              Also, there’s a user script for hiding posts from a domain/user/whatever. someone linked it to me in a previous thread but the story was nuked.

                                                                              1. 6

                                                                                Here’s the Greasemonkey script for blocking domains, stories, or users.

                                                                                1. 3

                                                                                  Yeah, this is more in line with what my opinion is. I’m not arguing whether or not it’s well written and it’s certainly on topic for this site. I’m not trying to come in here to argue with folks. I flag something as spam if someone keeps posting “self-promotion” things like this. That still seems like spam to me but I’m fine being told I’m wrong as it’s clear what the consensus is. I won’t reply to each person that replied but I appreciate them all the same.

                                                                                  1. 9

                                                                                    If it helps to add context, I do submit things other than what I’ve written, if I believe they belong on this site. :)

                                                                                    1. 4

                                                                                      Yes, I see 4 out of your 15 posts aren’t something you authored. One of which comes after this thread started. Either way, my opinions aren’t personal attacks or facts… they’re just opinions. I mean you no disrespect.

                                                                                      1. 7

                                                                                        If it’s on-topic then it’s on-topic, and does it really matter who submitted it? In general I prefer it when authors submit their own stuff, because if I have some comment or criticism I can write “I don’t agree with you there”, “I think you are mistaken”, “I think this is unclear”, etc. and can then have a conversation with the author themselves, instead of “I don’t agree with the author”.

                                                                                        1. 3

                                                                                          Obviously, I think it matters or I wouldn’t have flagged it. Seems we prefer different things.

                                                                                          1. 6

                                                                                            If you don’t personally like it, even though it’s on topic and there’s nothing wrong with it, use the hide feature.

                                                                            1. 5

                                                                              Yet the new fs API doesn’t support contexts at all, so you have to embed them in your fs struct.

                                                                              1. 8

                                                                                The io package in general doesn’t support contexts (io.Reader and io.Writer are examples of this), and they’re a bit of a pain to adapt. In particular, file IO on Linux is blocking, so there’s no true way to interrupt it with a context. Most of the adapters I’ve seen use ctx.Deadline() to get the deadline, but even that isn’t good enough because a context can be cancelled directly. I’d imagine that’s why it’s not in the fs interfaces.

                                                                                  For every Reader/Writer that doesn’t support Context directly, you need a goroutine running to adapt it, which is not ideal. There is some magic you can do with net.Conn (or rather *net.TCPConn) because of SetDeadline, but even those would need a goroutine, and other types like fs.File (and *os.File) would leave the Read or Write running in the background until it completes, which opens you up to all sorts of issues.
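
                                                                                  To illustrate, the usual adapter is roughly this (an untested sketch; the ReadContext name is made up). The downside is visible in the cancellation branch: the blocking Read keeps running in its goroutine and may still write into p after we’ve returned:

                                                                                     package ctxio

                                                                                     import (
                                                                                         "context"
                                                                                         "io"
                                                                                     )

                                                                                     // ctxReader races a goroutine doing the blocking Read against ctx.Done().
                                                                                     type ctxReader struct{ r io.Reader }

                                                                                     type readResult struct {
                                                                                         n   int
                                                                                         err error
                                                                                     }

                                                                                     func (c ctxReader) ReadContext(ctx context.Context, p []byte) (int, error) {
                                                                                         done := make(chan readResult, 1)
                                                                                         go func() {
                                                                                             n, err := c.r.Read(p)
                                                                                             done <- readResult{n, err}
                                                                                         }()
                                                                                         select {
                                                                                         case res := <-done:
                                                                                             return res.n, res.err
                                                                                         case <-ctx.Done():
                                                                                             // The goroutine is still blocked in Read and may later write into p.
                                                                                             return 0, ctx.Err()
                                                                                         }
                                                                                     }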

                                                                                1. 1

                                                                                    You can use the fs.FS interface to implement something like a frontend for WebDAV, “SSH filesystem”, and so forth, where the need for a context and timeouts is a bit more acute than for plain filesystem access. This actually also applies to io.Reader etc.

                                                                                2. 1

                                                                                  I’m not entirely sure why the new fs package API doesn’t support contexts, but you could potentially write a simple wrapper for that same API which does, exposing a new API with methods like WithContext, maybe?

                                                                                  Especially considering what the documentation for context states:

                                                                                  Contexts should not be stored inside a struct type, but instead passed to each function that needs it.

                                                                                  But ideally I’d like to see context supported in the new fs package too.
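
                                                                                    A rough, untested sketch of what such a wrapper could look like (package and method names invented; it takes the context per call rather than storing it, per the advice above). It inherits the problem from the sibling comment, though: the underlying Open keeps running after cancellation:

                                                                                       package ctxfs

                                                                                       import (
                                                                                           "context"
                                                                                           "io/fs"
                                                                                       )

                                                                                       // FS wraps an fs.FS so every Open is bounded by a caller-supplied context.
                                                                                       type FS struct{ inner fs.FS }

                                                                                       func New(inner fs.FS) FS { return FS{inner: inner} }

                                                                                       type openResult struct {
                                                                                           f   fs.File
                                                                                           err error
                                                                                       }

                                                                                       // OpenContext opens name, giving up (from the caller's point of view)
                                                                                       // once ctx is cancelled.
                                                                                       func (c FS) OpenContext(ctx context.Context, name string) (fs.File, error) {
                                                                                           done := make(chan openResult, 1)
                                                                                           go func() {
                                                                                               f, err := c.inner.Open(name)
                                                                                               done <- openResult{f, err}
                                                                                           }()
                                                                                           select {
                                                                                           case res := <-done:
                                                                                               return res.f, res.err
                                                                                           case <-ctx.Done():
                                                                                               go func() {
                                                                                                   // Avoid leaking the file the background Open eventually returns.
                                                                                                   if res := <-done; res.f != nil {
                                                                                                       res.f.Close()
                                                                                                   }
                                                                                               }()
                                                                                               return nil, ctx.Err()
                                                                                           }
                                                                                       }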

                                                                                1. 27

                                                                                  What’s up with these web pages that don’t actually contain their content? Why should I run some javascript in order to read some text?

                                                                                  1. 29

                                                                                    An article that complains about the size of icon fonts and “flash of unstyled text” yet loads 604k and has a “one moment please” progress spinner to display some basic text is somewhat hilarious. 2021 frontend in a nutshell.

                                                                                  1. 12

                                                                                    a world where applications are spec-driven (or protocol-driven) sounds like a nice place to live, unfortunately i don’t think it works in reality. even if i look at the examples of spec-driven platforms provided by the article, many of those i consider failed:

                                                                                    • email: that thing is mostly broken, because of spam, and when email providers try to limit spam, they have to employ heuristics, which in turn cause problems for smaller email providers (this is even mentioned in the article). no usable encryption too. i think people don’t use email much for communication anymore. for example, in my personal case, most of the communication with friends & family happens through instant-messaging-platforms. my email is for proving my identity when signing up for services, receiving newsletters, talking on mailing lists and getting job offers. so yeah, maybe the last two can be considered communication. but i think you get the point.
                                                                                     • xmpp: xmpp failed for me even without google closing down google-talk. my problem with xmpp was ironically their protocol being extensible ( https://xmpp.org/extensions/ ). (mind you, my experiences are from like 5 years ago, maybe today all this is fixed) this means, you can have an xmpp-client that considers itself an xmpp-client but it does not support for example file-transfer, or video-calls. in theory we can see a certain beauty there, see, you can create a minimalistic client that can transmit messages through xmpp. on the other hand, this is useless for non-technical users. so, while you can say that xmpp has many many clients and servers, and that xmpp has many many features, those two are not true at the same time. of course, one approach is to have one well working and well supported reference implementation that does all that is needed, and when talking about xmpp one can point users to that specific implementation…. i think you see where i am going here :-)
                                                                                    • common lisp: the common-lisp standard is from 1994. there is no newer standard. iirc, it does not have sockets for example. so, to write common-lisp applications, you will learn the standard library of the given specific common lisp implementation.

                                                                                     i’m trying to think about applications meant for non-technical users, where there is a “spec” rather than an app. web-browsers are such a thing i guess? and even there, we only have 2 (or 3, depending how you count) “real” implementations.

                                                                                    1. 13

                                                                                      you can have an xmpp-client that considers themselves an xmpp-client but it does not support for example file-transfer, or video-calls

                                                                                      It was (is?) so much worse than that. You could have two clients that implemented file transfer and video calls, but using different extensions and so they couldn’t interop.

                                                                                      One of the big problems here is that most users do not know the difference between a protocol, service, service provider, and app. They see Hotmail or Gmail or WhatsApp as monolithic things. They use the client provided by the service and they see the client, service, and service provider as a single monolithic entity. A few may sometimes use an additional client, but often don’t realise that they’re still using the same service when they do.

                                                                                      Signal gave up on trying to educate users here. The Signal service, protocol, and app are all presented as a uniform blob. I suspect that this is the biggest reason why Signal is the most successful open messaging platform.

                                                                                      1. 8

                                                                                        …but often don’t realise that they’re still using the same service when they do.

                                                                                        And somehow it goes even further! Just the other day a friend told me that her mother got a new Facebook account because she got a new phone and thought her Facebook account was linked inextricably to the physical device she used to access it. I also have heard a number of anecdotes where people sign up for a new Gmail account each time they get a new Android phone.

                                                                                        If people want federation and open protocols to succeed they should stop writing new software and specs and start working out how to explain the whole idea to users and make them care about it.

                                                                                      2. 8

                                                                                         i’m trying to think about applications meant for non-technical users, where there is a “spec” rather than an app.

                                                                                        • BitTorrent
                                                                                        • Many parts of the Web and its underlying infrastructure, to varying degrees
                                                                                        • The Fediverse: not as mainstream as social media giants, but still a thriving community with millions of users
                                                                                         • Matrix: despite my concerns, it’s still definitely on the “open” side of the closed/open spectrum
                                                                                        1. 10

                                                                                          The Fediverse: not as mainstream as social media giants, but still a thriving community with millions of users

                                                                                          Fediverse is barely a spec. Mastodon only implemented the S2S portion of the ActivityPub spec and even then only a subset of it. Mastodon still uses a bespoke API to talk to its clients and the client/server situation is a mess. The only 3 well-supported S2S implementations I know are Mastodon, Pleroma, and honk. I don’t even want to get into all the issues with ActivityPub.

                                                                                          1. 3

                                                                                            You are absolutely correct. I considered including Mastodon as an example of an implementation with too many idiosyncratic features, but I don’t know enough about it. Maybe you could make a blogpost in response!

                                                                                          2. 6

                                                                                            BitTorrent was originally the name of both the protocol and the client. Later people used the same protocol to build other clients, but the original model wasn’t too different from e.g. Signal. This is probably a major reason for its success. If it had used the “XMPP model” of just publishing a specification (or a specification with a client as an afterthought) then it probably never would have become as popular as it did.

                                                                                            The same more or less applies to the web: Berners-Lee didn’t just publish a bunch of specifications, he published a HTTP server, web browser, and web editor. And the web’s later mainstream growth was driven by products (Netscape and IE mainly) rather than specifications.

                                                                                            1. 4

                                                                                             To be fair, Jabber was the protocol of a reference implementation with corporate backing first, then the IETF spec came later. That was just so long ago in internet years that it’s barely relevant anymore.

                                                                                              1. 1

                                                                                                That’s really interesting! I’d love to see someone write a post called “Opening closed platforms”.

                                                                                                1. 3

                                                                                                  Neither were ever a “closed” platform; it’s just the difference in focus/communication:

                                                                                                  • Publish an open protocol, market/communicate the protocol, and wait for people to write clients for it.

                                                                                                   • Publish an open protocol and write a good (or at least decent) open implementation of it, and mainly advertise the implementation rather than the protocol.

                                                                                            2. 4

                                                                                              common lisp: the common-lisp standard is from 1994. there is no newer standard. iirc, it does not have sockets for example. so, to write common-lisp applications, you will learn the standard library of the given specific common lisp implementation.

                                                                                              Not necessarily, take the example of sockets. As the language is extensible, a library can abstract over implementations extending the language. The same applies to threads, for example. Then all other libraries depend on these projects, instead of implementations.

                                                                                              1. 4

                                                                                               Let’s not forget IRC, where implementing a server is almost impossible by just reading the RFCs. Everyone likes to talk about how easy it is to implement a client, but a server is a whole other can of worms.

                                                                                              1. 14

                                                                                                Perhaps I’m in a foul mood [1] but just once, I would like to see someone not rant, but sit down and actually implement their ideas. Stop complaining, and just do it! Yes, I’ve seen this rant before. I’ve been reading rants like this since 1990 and you know what? NOTHING HAS CHANGED! Why? Probably because it’s easier to complain about it than do anything about it.

                                                                                                [1] UPS died today, and it took out my main computer.

                                                                                                1. 8

                                                                                                  Rust tries. Have a look at Shape of errors to come.

                                                                                                  1. 8

                                                                                                    People have been ranting about it well before 1990, as well.

                                                                                                    However, it is worth noting that plenty of people have sat down and implemented their ideas. There are programming languages out there that attempt to throw off the points mentioned in this article. (Fortress, for example, uses traditional mathematical notation, with a tool to automatically translate source into a LaTeX file.) But the fact is that none of these experiments have gained much traction – or when they do gain traction (e.g. Perl), they are often widely reviled for those exact traits.

                                                                                                    Proponents of these ideas will argue that people are too hidebound for new ideas to reach a critical mass of adoption … and while that is certainly a factor, after so many decades of watching this pattern repeat, I have to wonder. I question the initial premise. Would programming languages actually be better if they were written in a proportional font, or required punctuation marks not printed on a standard keyboard? It’s not clear to me that this assumption is true.

                                                                                                    1. 1

                                                                                                      Well, “would be better if ” omits the most important point – which is “better for whom”. APL is very popular among fans of APL, who have already done the groundwork of learning new tooling and getting access to specialized characters, which is different from but not fundamentally harder than becoming familiar with our usual UNIX build toolchain.

                                                                                                      So long as the unstated “whom” factor in this question is “people who are already professional programmers”, radically new ideas will have a hard time gaining popularity. If we expand it to “people who are planning to become professional programmers”, a handful of other technologies start to make the cut based on the ease with which people can pick them up. Our current software ecosystem is optimized by some mix of two main utility functions: “is it convenient for people who have been administering UNIX machines since 1985” and “is it convenient for people who don’t know what a for loop is”.

                                                                                                      I don’t personally think proportional fonts are a plus. Typography people are big on proportional fonts because they remove ‘rivers’, which can be distracting when you’re looking at a page of text at a distance because part of human nature is to project meaning onto shapes (such as the shape of whitespace), and in densely-packed prose, patterns in whitespace between lines are almost always noise. In source code, patterns in whitespace between lines are basically always intentional and semantically meaningful, and monospace fonts are the easiest way to make such patterns controllable.

                                                                                                      But, unicode can be fantastic for readability, since it aids in chunking. Where the density of ascii-based operators in obfuscated perl and in APL-derived languages like j makes code seem “write-only”, turning multi-character operators into single non-ascii character operators makes code functionally read-only. Still, lots of modern languages support unicode operators & have unicode aliases for multi-character operators, and perhaps widescale adoption of these aliases will produce pressure to create specialized extended keyboards that can type them and lower the price of such keyboards. In the meantime, there are (monospace) fonts that render multi-character operators as single characters, simulating composition, and if they didn’t screw up code alignment this would be a fantastic solution.

                                                                                                      There are a lot of tiny but thriving tech microcultures – forth, smalltalk, APL, prolog, red, and tcl all have them. There are bigger tech microcultures too – the ones around haskell, rust, and erlang. Very occasionally, they seed ideas that are adopted by “the mainstream” but it’s only through decades of experimentation on their own.

                                                                                                      1. 2

                                                                                                        Our current software ecosystem is optimized by some mix of two main utility functions: “is it convenient for people who have been administering UNIX machines since 1985” and “is it convenient for people who don’t know what a for loop is”.

                                                                                                        Any proof of this assertion?

                                                                                                        1. 1

                                                                                                          Only anecdotal evidence from 10 years in the industry and another 10 developing software in public, but I imagine other folks on lobste.rs can back me up.

                                                                                                          I can exhaustively list examples of mediocre tech that has been adopted and popularized for one of these two reasons, but that’s not exactly a proof – it’s merely evidence.

                                                                                                          1. 2

                                                                                                            My own anecdotal experience of a similar amount of time (add 6 years of developing software/doing research in academia, subtract accordingly from the others, and sprinkle in some other years to make the math work) is not the same. There are complicated incentives behind software development, maintenance, and knowledge transmission that affect all of these things, and these incentives are not captured in a dichotomy like this. I see it most trotted out when used to justify alternative opinions, the “You just don’t understand me” defense.

                                                                                                            1. 1

                                                                                                              I would naturally expect the constraints to be quite different in academic research, & of course, I’ve simplified the two cases to their most recognizable representatives. At the same time, highly competent people who have a lot of social capital in organizations (or alternately, people in very small organizations where there’s little friction to doing wild experiments, or people who have been institutionally isolated & are working on projects totally alone) have more flexibility, and I’ve been blessed to spend much of my time in such situations and have limited my exposure to the large-institution norm.

                                                                                                              We could instead say that the two big utility functions in industry are inertia & ease of onboarding – in other words, what tools are people already familiar with & what tools can they become familiar enough with quickly. They interact in interesting ways. For instance, the C precedence rules are not terribly straightforward, but languages that violate the C precedence rules are rarely adopted by folks who have already internalized them (which is to say, nearly everybody who does a lot of coding in any language that has them). How easy/intuitive something is to learn depends on what you’ve learned beforehand, and with professional developers, it is often much easier to lean on a large hard-learned set of rules (even if they are not very good rules) than learn a totally new structure from scratch (even if that structure is both complete and reasonably simple). It’s a lot easier to learn forth from scratch than it is to learn Java, but everybody learns Java in college and nobody learns forth there, so you have a lot less friction if you propose that the five recent graduates working on your project implement some kind of 10,000 line java monstrosity than that they learn forth well enough to write the 50 line forth equivalent.

                                                                                                              As long as there’s a lot of time pressure, we should expect industry to follow inertia, and follow onboarding when it coincides with inertia.

                                                                                                              Onboarding has its own problems. Take PHP, for instance. A lot of languages make the easy things very easy and the difficult things almost impossible, & undirected beginners lack the experience to recognize which practices don’t scale or become maintainable. I spent a couple years, as a kid, in the qbasic developer ghetto – rejecting procedure calls and loop structures in favor of jumps because qbasic’s versions of these structures are underpowered and because I had never written something that benefitted much from the modularity that structured programming brings. Many people never escape from these beginning-programmer ghettos because they don’t get exposed to better tooling in ways that make them want to adopt it. I might not have escaped from it had I not been susceptible to peer pressure & surrounded by people who made fun of me for not knowing C.

                                                                                                              And onboarding interacts with inertia, too. PHP and ruby were common beginner’s languages during the beginning of the “web 2.0” era because it was relatively easy to hook into existing web frameworks and make vaguely professional-looking web applications using them on the server side. These applications rarely scaled, and were full of gotchas and vulnerabilities that were as often inherited from the language or frameworks as they were from the inexperience of the average developer. But Facebook was written in PHP and Twitter in Ruby, so when those applications became big and suddenly needed to scale, rather than immediately rewriting them in a sensible way, Twitter spent a lot of time and money on new Ruby frameworks and Facebook literally forked PHP.

                                                                                                              Folks who are comfortable with all of PHP’s warts move in different circles than folks who are comfortable with all of C’s warts, or UNIX’s, or QBasic’s, but they are unified in that they gained that comfort through hard experience &, given a tight deadline, would rather make use of their extensive knowledge of those warts than learn a new tool that would more closely match the nuances of the problem. (Even I do this most of the time these days. Learning a totally new language in and out can’t be put on a gantt chart – you can’t estimate the unknown unknowns reliably – so I can only do it when not on the clock. And, when I’m not on the clock, learning new languages is not my highest priority. I often would prefer to eat or sleep. I am part of the problem here.)

                                                                                                              Obviously, most wild experiments in programming language design will be shitty. Even the non-shitty ones won’t immediately get traction, and the ones that do get traction will probably take decades to become popular enough in hobby communities for them to begin to appear in industry. I think it’s worth creating these wild experiments, taking them as far as they’ll go, and trying other people’s wild experiment languages too. The alternative is that we incrementally add new templates to STL (while keeping all the existing ones backward compatible) forever, and do comparable work in every other stack too.

                                                                                                              1. 3

                                                                                                                so when those applications became big and suddenly needed to scale, rather than immediately rewriting them in a sensible way, Twitter spent a lot of time and money on new Ruby frameworks and Facebook literally forked PHP.

                                                                                                                Familiarity isn’t the reason why Twitter and Facebook spent time trying new frameworks and forking PHP. The reason is that these companies believed, from a cost-benefit perspective, that it was cheaper to preserve all of their existing code in the existing language and try to improve the runtime rather than rewrite everything in a different language, and risk all the breakages that come with language transitions. I have friends who were around both companies around the time (and at FB on the teams working on Hack, as it was a decent destination for PLT folks back then) and these were well known. Having worked on language migrations myself, I can say that they are very expensive.

                                                                                                                1. 3

                                                                                                                  Familiarity isn’t the reason why Twitter and Facebook spent time trying new frameworks and forking PHP. The reason is that these companies believed, from a cost-benefit perspective, that it was cheaper to preserve all of their existing code in the existing language and try to improve the runtime rather than rewrite everything in a different language, and risk all the breakages that come with language transitions.

                                                                                                                  There’s no contradiction there.

                                                                                                                  Switching to another language would not, ideally, be a language migration so much as a from-scratch rewrite that perhaps reused some database schemas – all of the infrastructure that you created to insulate you from the language (and to insulate the way you want to do & think about problems from the way the language designers would like you to do and think about problems) would be of no particular use, unless you switched to another language so similar to the original one that there wasn’t much point in migration at all. This is a big project, but so is maintenance.

                                                                                                                  I don’t have first-hand access to these codebases, but I do know that PHP is insecure and Ruby on Rails doesn’t scale – and that solving those problems without abandoning the existing frameworks requires a lot of code that’s very hard to get right. If you knew you were likely to produce something very popular, you wouldn’t generally choose PHP or Ruby out of the gate because of those problems, and conceptually nothing about Facebook’s feature set is uniquely well-suited to PHP (and nothing about Twitter’s is uniquely suited to Ruby).

                                                                                                                  I hear that Twitter eventually did migrate away from Ruby for exactly this reason.

                                                                                                                  The inertia of old code & the inertia of existing experience are similar, and they can also overlap. A technical person on the ground level can have a better idea about whether a total rewrite is feasible than a manager with enough power to approve such a rewrite. And techies tend to be hot on rewrites even when essential complexity makes them practically prohibitive. Facebook may have made the right move, when they finally did move, simply because they have this endless accumulated cruft of features introduced in 2007 that hardly anybody has used since 2008 but that still needs to be supported (Facebook Memories recently reminded me of a note I wrote 12 years ago – around the time I last saw the facebook note feature used); had they put a potential rewrite on the table in 2006, when the limitations of PHP were already widely known, the cost-benefit ratio may have been very different.

                                                                                                                  I’ve got some first-hand experience with this. Where I work, we had a large C and perl codebase that grew about 5 years before I joined when we bought a competitor and integrated their existing large Java codebase. Most of the people who had touched the Java codebase before we inherited it either left the company or moved into management.

                                                                                                                  When I was brought on as an intern, one of my tasks was to look at optimizing the tasks that one of the large (millions of lines) java projects was handling. It turned out that this project had extremely wasteful retry logic designed to deal with some kind of known problem with a database we hadn’t supported in a decade, and that the very structure of our process was extremely wasteful. I worked alone for months on one component & got this process’ run time down from 8 days to 5 (though my changes were never adopted). Later, I worked with a couple people to get the process down from 8 days to 1 day by circumventing some of the retry logic & doing things in parallel that could easily be done in parallel.

                                                                                                                  Last year, we moved from our local datacenter to the cloud, and we had to radically restructure how we handled this process, so we rewrote it completely (using a process that was based on something I had developed for mirroring our data in backup data centers) and – the 8 day time period turned into about 20 minutes and the multi-million-line java codebase turned into an approximately one hundred line shell script.

                                                                                                                  I am under the impression that practically every million-line java codebase in my organization can be turned into a one hundred line shell script with equivalent functionality and substantial performance improvements, and that the biggest time and effort sink involved is to reverse engineer what is actually being done (in the absence of the original authors) and whether or not it needs to be done at all. I don’t want to underestimate or understate the detective work involved in figuring out whether or not legacy code paths should exist, because it is substantial and it requires someone with both depth and breadth of experience.

                                                                                                                  This is a bit of a hot take, and it’s not remotely viable in a commercial environment, but I think our software would benefit quite a bit if we were less afraid of rewrites and less afraid of reinventing the wheel. The first working version of any piece of software ought to be treated as an exploration of the problem domain (or at most, a prototype) and should be thrown away and rewritten from scratch using the knowledge gained – now that you know how to make it work, make it right from the ground up, using the tools, techniques, and structure that are suited to the task. This requires knowing a whole range of very dissimilar tools, and ideally would involve being willing to invent new tools.

                                                                                                                  1. 1

                                                                                                                    I am under the impression that practically every million-line java codebase in my organization can be turned into a one hundred line shell script with equivalent functionality and substantial performance improvements, and that the biggest time and effort sink involved is to reverse engineer what is actually being done (in the absence of the original authors) and whether or not it needs to be done at all.

                                                                                                                    Ouch. I’ve never had quite this bad an experience, but I’ve had similar experiences in academia. Must have been cathartic to reduce all that extraneous complexity though.

                                                                                                                    This is a bit of a hot take, and it’s not remotely viable in a commercial environment, but I think our software would benefit quite a bit if we were less afraid of rewrites and less afraid of reinventing the wheel. The first working version of any piece of software ought to be treated as an exploration of the problem domain (or at most, a prototype) and should be thrown away and rewritten from scratch using the knowledge gained – now that you know how to make it work, make it right from the ground up, using the tools, techniques, and structure that are suited to the task. This requires knowing a whole range of very dissimilar tools, and ideally would involve being willing to invent new tools.

                                                                                                                    I think this boils down to what the purpose of software is. For me, the “prime imperative” of writing software is to achieve a desired effect within bounds. In commercial contexts, that’s to achieve a specific goal while keeping costs low. For personal projects, it can be many things, with the bounds often being based around my own time/enthusiasm. Reducing complexity, increasing correctness, and bringing out clarity are only means to an end. With that in mind, I would not be in favor of this constant exploratory PoC work (also because I know several engineers who hate writing throwaway code and become seriously demoralized when their code is just tossed, even if it’s them writing the replacement). I’m also not optimistic about the state space of tools, techniques, and structure. I don’t actually think there are local maxima much higher than the maximum we find ourselves in now from redoing everything from the ground up, and the work in reaching the other maxima is probably a lot more than the magnitude of difference between the maxima. I do think as a field of study we need to balance execution and exploration more so that we don’t make silly decisions based on what’s available to us, but I’m not optimistic that the state space really has a region of maxima much higher than our own at all, let alone within easy reach.

                                                                                                                    1. 2

                                                                                                                      I’m not optimistic that the state space really has a region of maxima much higher than our own at all, let alone within easy reach.

                                                                                                                      I can’t bring myself to be so pessimistic. For one thing, I’ve seen & used environments that were really pleasant but are doomed to never become popular enough to support development after the current crop of devs are gone, and some of these environments have been around for decades. For another thing, if I allowed myself to believe that computers and computing couldn’t get much better than they are right now, I’d get quite depressed. The current state of computing is, to me, like being at the bottom of a well with a broken leg; to imagine that it could never be improved is like realizing that there is no rescue.

                                                                                                                      Then again, in terms of the profit motive (deploying code in such a way that it makes money, whether or not anybody actually likes or benefits from using it), I don’t prioritize that at all. It is probably a mistake to ignore or underestimate that factor, but I think most big shifts have their origin in domains that are shielded from it, and so going forward I’d like to create more domains that are shielded from the pressures of commerce.

                                                                                                                      1. 2

                                                                                                                        The current state of computing is, to me, like being at the bottom of a well with a broken leg; to imagine that it could never be improved is like realizing that there is no rescue.

                                                                                                                        This is the problem with these discussions. They really just boil down to internal assumptions of the state of the world. I don’t think computing is broken. I think computing, like anything else, is under a complex set of forces, and if anything, I’m excited by all the new things coming out in computing. If you don’t think computing is broken, then the prospect of an unknown state space with already discovered maxima isn’t a bad thing. If you do, then it is. And so folks who disagree with the state of computing think we need to change directions and explore, folks who agree think that everything is mostly okay.

                                                                                                                        I don’t prioritize that at all. It is probably a mistake to ignore or underestimate that factor, but I think most big shifts have their origin in domains that are shielded from it, and so going forward I’d like to create more domains that are shielded from the pressures of commerce.

                                                                                                                        Do you mean specifically pressures of commerce, or pressure in general? There are a lot of people for whom programming is just a means to an end, and so there are still pressures, just maybe not monetary. They just slap together some Wordpress blog for their club, or write some uninspired CRUD to help manage inventory at their local library. These folks aren’t sitting there trying to understand whether a graph database better fits their needs than bog standard MySQL; they just care more about what technology does for them than the technology itself. I don’t think that’s unrealistic. Technology should be an enabler for humans, not an ideal to aspire unto.

                                                                                                                        1. 2

                                                                                                                          Do you mean specifically pressures of commerce, or pressure in general?

                                                                                                                          Commerce in particular. I think it’s wonderful when people make elaborate hacks that work for them. Markets rapidly generate complex multipolar traps that incentivize the creation and adoption of elaborate hacks that work for no one.

                                                                                                                          1. 1

                                                                                                                            Full agreement about this, personally.

                                                                                                    2. 3

                                                                                                      Wholeheartedly agree. This was a very annoying thing to read and moreover I think most of the ideas presented here are actually pretty terrible. I’d love to see the author implement their ideas and prove me either wrong or right.

                                                                                                      1. 3

                                                                                                        I’ve been reading rants like this since 1990 and you know what? NOTHING HAS CHANGED

                                                                                                        Well, that’s not quite the case, because this:

                                                                                                        If I write int f(X x), where X is an undeclared type, the compiler should not do what GCC does, which is to write the following:

                                                                                                        error: expected ‘)’ before ‘x’
                                                                                                        

                                                                                                        [..]

                                                                                                        It should say either something both specific and helpful, such as:

                                                                                                        error: use of undeclared type ‘X’ in parameter list of function ‘f’
                                                                                                        

                                                                                                        is now (ten years after this article was written):

                                                                                                        $ cat a.c
                                                                                                        int f(X x) { }
                                                                                                        
                                                                                                        $ gcc a.c
                                                                                                        a.c:1:7: error: unknown type name 'X'
                                                                                                            1 | int f(X x) { }
                                                                                                              |       ^
                                                                                                        
                                                                                                        $ clang a.c
                                                                                                        a.c:1:7: error: unknown type name 'X'
                                                                                                        int f(X x) { }
                                                                                                              ^
                                                                                                        

                                                                                                        So clearly there is progress, and printing the actual code is even better than what was suggested in this article IMO.

                                                                                                        gcc circa 2011 was kind of a cherry-picked example anyway, as it was widely known to be especially horrible in all sorts of ways, including error messages, which were notorious and widely disliked.


                                                                                                        As for the rest of the article: many would disagree that Perl is “human friendly”, many people find the sigils needlessly hard/confusing, and “all of the ugly features of natural languages evolved for specific reasons” is not really the case; well, I suppose there are specific reasons, but that doesn’t mean they’re useful or intentional. I mean, modern English is just old Saxon English badly spoken by Vikings with many mispronunciations and grammar errors, and then the Normans came and further morphed the language with their French (and the French still can’t speak English!). Just as natural selection/evolution doesn’t always pick brilliantly great designs, neither does the evolution of languages.

                                                                                                        Why don’t we use hyphens for hyphenation, minus signs for subtraction, and dashes for ranges, instead of the hyphen-minus for all three?

                                                                                                        As if distinguishing -, −, and – is human friendly… “Oh oops, you used the wrong codepoint to render a - character: yes we know it looks nearly identical and that we already know what you intended from the context, but please enter the correct codepoint for this character that is not on your keyboard.”

                                                                                                        Even if − and – were easy to type, visually distinguishing between the two is hard. I had to zoom to 240% just now to make sure I entered them correctly. And I’m 35 with fairly decent eyesight: in 20 years I’ll probably have to zoom to 350%.

                                                                                                        1. 2

                                                                                                          The author is clearly some kind of typography nerd, but there are good ideas embedded in this. French-style quote marks (which look like << and >>) are a much more visually distinguishable version of “smart quotes”, and the idea of having nestable quotes is good enough that many languages implement them (even if they do not implement them in the same way). Lua’s use of [[ and ]] for nestable quotes seems closest to the ideal here. Of course, you’ll still need either a straight quote system or escaping to display a lone quote :)

                                                                                                          1. 1

                                                                                                            Yeah, using guillemets might be a slight improvement, but to be honest I think it’s a really minor one. And you can’t just handwave away the input issue: I use the Compose key on X, and typing « means pressing Alt < <; even with this, three keystrokes for such a common character is quite a lot IMO.

                                                                                                            “Code is read more often than it is written”, yes yes, I know, but that doesn’t mean we can just sacrifice all ease of writing in favour of readability. Personally I’d consider it to be a bad trade-off.

                                                                                                            1. 1

                                                                                                              It’s a chicken-and-egg problem between hardware and software, and hardware is a little more centralized so it’s easier for a single powerful figure to solve this problem. If tomorrow Jony Ive decided that every macintosh keyboard would have extra keys that supported the entire APL character set from now on, we’d probably see these characters in common use in small projects within a year, and bigger projects within a couple years, and within ten years (assuming other keyboard manufacturers started to follow suit, which they might) we’d probably see them in common use in programming in any language that supported them (including as part of standard libraries, probably with awkward multi-character aliases for folks who still can’t type them).

                                                                                                              The same cannot be said for software, as the failure of coordinated extreme efforts by the python core devs to exterminate 2.x over the course of nearly a decade has shown.

                                                                                                              1. 1

                                                                                                                Many languages never use guillemets; I never use them when writing Dutch or English, for example. They are used in some other European languages, but it’s nowhere near universal. Adding these characters to these keyboards only for programmers seems a bit too narrowly focused, IMO: most people typing on keyboards are not programmers. Hell, even as a programmer I probably write more English and Dutch than code.

                                                                                                                This is why the French, German, and English keyboard layouts are all different: to suit the local language.

                                                                                                                1. 2

                                                                                                                  Folks writing english prose also rarely use the pound, at, caret, asterisk, square bracket, angle bracket, underscore, vertical bar, and backslash characters (and for half a century or more, have been discouraged by style guides from even learning what the semicolon is for), but every US keyboard has them – mostly because programmers use them regularly. Because they are available, users (or developers of user-facing software) have invented justifications for using at, pound, and semicolon in new ways specific to computing. Any new addition to a keyboard that gets widespread adoption will get used because of its availability.

                                                                                                                  Even emoji are now being used in software (even though not only are they difficult to type on basically all platforms, but they don’t display consistently across them and don’t display at all on many).

                                                                                                                  1. 1

                                                                                                                    That’s true, but for one reason or another they are on the keyboard and (much) easier to input, and people are familiar with them because of that.

                                                                                                                    Perhaps the keyboard layout should be changed; I wouldn’t be opposed. But I also don’t think this is something that can really be enforced by the software community alone, though maybe I’m wrong.

                                                                                                      1. 12

                                                                                                        Hear hear!

                                                                                                        This is exactly the kind of low slung cognitive hurdle I find really turns me off to any programming language.

                                                                                                        I think part of my distaste stems from the fact that this language was designed so very recently, in programming-language-history terms.

                                                                                                        It’s easy to understand why a language like C has some aspects of its syntax that might not feel immediately natural to modern day programmers, it came into its own 30 years ago on radically different hardware than what we run today.

                                                                                                        Go doesn’t have that excuse. I feel like its designers revel in this class of abstraction fail - mostly because they’ve been using C forever and their choices don’t present roadblocks to them.

                                                                                                        Make no mistake, enjoying the experience of using one programming language over another is an inherently subjective thing. For my meager brain and the problems I want and need to solve, I will always choose a programming language that presents a higher level of abstraction to me as a programmer.

                                                                                                        Thanks for writing this!

                                                                                                        1. 3

                                                                                                          Do you think there is any truth to the following statement?

                                                                                                          Abstraction solves all problems except for the problem of abstraction.

                                                                                                          If so, do you think this is a problem worth addressing? If so, how? Are there different ways to address it than you would that don’t involve “reveling in abstraction fail”?

                                                                                                          1. 5

                                                                                                            I don’t see abstraction as a problem. I see it as a solution.

                                                                                                            If you’re referring to the performance penalties abstraction tends to impose, then choosing a tool with a lower level of abstraction might well make sense.

                                                                                                            So, tool to task, and then no abstraction problem :)

                                                                                                            1. 9

                                                                                                              If abstraction is never viewed as a problem to you, then you likely don’t share some very basic assumptions made by both myself and likely the designers of Go. But maybe that’s not as fun as snubbing your nose and talking about how instead they “revel in abstraction fail.” And then further go on to speculate as to why they think that way.

                                                                                                              1. 7

                                                                                                                Your point about shared assumptions is very true.

                                                                                                                I tend to enjoy solving problems that aren’t particularly performance intensive, and I enjoy working at a very high level of abstraction.

                                                                                                                However clearly there’s another viewpoint that deserves to be heard here, so I’ll try to hunt down some resources around the points you’re making. Actually I just found this Wikipedia article which gives this a bit more flavor and I think I understand it better. Abstraction (phrased as indirection in the article) can cause you to think you’re solving hard problems when in reality you’re just moving them around.

                                                                                                                Also I apologize if my poor choice of words offended you. I didn’t intend to snub my nose at anything, and I should have been more careful to strictly personalize the point I was making.

                                                                                                                1. 10

                                                                                                                  Thanks. No apology necessary, I’d just love if we could talk about PL design in terms of trade offs instead of being so reductionist/elitist.

                                                                                                                  I don’t think your Wikipedia article quite does the idea I’m trying to get across justice. So I’ll elaborate a bit more. The essential idea of “abstraction solves all problems except for the problem of abstraction” is that abstraction, can itself, be a problem. The pithy quote doesn’t say what kind of problem, and I think that’s intentional: abstraction can be at the root of many sorts of problems. You’ve identified one of them, absolutely, but I think there are more. Principally on my mind is comprehension.

                                                                                                                  The problem with saying that abstraction can cause comprehension problems is that abstraction can also be the solution to comprehension problems. It’s a delicate balance of greys, not an absolutist statement. This is why the “herp derp Go designers have no excuse for ignoring 30 years of PL advancement” is just such a bad take. It assumes PL design has itself advanced, that there’s no or little room for reasonable disagreement because “well they wrote C for so long that their brains have rotted.” Sure, you didn’t say that exact thing, but that’s the implication I took away from your words.

                                                                                                                  Do you think all abstractions are created equal? I don’t. I think some are better than others. And of course it depends on what you’re trying to do. Some abstractions make reading the code easier. Some abstractions can make it harder, especially if the abstraction is more powerful than what is actually necessary.

                                                                                                                  There’s another pithy saying, “No abstraction is better than the wrong abstraction.” This gets at another problem with abstraction, which is that you can build an abstraction too tightly coupled to a problem space, and when that problem space expands or changes, sometimes fixing a bad abstraction is harder than fixing code that never had the abstraction in the first place. Based on this idea, you might, for example, decide to copy a little code instead of adhering to DRY and building an abstraction.

                                                                                                                  Performance is probably the biggest problem I have with abstraction on a day-to-day basis, but it seems like that’s one you’re already familiar with.

                                                                                                                  There are many ways to tackle the problems of abstraction. Go is perhaps on the more extreme end of the spectrum, but IMO, it’s a very reasonable one. If you believe one of the primary motivations for problematic abstractions is the tools available to build abstractions, for example, then a very valid and very reasonable solution to that would be to restrict the tooling. This is, for example, the primary reason why I’ve long been opposed to adding monads to Rust.

                                                                                                                  I don’t believe this is just limited to programming languages either. The comprehension problem with abstraction can be seen everywhere. When conversing with someone, or reading a book or even in educational materials, you’ll often see or hear a general statement, and that’s quickly followed up with examples. Because examples are concrete and help us reason through the abstraction. Of course, the extent to which examples are helpful or even necessary depends on the person. I’ve met plenty of people who have an uncanny ability to stay in abstraction land without ever coming down on to something concrete. I’m not one of them, but I know they exist.

                                                                                                                  1. 7

                                                                                                                    Thank you very much for taking the time to write this up.

                                                                                                                    Let’s hope I manage to actually learn from this and not succumb to knee jerk mis-scoped reactions in the future.

                                                                                                                    What’s interesting is that I have a very different reaction to Rust. I feel like Rust brings some very new ideas to the table. It offers a mix of very high and very low level abstractions, but in general offers a much lower level of abstraction than the programming languages I work in regularly and am most familiar with - Python & Ruby.

                                                                                                                    I think part of my attitude towards Go is rooted in my attitude towards C. It’s an incredible and venerable tool, but one I have stubbed my toe on enough that I find it painful and not enjoyable, and that has perhaps unjustly tainted my perceptions WRT Golang.

                                                                                                                    1. 5

                                                                                                                      Here are some opinions I’ve written on Rust vs. Go, although mostly about Go’s poor abstraction facilities and how they inhibit me. https://users.rust-lang.org/t/what-made-you-choose-rust-over-go/37828/7

                                                                                                                      1. 2

                                                                                                                        This seems like it would be an interesting read, but unfortunately it doesn’t seem to load?

                                                                                                                        1. 5

                                                                                                                          Works for me. I’ve copied it below.


                                                                                                                          I’ve been writing Go ~daily since before 1.0 came out. I’ve been writing Rust ~daily also since before its 1.0 release. I still do. I primarily write Go at work and Rust in my free time these days, although I sometimes write Rust at work and sometimes write Go in my free time.

                                                                                                                          Go’s goals actually deeply resonate with me. I very much appreciate its fast compilation times, opinionated and reasonably orthogonal design and the simplicity of the overall tooling and language. “simplicity” is an overloaded term that folks love to get pedantic about, so I’ll just say that I’m using it in this context to refer to the number of different language constructs one needs to understand, and invariably, how long it takes (or how difficult it is) for someone to start becoming productive in the language.

                                                                                                                          I actually try hard to blend some of those goals that I find appealing into Rust code that I write. It can be very difficult at times. In general, I do my best to avoid writing generic code unless it’s well motivated, or manifests in a way that represents a common pattern among all Rust code. My personal belief is that Go’s lack of generics lends considerably to its simplicity. Writing Go code heavily biases towards less generic code, and thus, more purpose driven code and less YAGNI code. I don’t mean this to be a flippant comment; the best of us succumb to writing YAGNI code.

                                                                                                                          So if I like Go’s goals so much, why do I use Rust? I think I can boil it down to two primary things.

                                                                                                                          The first is obvious: performance and control. The projects I tend to take on in my free time bias towards libraries or applications that materially benefit from as much performance tuning as you want. Go has limits here that just cannot be breached at all, or cannot be breached without sacrificing something meaningful (such as code complexity). GC is certainly part of this, and not just the GC itself, but the effects that GC has on the rest of your code, such as memory barriers. Go just makes it too hard to get optimal codegen in too many cases. And this is why performance critical routines in Go’s standard library are written in Assembly. I don’t mind writing Assembly when I have to, but I don’t want to do it as frequently as I would have to in Go. In Rust, I don’t have to.

                                                                                                                          The second reason is harder to express, but the most succinct way I can put it is this: Go punishes you more than Rust does when you try to encapsulate things. This is a nuanced view that is hard to appreciate if you haven’t used both languages in anger. The principal problems with Go echo a lot of the lower effort criticism of the language. My goal here is to tie them to meaningful problems that I hit in practice. But in summary:

                                                                                                                          • The lack of parametric polymorphism in Go makes it hard to build reusable abstractions, even when those abstractions don’t add too much complexity to the code. The one I miss the most here is probably Option<T>. In Go, one often uses *T instead as a work-around, but this isn’t always desirable or convenient.
                                                                                                                          • The lack of a first class iteration protocol. In Rust, the for loop works with anything that implements IntoIterator. You can define your own types that define their own iteration. Go has no such thing. Instead, its for loop is only defined to work on a limited set of built-in types. Go does have conventions for defining iterators, but there is no protocol and they cannot use the for loop.
                                                                                                                          • Default values. I have a love-hate relationship with default values. On the one hand, they give Go its character and make a lot of things convenient. But on the other hand, they defeat attempts at encapsulation. They also make other things annoying, such as preventing compilation errors in struct literals when a new field is added. (You can avoid this by using positional struct literal syntax, but nobody wants to do that because of the huge sacrifice in readability.)

                                                                                                                          So how do these things hamper encapsulation? The first two are pretty easy to exemplify. Consider what happens if you want to define your own map type in Go. Hell, it doesn’t even have to be generic. But maybe you want to enforce some invariant about it, or store some extra data with it. e.g., Perhaps you want to build an ordered map using generics. Or for the non-generics case, maybe you want to build a map that is only permitted to store certain keys. Either way you slice it, this map is going to be a second class citizen to Go’s normal map type:

                                                                                                                          • You can’t reuse the mymap[key] syntax.
                                                                                                                          • You can’t reuse the for key, value := range mymap { construct.
                                                                                                                          • You can’t reuse the value, ok := mymap[key] syntax.

                                                                                                                        Instead, you wind up needing to define methods for all of these things. It’s not a huge deal, but now your map looks different from most other maps in your program. Even Go’s standard library sync.Map suffers from this. The icing on the cake is that it’s not type safe: it achieves genericity by trafficking in the equivalent of Rust’s Any type, forcing callers to perform type assertions and conversions.
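
                                                                                                                        As a rough illustration (my sketch, not the commenter’s; the RestrictedMap type is hypothetical), here is what the method-based workarounds tend to look like for a map that only permits certain keys:

                                                                                                                            // Package restrictedmap sketches a map that only permits a fixed set of keys.
                                                                                                                            package restrictedmap

                                                                                                                            type RestrictedMap struct {
                                                                                                                                allowed map[string]bool
                                                                                                                                items   map[string]int
                                                                                                                            }

                                                                                                                            func New(keys ...string) *RestrictedMap {
                                                                                                                                m := &RestrictedMap{allowed: map[string]bool{}, items: map[string]int{}}
                                                                                                                                for _, k := range keys {
                                                                                                                                    m.allowed[k] = true
                                                                                                                                }
                                                                                                                                return m
                                                                                                                            }

                                                                                                                            // Set stands in for m[key] = value, plus the invariant check.
                                                                                                                            // It reports whether the key was permitted.
                                                                                                                            func (m *RestrictedMap) Set(key string, value int) bool {
                                                                                                                                if !m.allowed[key] {
                                                                                                                                    return false
                                                                                                                                }
                                                                                                                                m.items[key] = value
                                                                                                                                return true
                                                                                                                            }

                                                                                                                            // Get stands in for value, ok := m[key].
                                                                                                                            func (m *RestrictedMap) Get(key string) (int, bool) {
                                                                                                                                v, ok := m.items[key]
                                                                                                                                return v, ok
                                                                                                                            }

                                                                                                                            // Range stands in for the for key, value := range m construct.
                                                                                                                            func (m *RestrictedMap) Range(f func(key string, value int) bool) {
                                                                                                                                for k, v := range m.items {
                                                                                                                                    if !f(k, v) {
                                                                                                                                        return
                                                                                                                                    }
                                                                                                                                }
                                                                                                                            }

                                                                                                                        Callers then have to write m.Get(k) and m.Range(...) everywhere that a built-in map would just use m[k] and range m.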

                                                                                                                          Default values are a completely different beast. Their presence basically makes it impossible to reason conclusively about the invariants of a type. Say for example you define an exported struct with hidden member fields:

                                                                                                                          type Foo struct {
                                                                                                                              // assume there are important invariants
                                                                                                                              // that relate these three values
                                                                                                                              a, b, c int
                                                                                                                          }
                                                                                                                          
                                                                                                                          func NewFoo() Foo { ... }
                                                                                                                          
                                                                                                                          func (f *Foo) DoSomething() { ... }
                                                                                                                          

                                                                                                                          Naively, you might assume that the only way to build Foo is with NewFoo. And that the only way to mutate its data is by calling methods on Foo such as DoSomething. But this isn’t quite the full story, since callers can write this:

                                                                                                                          var foo Foo
                                                                                                                          

                                                                                                                        Which means a Foo now exists as a default value, where all of its component members are also default values. This in turn implies that any invariant you might want to maintain for the data inside Foo must account for those members’ default values. In many cases this isn’t a huge deal, but it can be a real pain, to the point where I often feel punished for trying to hide data.

                                                                                                                          You can get a reprieve from this by using an unexported type, but that type cannot appear anywhere (recursively) in an exported type. At least in that case, you can be guaranteed to see all such constructions of that type in the same package.

                                                                                                                        None of these things are problems in Rust. YMMV over how much you consider the above things to be problems, but I’ve personally found them to be the things that annoy me the most in Go. An honorary mention is of course sum types, and in particular, exhaustive checks on case analysis. I’ve actually tried to solve this problem to some extent, but the result isn’t ideal because of how heavyweight it is: https://github.com/BurntSushi/go-sumtype. From my perspective, the lack of sum types just means you can move fewer invariants into the type system. For me personally, sum types are a huge part of my day-to-day Rust coding and contribute greatly to its readability. Being able to explicitly enumerate the mutually exclusive states of your data is a huge boon.
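
                                                                                                                        For readers who haven’t seen the idiom, here is a rough sketch (my illustration, not code from the comment; the Shape types are made up) of how sum types are usually emulated in Go, and why exhaustiveness isn’t checked by the compiler:

                                                                                                                            package main

                                                                                                                            import (
                                                                                                                                "fmt"
                                                                                                                                "math"
                                                                                                                            )

                                                                                                                            // Shape emulates a sum type: the unexported marker method "seals" the
                                                                                                                            // interface so only types in this package can implement it.
                                                                                                                            type Shape interface{ isShape() }

                                                                                                                            type Circle struct{ Radius float64 }
                                                                                                                            type Square struct{ Side float64 }

                                                                                                                            func (Circle) isShape() {}
                                                                                                                            func (Square) isShape() {}

                                                                                                                            func Area(s Shape) float64 {
                                                                                                                                // Nothing forces this switch to cover every variant; adding a new
                                                                                                                                // Shape only fails at runtime, via the default case.
                                                                                                                                switch s := s.(type) {
                                                                                                                                case Circle:
                                                                                                                                    return math.Pi * s.Radius * s.Radius
                                                                                                                                case Square:
                                                                                                                                    return s.Side * s.Side
                                                                                                                                default:
                                                                                                                                    panic(fmt.Sprintf("unhandled Shape variant: %T", s))
                                                                                                                                }
                                                                                                                            }

                                                                                                                            func main() {
                                                                                                                                fmt.Println(Area(Circle{Radius: 1}), Area(Square{Side: 2}))
                                                                                                                            }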

                                                                                                                          Anyway, Rust has downsides too. But I’m tired of typing. My main pain points with Rust are: 1) complexity of language features, 2) churn of language/library evolution and 3) compile times. Go suffers from basically none of those things, which is pretty nice.

                                                                                                                    2. 0

                                                                                                                      It assumes PL design has itself advanced

                                                                                                                        I think it has in some ways, but it’s as much about figuring out which abstractions are valuable as it is about expanding the scope of what abstractions we can express.

                                                                                                                      And I think Go provided a really solid catalyst for a lot of good PLT and developer experience improvements in newer programming languages.

                                                                                                              2. 3

                                                                                                              I think the answer is to have a low-level kernel and build the rest of the language up from lower-level primitives. Present the highest-level layer as the default interface, but provide access to the primitives.

                                                                                                              For example, Python has I/O buffers that it uses to open files, but you normally don’t need to touch them to do file work. If you are doing something a bit unusual, you can drop down to that level.
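
                                                                                                              A loose Go analogue of that layering (my example, not the commenter’s): bufio provides the convenient default, while the underlying os.File stays available when you need to drop down.

                                                                                                                  package main

                                                                                                                  import (
                                                                                                                      "bufio"
                                                                                                                      "fmt"
                                                                                                                      "log"
                                                                                                                      "os"
                                                                                                                  )

                                                                                                                  func main() {
                                                                                                                      f, err := os.Open("/etc/hostname") // low-level primitive
                                                                                                                      if err != nil {
                                                                                                                          log.Fatal(err)
                                                                                                                      }
                                                                                                                      defer f.Close()

                                                                                                                      // High-level default: buffered, line-oriented reading.
                                                                                                                      line, err := bufio.NewReader(f).ReadString('\n')
                                                                                                                      if err != nil {
                                                                                                                          log.Fatal(err)
                                                                                                                      }
                                                                                                                      fmt.Print(line)
                                                                                                                  }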

                                                                                                            1. 2

                                                                                                            It’s nice as a little, well, tour to get a bit of a feel for the language and see roughly how it works and what it can do, but it’s ill-suited to actually learning the language.

                                                                                                              Any recommended material on getting started with Go?

                                                                                                              1. 6

                                                                                                                The Go Programming Language book is quite good.

                                                                                                              1. 9

                                                                                                                The nice thing about Go is that when it is verbose, like for the list deletion example, it highlights that the computer is doing more work. If you are constantly deleting items in the middle of an array (or if you’re doing this at all), it might not be the best choice of data structure.

                                                                                                                1. 30

                                                                                                                  This sounds like a post hoc rationalization to me. You can always hide arbitrary computations behind a function call.

                                                                                                                  I believe this is explained by the lack of generics, and I predict that, if/when generics are implemented, the Go stdlib will gain a bunch of slice-manipulating functions à la reverse.

                                                                                                                  1. 2

                                                                                                                    Which is probably one of the stronger arguments for generics.

                                                                                                                    1. 2

                                                                                                                      This sounds like a post hoc rationalization to me.

                                                                                                                      This was always pretty reliably cited as the motivation for a lot of design decisions, back when Go was just released.

                                                                                                                      You can always hide arbitrary computations behind a function call.

                                                                                                                      Yes, but one nice thing about Go is that functions are essentially the only way to hide arbitrary computation. (And, relatedly, the only mechanism that Go has to build abstraction.) When you’re reading code, you know that a + b isn’t hiding some O(n²) bugbear. That’s valuable.

                                                                                                                      1. 3

                                                                                                                        Yup, I agree with the general notion that reducing expressive power is good, because it makes the overall ecosystem simpler.

                                                                                                                        But the specific example with list manipulation is “a wrong proof of the right theorem”, and that is what I object to.

                                                                                                                        Incidentally, a + b example feels like a wrong proof to me as well, but for a different reason. In Go, + can also mean string concatenation, so it can be costly, and doing + in a linear loop can be accidentally quadratic. (I don’t write Go, might be wrong about this one).

                                                                                                                        1. 1

                                                                                                                          In Go, + can also mean string concatenation, so it can be costly

                                                                                                                          How do you mean? String concatenation is O(1)…

                                                                                                                          doing + in a linear loop can be accidentally quadratic. (I don’t write Go, might be wrong about this one).

                                                                                                                          How’s that?

                                                                                                                          1. 2

                                                                                                                            String concatenation is O(1)…

                                                                                                                                Hm, I don’t think that’s the case; it seems to be O(N) here:

                                                                                                                            
                                                                                                                            14:48:36|~/tmp
                                                                                                                            λ bat -p main.go 
                                                                                                                            package main
                                                                                                                            
                                                                                                                            import (
                                                                                                                                "fmt"
                                                                                                                                "time"
                                                                                                                            )
                                                                                                                            
                                                                                                                            func main() {
                                                                                                                                for p := 0; p < 10; p++ {
                                                                                                                                    l := 1000 * (1 << p)
                                                                                                                                    start := time.Now()
                                                                                                                                    s := ""
                                                                                                                                    for i := 0; i < l; i++ {
                                                                                                                                        s += "a"
                                                                                                                                    }
                                                                                                                                    elapsed := time.Since(start)
                                                                                                                                    fmt.Println(len(s), elapsed)
                                                                                                                                }
                                                                                                                            }
                                                                                                                            
                                                                                                                            14:48:40|~/tmp
                                                                                                                            λ go build main.go && ./main 
                                                                                                                            1000 199.1µs
                                                                                                                            2000 505.49µs
                                                                                                                            4000 1.77099ms
                                                                                                                            8000 3.914871ms
                                                                                                                            16000 14.675162ms
                                                                                                                            32000 49.782358ms
                                                                                                                            64000 182.127808ms
                                                                                                                            128000 661.137303ms
                                                                                                                            256000 2.707553408s
                                                                                                                            512000 11.147772027s
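
                                                                                                                                (Not part of the session above, just an editorial aside: the linear way to build a string incrementally in Go is strings.Builder, which amortizes the copying, so the loop as a whole is O(n) rather than O(n²).)

                                                                                                                                    package main

                                                                                                                                    import (
                                                                                                                                        "fmt"
                                                                                                                                        "strings"
                                                                                                                                    )

                                                                                                                                    func main() {
                                                                                                                                        // Same work as the s += "a" loop, but the Builder grows its
                                                                                                                                        // buffer geometrically instead of re-copying on every append.
                                                                                                                                        var b strings.Builder
                                                                                                                                        for i := 0; i < 512000; i++ {
                                                                                                                                            b.WriteString("a")
                                                                                                                                        }
                                                                                                                                        fmt.Println(len(b.String()))
                                                                                                                                    }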
                                                                                                                            
                                                                                                                            1. 3

                                                                                                                              You’re right! O(N). Mea culpa.

                                                                                                                    2. 40

                                                                                                                      it highlights that the computer is doing more work

                                                                                                                        It seems strange to me to trust in the ritual of writing strange, obnoxious, bug-prone code to reflect the toil the computer will be burdened with. Doubly strange when append() is a magical generic built-in which can’t be implemented in Go itself, so it’s hard to even say, without disassembly and studying the compiler, what the code would end up doing at runtime; I guess we can make very good educated guesses.

                                                                                                                      1. 20

                                                                                                                        IMHO, no performance concerns are valid* until a profiler is run. If it wasn’t run, the program was fast enough to begin with, and if it was, you’ll find out where you actually have to optimise.

                                                                                                                            • Exception: if there are two equally readable/maintainable ways to write code, and one is faster than the other, prefer the faster version. Otherwise, write readable code and optimise when/if needed.
                                                                                                                        1. 4

                                                                                                                          I feel this statement is a bit too generic, even with the exception. Especially when you’re talking about generic stdlib(-ish) functions that may be used in a wide variety of use cases, I think it makes sense to preëmptively think about performance.

                                                                                                                          This doesn’t mean you should always go for the fastest possible performance, but I think it’s reasonable to assume that sooner or later (and probably sooner) someone is going to hit some performance issues if you just write easy/slow implementations that may be “fast enough” for some cases, but not for others.

                                                                                                                        2. 18

                                                                                                                          But I want the computer to be doing the work, not me as human source code reader or, worse, source code writer.

                                                                                                                          1. 1

                                                                                                                                    Sure, but a language that hides what is computationally intensive is worse.

                                                                                                                            1. 24

                                                                                                                              Source code verbosity is poorly correlated with runtime cost, e.g. bubble sort code is shorter and simpler than other sorting algorithms.

                                                                                                                              Even in the case of removing items from arrays, if you have many items to remove, you can write a tiny loop with quadratic performance, or write longer, more complex code that does extra bookkeeping to filter items in one pass.
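
                                                                                                                                        A small sketch of that trade-off (my illustration, not the commenter’s code): the short version re-copies the tail on every removal, while the bookkeeping version filters in one pass.

                                                                                                                                            package main

                                                                                                                                            import "fmt"

                                                                                                                                            func main() {
                                                                                                                                                // Short but quadratic in the worst case: every removal
                                                                                                                                                // copies the remaining tail of the slice again.
                                                                                                                                                items := []int{1, 2, 3, 4, 5, 6}
                                                                                                                                                for i := 0; i < len(items); {
                                                                                                                                                    if items[i]%2 != 0 { // remove odd numbers
                                                                                                                                                        items = append(items[:i], items[i+1:]...)
                                                                                                                                                    } else {
                                                                                                                                                        i++
                                                                                                                                                    }
                                                                                                                                                }

                                                                                                                                                // Longer, with extra bookkeeping (a write position), but a
                                                                                                                                                // single O(n) pass that reuses the same backing array.
                                                                                                                                                nums := []int{1, 2, 3, 4, 5, 6}
                                                                                                                                                kept := nums[:0]
                                                                                                                                                for _, n := range nums {
                                                                                                                                                    if n%2 == 0 {
                                                                                                                                                        kept = append(kept, n)
                                                                                                                                                    }
                                                                                                                                                }

                                                                                                                                                fmt.Println(items, kept) // [2 4 6] [2 4 6]
                                                                                                                                            }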

                                                                                                                              1. 6

                                                                                                                                Then don’t build your language to hide what’s computationally expensive. Fortunately, most languages don’t do any sort of hiding; folding something like list deletion into a single function call is not hiding it - so this statement isn’t relevant to the discussion about Go (even if true in a vacuum). Any function or language construct can contain any arbitrary amount of computational work - the only way to know what’s actually going on is to read the docs, look at the compiler, and/or inspect the compiled code. And, like @kornel stated, “Source code verbosity is poorly correlated with runtime cost”.

                                                                                                                              2. 1

                                                                                                                                As a user, I don’t want the computer to be doing the work. It reduces battery life.

                                                                                                                                1. 10

                                                                                                                                  If you’re referring to compilation or code-generation - that work is trivial. If you’re referring to unnecessary computation hidden in function calls - writing the code by hand is an incredibly bad solution for this. The correct solutions include writing efficient code, writing good docs on code to include performance characteristics, and reading the function source code and/or documentation to check on the performance characteristics.

                                                                                                                                                As @matklad said, “You can always hide arbitrary computations behind a function call.”