1. 16

  2. 14

    Perhaps I’m in a foul mood [1] but just once, I would like to see someone not rant, but sit down and actually implement their ideas. Stop complaining, and just do it! Yes, I’ve seen this rant before. I’ve been reading rants like this since 1990 and you know what? NOTHING HAS CHANGED! Why? Probably because it’s easier to complain about it than do anything about it.

    [1] UPS died today, and it took out my main computer.

    1. 8

      People have been ranting about it well before 1990, as well.

      However, it is worth noting that plenty of people have sat down and implemented their ideas. There are programming languages out there that attempt to throw off the conventions this article complains about. (Fortress, for example, uses traditional mathematical notation, with a tool to automatically translate source into a LaTeX file.) But the fact is that none of these experiments have gained much traction – or when they do gain traction (e.g. Perl), they are often widely reviled for those exact traits.

      Proponents of these ideas will argue that people are too hidebound for new ideas to reach a critical mass of adoption … and while that is certainly a factor, after so many decades of watching this pattern repeat, I have to wonder. I question the initial premise. Would programming languages actually be better if they were written in a proportional font, or required punctuation marks not printed on a standard keyboard? It’s not clear to me that this assumption is true.

      1. 1

        Well, “would be better if ” omits the most important point – which is “better for whom”. APL is very popular among fans of APL, who have already done the groundwork of learning new tooling and getting access to specialized characters, which is different from but not fundamentally harder than becoming familiar with our usual UNIX build toolchain.

        So long as the unstated “whom” factor in this question is “people who are already professional programmers”, radically new ideas will have a hard time gaining popularity. If we expand it to “people who are planning to become professional programmers”, a handful of other technologies start to make the cut based on the ease with which people can pick them up. Our current software ecosystem is optimized by some mix of two main utility functions: “is it convenient for people who have been administering UNIX machines since 1985” and “is it convenient for people who don’t know what a for loop is”.

        I don’t personally think proportional fonts are a plus. Typography people are big on proportional fonts because they remove ‘rivers’: stripes of whitespace that can be distracting when you look at a page of text from a distance, because part of human nature is to project meaning onto shapes (such as the shape of whitespace), and in densely-packed prose, patterns in the whitespace between lines are almost always noise. In source code, patterns in the whitespace between lines are basically always intentional and semantically meaningful, and monospace fonts are the easiest way to keep such patterns under the author’s control.
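
        To make that concrete, here is a contrived C snippet of my own (the struct and its field names are invented purely for illustration): the vertical alignment carries real information about which value and which comment belong to which field, and it only survives when every character is the same width.

        struct config {
            int         retries;    /* how many times to retry         */
            int         timeout;    /* per-attempt timeout, in seconds */
            const char *log_path;   /* where to append diagnostics     */
        };

        static const struct config defaults = {
            .retries  = 3,
            .timeout  = 30,
            .log_path = "example.log",   /* invented path, for illustration */
        };

        Render that in a proportional font and the columns collapse, even though nothing about the meaning has changed.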

        But, unicode can be fantastic for readability, since it aids in chunking. Where the density of ascii-based operators in obfuscated perl and in APL-derived languages like j makes code seem “write-only”, turning multi-character operators into single non-ascii characters makes code functionally read-only: easy to scan, hard to type. Still, lots of modern languages support unicode operators & have unicode aliases for multi-character operators, and perhaps wide-scale adoption of these aliases will produce pressure to create specialized extended keyboards that can type them and lower the price of such keyboards. In the meantime, there are (monospace) fonts that render multi-character operators as single characters, simulating composition, and if they didn’t screw up code alignment this would be a fantastic solution.

        There are a lot of tiny but thriving tech microcultures – forth, smalltalk, APL, prolog, red, and tcl all have them. There are bigger tech microcultures too – the ones around haskell, rust, and erlang. Very occasionally, they seed ideas that are adopted by “the mainstream” but it’s only through decades of experimentation on their own.

        1. 2

          Our current software ecosystem is optimized by some mix of two main utility functions: “is it convenient for people who have been administering UNIX machines since 1985” and “is it convenient for people who don’t know what a for loop is”.

          Any proof of this assertion?

          1. 1

            Only anecdotal evidence from 10 years in the industry and another 10 developing software in public, but I imagine other folks on lobste.rs can back me up.

            I can exhaustively list examples of mediocre tech that has been adopted and popularized for one of these two reasons, but that’s not exactly a proof – it’s merely evidence.

            1. 2

              My own anecdotal experience of a similar amount of time (add 6 years of developing software/doing research in academia, subtract accordingly from the others, and sprinkle in some other years to make the math work) is not the same. There are complicated incentives behind software development, maintenance, and knowledge transmission that affect all of these things, and these incentives are not captured in a dichotomy like this. I see it trotted out most often to justify alternative opinions, as the “You just don’t understand me” defense.

              1. 1

                I would naturally expect the constraints to be quite different in academic research, & of course, I’ve simplified the two cases to their most recognizable representatives. At the same time, highly competent people who have a lot of social capital in organizations (or alternately, people in very small organizations where there’s little friction to doing wild experiments, or people who have been institutionally isolated & are working on projects totally alone) have more flexibility, and I’ve been blessed to spend much of my time in such situations and have limited my exposure to the large-institution norm.

                We could instead say that the two big utility functions in industry are intertia & ease of onboarding – in other words, what tools are people already familiar with & what tools can they become familiar enough with quickly. They interact in interesting ways. For instance, the C precedence rules are not terribly straightforward, but languages that violate the C precedence rules are rarely adopted by folks who have already internalized them (which is to say, nearly everybody who does a lot of coding in any language that has them). How easy/intuitive something is to learn depends on what you’ve learned beforehand, and with professional developers, it is often much easier to lean on a large hard-learned set of rules (even if they are not very good rules) than learn a totally new structure from scratch (even if that structure is both complete and reasonably simple). It’s a lot easier to learn forth from scratch than it is to learn Java, but everybody learns Java in college and nobody learns forth there, so you have a lot less friction if you propose that the five recent graduates working on your project implement some kind of 10,000 line java monstrosity than that they learn forth well enough to write the 50 line forth equivalent.
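
                To make the precedence point concrete, here is a toy C example of my own (FLAG and the values are invented): the bitwise operators bind more loosely than the comparisons, so the natural reading of a flag test is not what the compiler sees.

                #include <stdio.h>

                #define FLAG 0x4

                int main(void)
                {
                    int x = 0x3;   /* FLAG is *not* set in x */

                    /* Reads as "x has FLAG set", but == binds tighter than &, so this
                       parses as x & (FLAG == FLAG), i.e. x & 1, which happens to be
                       true here.  gcc and clang will warn about it, at least with -Wall. */
                    if (x & FLAG == FLAG)
                        puts("surprising: this prints");

                    /* What was almost certainly meant: */
                    if ((x & FLAG) == FLAG)
                        puts("never printed, because FLAG really is not set");

                    return 0;
                }

                Once a rule like that has been internalized, a language that “fixes” the precedence reads as wrong, which is exactly the kind of inertia I mean.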

                As long as there’s a lot of time pressure, we should expect industry to follow inertia, and follow onboarding when it coincides with inertia.

                Onboarding has its own problems. Take PHP, for instance. A lot of languages make the easy things very easy and the difficult things almost impossible, & undirected beginners lack the experience to recognize which practices don’t scale or become maintainable. I spent a couple years, as a kid, in the qbasic developer ghetto – rejecting procedure calls and loop structures in favor of jumps because qbasic’s versions of these structures are underpowered and because I had never written something that benefitted much from the modularity that structured programming brings. Many people never escape from these beginning-programmer ghettos because they don’t get exposed to better tooling in ways that make them want to adopt it. I might not have escaped from it had I not been susceptible to peer pressure & surrounded by people who made fun of me for not knowing C.

                And onboarding interacts with inertia, too. PHP and ruby were common beginner’s languages during the beginning of the “web 2.0” era because it was relatively easy to hook into existing web frameworks and make vaguely professional-looking web applications using them on the server side. These applications rarely scaled, and were full of gotchas and vulnerabilities that were as often inherited from the language or frameworks as they were from the inexperience of the average developer. But Facebook was written in PHP and Twitter in Ruby, so when those applications became big and suddenly needed to scale, rather than immediately rewriting them in a sensible way, Twitter spent a lot of time and money on new Ruby frameworks and Facebook literally forked PHP.

                Folks who are comfortable with all of PHP’s warts move in different circles than folks who are comfortable with all of C’s warts, or UNIX’s, or QBasic’s, but they are unified in that they gained that comfort through hard experience &, given a tight deadline, would rather make use of their extensive knowledge of those warts than learn a new tool that would more closely match the nuances of the problem. (Even I do this most of the time these days. Learning a totally new language inside and out can’t be put on a Gantt chart – you can’t estimate the unknown unknowns reliably – so I can only do it when not on the clock. And, when I’m not on the clock, learning new languages is not my highest priority. I often would prefer to eat or sleep. I am part of the problem here.)

                Obviously, most wild experiments in programming language design will be shitty. Even the non-shitty ones won’t immediately get traction, and the ones that do get traction will probably take decades to become popular enough in hobby communities for them to begin to appear in industry. I think it’s worth creating these wild experiments, taking them as far as they’ll go, and trying other people’s wild experiment languages too. The alternative is that we incrementally add new templates to STL (while keeping all the existing ones backward compatible) forever, and do comparable work in every other stack too.

                1. 3

                  so when those applications became big and suddenly needed to scale, rather than immediately rewriting them in a sensible way, Twitter spent a lot of time and money on new Ruby frameworks and Facebook literally forked PHP.

                  Familiarity isn’t the reason why Twitter and Facebook spent time trying new frameworks and forking PHP. The reason is that these companies believed, from a cost-benefit perspective, that it was cheaper to preserve all of their existing code in the existing language and try to improve the runtime rather than rewrite everything in a different language, and risk all the breakages that come with language transitions. I have friends who were at both companies at the time (and at FB on the teams working on Hack, as it was a decent destination for PLT folks back then), and these considerations were well known. Having worked on language migrations myself, I can say that they are very expensive.

                  1. 3

                    Familiarity isn’t the reason why Twitter and Facebook spent time trying new frameworks and forking PHP. The reason is that these companies believed, from a cost-benefit perspective, that it was cheaper to preserve all of their existing code in the existing language and try to improve the runtime rather than rewrite everything in a different language, and risk all the breakages that come with language transitions.

                    There’s no contradiction there.

                    Switching to another language would not, ideally, be a language migration so much as a from-scratch rewrite that perhaps reused some database schemas – all of the infrastructure that you created to insulate you from the language (and to insulate the way you want to do & think about problems from the way the language designers would like you to do and think about problems) would be of no particular use, unless you switched to another language so similar to the original one that there wasn’t much point in migration at all. This is a big project, but so is maintenance.

                    I don’t have first-hand access to these codebases, but I do know that PHP is insecure and Ruby on Rails doesn’t scale – and that solving those problems without abandoning the existing frameworks requires a lot of code that’s very hard to get right. If you knew you were likely to produce something very popular, you wouldn’t generally choose PHP or Ruby out of the gate because of those problems, and conceptually nothing about Facebook’s feature set is uniquely well-suited to PHP (and nothing about Twitter’s is uniquely suited to Ruby).

                    I hear that Twitter eventually did migrate away from Ruby for exactly this reason.

                    The inertia of old code & the inertia of existing experience are similar, and they can also overlap. A technical person on the ground level can have a better idea about whether a total rewrite is feasible than a manager with enough power to approve such a rewrite. And techies tend to be hot on rewrites even when essential complexity makes them practically prohibitive. Facebook may have made the right move, when they finally did move, simply because they have this endless accumulated cruft of features introduced in 2007 that hardly anybody has used since 2008 but that still needs to be supported (Facebook Memories recently reminded me of a note I wrote 12 years ago – around the time I last saw the facebook note feature used); had they put a potential rewrite on the table in 2006, when the limitations of PHP were already widely known, the cost-benefit ratio might have been very different.

                    I’ve got some first-hand experience with this. Where I work, we had a large C and perl codebase that grew, about 5 years before I joined, when we bought a competitor and integrated their existing large Java codebase. Most of the people who had touched the Java codebase before we inherited it either left the company or moved into management. When I was brought on as an intern, one of my tasks was to look at optimizing the tasks that one of the large (millions of lines) java projects was handling. It turned out that this project had extremely wasteful retry logic designed to deal with some kind of known problem with a database we hadn’t supported in a decade, and that the very structure of our process was extremely wasteful. I worked alone for months on one component & got this process’ run time down from 8 days to 5 (though my changes were never adopted). Later, I worked with a couple people to get the process down from 8 days to 1 day by circumventing some of the retry logic & doing things in parallel that could easily be done in parallel.

                    Last year, we moved from our local datacenter to the cloud and had to radically restructure how we handled this process, so we rewrote it completely (using a process that was based on something I had developed for mirroring our data in backup data centers). The 8 day run time turned into about 20 minutes, and the multi-million-line java codebase turned into an approximately one hundred line shell script.

                    I am under the impression that practically every million-line java codebase in my organization can be turned into a one hundred line shell script with equivalent functionality and substantial performance improvements, and that the biggest time and effort sink involved is to reverse engineer what is actually being done (in the absence of the original authors) and whether or not it needs to be done at all. I don’t want to underestimate or understate the detective work involved in figuring out whether or not legacy code paths should exist, because it is substantial and it requires someone with both depth and breadth of experience.

                    This is a bit of a hot take, and it’s not remotely viable in a commercial environment, but I think our software would benefit quite a bit if we were less afraid of rewrites and less afraid of reinventing the wheel. The first working version of any piece of software ought to be treated as an exploration of the problem domain (or at most, a prototype) and should be thrown away and rewritten from scratch using the knowledge gained – now that you know how to make it work, make it right from the ground up, using the tools, techniques, and structure that are suited to the task. This requires knowing a whole range of very dissimilar tools, and ideally would involve being willing to invent new tools.

                    1. 1

                      I am under the impression that practically every million-line java codebase in my organization can be turned into a one hundred line shell script with equivalent functionality and substantial performance improvements, and that the biggest time and effort sink involved is to reverse engineer what is actually being done (in the absence of the original authors) and whether or not it needs to be done at all.

                      Ouch. I’ve never had quite this bad an experience, but I’ve had similar experiences in academia. Must have been cathartic to reduce all that extraneous complexity though.

                      This is a bit of a hot take, and it’s not remotely viable in a commercial environment, but I think our software would benefit quite a bit if we were less afraid of rewrites and less afraid of reinventing the wheel. The first working version of any piece of software ought to be treated as an exploration of the problem domain (or at most, a prototype) and should be thrown away and rewritten from scratch using the knowledge gained – now that you know how to make it work, make it right from the ground up, using the tools, techniques, and structure that are suited to the task. This requires knowing a whole range of very dissimilar tools, and ideally would involve being willing to invent new tools.

                      I think this boils down to what the purpose of software is. For me, the “prime imperative” of writing software is to achieve a desired effect within bounds. In commercial contexts, that’s to achieve a specific goal while keeping costs low. For personal projects, it can be many things, with the bounds often being based around my own time/enthusiasm. Reducing complexity, increasing correctness, and bringing out clarity are only means to an end. With that in mind, I would not be in favor of this constant exploratory PoC work (also because I know several engineers who hate writing throwaway code and become seriously demoralized when their code is just tossed, even if it’s them writing the replacement). I’m also not optimistic about the state space of tools, techniques, and structure. I don’t actually think there are local maxima much higher than the maximum we find ourselves in now from redoing everything from the ground up, and the work in reaching the other maxima is probably a lot more than the magnitude of difference between the maxima. I do think as a field of study we need to balance execution and exploration more so that we don’t make silly decisions based on what’s available to us, but I’m not optimistic that the state space really has a region of maxima much higher than our own at all, let alone within easy reach.

                      1. 2

                        I’m not optimistic that the state space really has a region of maxima much higher than our own at all, let alone within easy reach.

                        I can’t bring myself to be so pessimistic. For one thing, I’ve seen & used environments that were really pleasant but are doomed to never become popular enough to support development after the current crop of devs are gone, and some of these environments have been around for decades. For another thing, if I allowed myself to believe that computers and computing couldn’t get much better than they are right now, I’d get quite depressed. The current state of computing is, to me, like being at the bottom of a well with a broken leg; to imagine that it could never be improved is like realizing that there is no rescue.

                        Then again, in terms of the profit motive (deploying code in such a way that it makes money, whether or not anybody actually likes or benefits from using it), I don’t prioritize that at all. It is probably a mistake to ignore or underestimate that factor, but I think most big shifts have their origin in domains that are shielded from it, and so going forward I’d like to create more domains that are shielded from the pressures of commerce.

                        1. 2

                          The current state of computing is, to me, like being at the bottom of a well with a broken leg; to imagine that it could never be improved is like realizing that there is no rescue.

                          This is the problem with these discussions. They really just boil down to internal assumptions about the state of the world. I don’t think computing is broken. I think computing, like anything else, is under a complex set of forces, and if anything, I’m excited by all the new things coming out in computing. If you don’t think computing is broken, then the prospect of an unknown state space with already discovered maxima isn’t a bad thing. If you do, then it is. And so folks who disagree with the state of computing think we need to change direction and explore, while folks who agree think that everything is mostly okay.

                          I don’t prioritize that at all. It is probably a mistake to ignore or underestimate that factor, but I think most big shifts have their origin in domains that are shielded from it, and so going forward I’d like to create more domains that are shielded from the pressures of commerce.

                          Do you mean specifically pressures of commerce, or pressure in general? There are a lot of people for whom programming is just a means to an end, and so there are still pressures, just maybe not monetary. They just slap together some Wordpress blog for their club, or write some uninspired CRUD to help manage inventory at their local library. These folks aren’t sitting there trying to understand whether a graph database better fits their needs than bog standard MySQL; they just care more about what technology does for them than the technology itself. I don’t think that’s unrealistic. Technology should be an enabler for humans, not an ideal to aspire unto.

                          1. 2

                            Do you mean specifically pressures of commerce, or pressure in general?

                            Commerce in particular. I think it’s wonderful when people make elaborate hacks that work for them. Markets rapidly generate complex multipolar traps that incentivize the creation and adoption of elaborate hacks that work for no one.

                            1. 1

                              Full agreement about this, personally.

      2. 8

        Rust tries. Have a look at “Shape of errors to come”.

        1. 3

          I’ve been reading rants like this since 1990 and you know what? NOTHING HAS CHANGED

          Well, that’s not quite the case, because this:

          If I write int f(X x), where X is an undeclared type, the compiler should not do what GCC does, which is to write the following:

          error: expected ‘)’ before ‘x’
          

          [..]

          It should say either something both specific and helpful, such as:

          error: use of undeclared type ‘X’ in parameter list of function ‘f’
          

          is now (ten years after this article was written):

          $ cat a.c
          int f(X x) { }
          
          $ gcc a.c
          a.c:1:7: error: unknown type name 'X'
              1 | int f(X x) { }
                |       ^
          
          $ clang a.c
          a.c:1:7: error: unknown type name 'X'
          int f(X x) { }
                ^
          

          So clearly there is progress, and printing the actual code is even better than what was suggested in this article IMO.

          gcc anno 2011 was kind of a cherry-picked example anyway, as it was widely known to be especially horrible in all sorts of ways, including error messages, which were notorious and widely disliked.


          As for the rest of the article: many would disagree that Perl is “human friendly”, many people find the sigils needlessly hard/confusing, and “all of the ugly features of natural languages evolved for specific reasons” is not really the case; well, I suppose there are specific reasons, but that doesn’t mean they’re useful or intentional. I mean, modern English is just old Saxon English badly spoken by Vikings with many mispronunciations and grammar errors, and then the Normans came and further morphed the language with their French (and the French still can’t speak English!). Just as natural selection/evolution doesn’t always pick brilliantly great designs, neither does the evolution of languages.

          Why don’t we use hyphens for hyphenation, minus signs for subtraction, and dashes for ranges, instead of the hyphen-minus for all three?

          As if distinguishing -, −, and – is human friendly… “Oh oops, you used the wrong codepoint to render a - character: yes we know it looks nearly identical and that we already know what you intended from the context, but please enter the correct codepoint for this character that is not on your keyboard.”

          Even if − and – were easy to type, visually distinguishing between the two is hard. I had to zoom to 240% just now to make sure I entered them correctly. And I’m 35 with fairly decent eyesight: in 20 years I’ll probably have to zoom to 350%.

          1. 2

            The author is clearly some kind of typography nerd, but there are good ideas embedded in this. French-style quote marks (which look like << and >>) are a much more visually distinguishable version of “smart quotes”, and the idea of having nestable quotes is good enough that many languages implement them (even if they do not implement them in the same way). Lua’s use of [[ and ]] for nestable quotes seems closest to the ideal here. Of course, you’ll still need either a straight quote system or escaping to display a lone quote :)

            1. 1

              Yeah, using guillemets might be a slight improvement, but to be honest I think it’s a really minor one. And you can’t just handwave away the input issue: I use the Compose key on X, and typing « means pressing Alt < <; even with this, three keystrokes for such a common character is quite a lot IMO.

              “Code is read more often than it is written”, yes yes, I know, but that doesn’t mean we can just sacrifice all ease of writing in favour of readability. Personally I’d consider it to be a bad trade-off.

              1. 1

                It’s a chicken-and-egg problem between hardware and software, and hardware is a little more centralized so it’s easier for a single powerful figure to solve this problem. If tomorrow Jony Ive decided that every macintosh keyboard would have extra keys that supported the entire APL character set from now on, we’d probably see these characters in common use in small projects within a year, and bigger projects within a couple years, and within ten years (assuming other keyboard manufacturers started to follow suit, which they might) we’d probably see them in common use in programming in any language that supported them (including as part of standard libraries, probably with awkward multi-character aliases for folks who still can’t type them).

                The same cannot be said for software, as the failure of coordinated extreme efforts by the python core devs to exterminate 2.x over the course of nearly a decade has shown.

                1. 1

                  Many languages never use guillemets; I never use them when writing Dutch or English, for example. They are used in some other European languages, but it’s nowhere near universal. Adding these characters to these keyboards only for programmers seems a bit too narrowly focused, IMO: most people typing on keyboards are not programmers. Hell, even as a programmer I probably write more English and Dutch than code.

                  This is why the French, German, and English keyboard layouts are all different: to suit the local language.

                  1. 2

                    Folks writing english prose also rarely use the pound, at, caret, asterisk, square bracket, angle bracket, underscore, vertical bar, and backslash characters (and for half a century or more, have been discouraged by style guides from even learning what the semi-colon is for), but every US keyboard has them – mostly because programmers use them regularly. Because they are available, users (or developers of user-facing software) have invented justifications for using at, pound, and semicolon in new ways specific to computing. Any new addition to a keyboard that gets widespread adoption will get used because of its availability.

                    Even emoji are now being used in software (even though they are difficult to type on basically all platforms, don’t display consistently across them, and don’t display at all on many).

                    1. 1

                      That’s true, but for one reason or another they are on the keyboard and (much) easier to input, and people are familiar with them because of that.

                      Perhaps the keyboard layout should be changed; I wouldn’t be opposed. But I also don’t think this is something that can really be pushed through by the software community alone; then again, maybe I’m wrong.

          2. 3

            Wholeheartedly agree. This was a very annoying thing to read and moreover I think most of the ideas presented here are actually pretty terrible. I’d love to see the author implement their ideas and prove me either wrong or right.

          3. 3

            A lot of these things are difficult to change because they require changing multiple things at once. For example, I favour an indenting style that uses tabs for indenting and spaces for alignment. This looks good in a fixed-width font with any tab size, and it makes it quite easy to implement elastic tabs and proportional fonts in a text editor without breaking things for everyone else, but it is a lot more work to maintain (clang-format only just grew support for it, and that support is somewhat flaky). I often edit code in vim over an ssh session, so requiring a proportional font and fancy typesetting would mean adding these things to a terminal emulator and terminal-based editors to avoid breaking my workflow.
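
            As a tiny, contrived sketch of that style (my own example; the tabs have quite possibly been flattened to spaces by the time this comment renders, which rather proves the “more work to maintain” point): indentation is one tab per block level, and anything that has to line up past the indent, like the wrapped parameters and the trailing comments, is padded with spaces.

            #include <stdio.h>

            /* Leading tabs mark the block level; spaces do all of the alignment,
               so the columns survive whatever tab width the reader prefers. */
            static int add_clamped(int a,
                                   int b,      /* continuation aligned with spaces */
                                   int limit)
            {
            	int sum = a + b;    /* one tab of indentation */

            	if (sum > limit)
            		sum = limit;    /* two tabs of indentation */
            	return sum;
            }

            int main(void)
            {
            	printf("%d\n", add_clamped(40, 5, 42));   /* prints 42 */
            	return 0;
            }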

            I completely agree about the nice nesting properties of open and close quotes, but how do you type them? In Word, for example, I type a vertical quote and it figures out what I meant (and mostly gets it right), but if I write "hello "quoted" world" in a text editor, then how does it know that the first two are open quotes and the second two are close quotes? On a French keyboard with guillemets it’s easy, but not on any English-language keyboards. You could try to embed the LaTeX convention that a back-tick means an open quote and a vertical quote means a close quote, but that’s hard to retrofit to any language that already uses either or both of those symbols.

            There is a tension in adoption that I find quite interesting though: most programmers who will use any successful language written today have not yet learned to program and so would benefit from a language designed from scratch to be easy to learn. Most people who learn to program will learn a language that’s already popular. The first adopters of any new language tend to be experienced programmers, who are more likely to make the switch if the language is familiar to them.

            1. 2

              I want to reply to a decade-younger self who wrote a comment to the author on this post. It seems that I sketched three criticisms of the article’s argument.

              I claimed that programming is mathematics, and thus that UX concerns must bend before the needs of mathematical theorems. However, Past Corbin did not understand that Turing categories have arbitrary syntax; the Church-Turing Thesis is actually a family of proven theorems, not a hypothesis about maths. As a result, while programming is mathematical, the choice of language syntax is quite arbitrary and we have great latitude to choose to make our languages friendlier to humans.

              I claimed that the dialect of English used in legal writing has poor UX. Combined with the author’s argument that English has been well-worn and had its UX developed by generations of iterative usage, I think that I was trying to make a point about how formality and usage interact, but this is a red herring and not at all a dilemma.

              I am still very interested in my final argument, though. I pointed out that, unlike here and now at Lobsters, the comment box of past Blogger was configured to show text in a fixed-width font. The post contains a screed against fixed-width fonts in programming, and so I found it not just ironic, but a UX failure on Blogger’s part. Ignoring the irony, the author’s reply is important: They couldn’t change it. UX is thus intimately connected to Free Software and user freedoms.

              1. 2

                I claimed that the dialect of English used in legal writing has poor UX.

                One can argue (and this is almost definitely at least partially correct) that english legalese is better adapted to the purposes of a particular type of person (the professional lawyer whose fluency in legalese, familiarity with best practices around litigation, awareness of common pitfalls and loopholes, ability to navigate the complex formalized social environment of a court, access to databases of precedent and command over teams of paralegals to navigate it on their behalf, and specialized skills in rhetoric allow him to command high pay and special legal privileges) than it is to the normal use case (an untrained layman trying to understand a contract). Law is an interesting field, especially english-style law, since you’ve got an impossibly large pool of technical debt (precedent, which often conflicts and which often must be bent in order to claim that it’s applicable to a case, giving lawyers considerable flexibility in using it) that is sometimes weaponized against opponents but much of the time simply gets in the way.

                Computing is similar, except that there’s no bar exam for programmers (nor is there disbarment, nor is there programmer-client privilege). We too inhabit a position of power over “regular people” that we got by spending a few decades studying arcane highly-formalized languages (like C++) and the strange formalized rules of conduct that go with them (like “create a branch, make your edits, commit them, push them, and then file a pull request upstream, but search jira for a possibly overlapping bug report before submitting a patch”). As professional developers, this works out really well for us. And we’re not liable for malpractice either. It kind of sucks for people who didn’t start programming at age nine, though, or who started at 9 and then stopped at 12 and tried to pick it up again at 18. Those folks will literally never catch up, unless we change the tooling to be more accessible.

              2. 2

                Reading this with a ligature font somewhat undermines the author’s point. I’ll stick to my old-fashioned keyboard :)

                https://skuz.xyz/stash/get/1d771f4ddd0ca81029c6d0f96aa7cb5fb4c1cb1364d8a50820e4ab2b07d5229d

                That aside, I really hope no-one makes a programming language with number agreement and English-like sentence flow and that requires a charpicker to write. Many people learn programming languages long after childhood, the age at which new languages can be picked up quickly. If the author’s dream programming language came true, it would either be exclusive to English speakers or take years to learn. A small language that can be written with few symbols is more accessible.

                Look at SQL for example. SQL is English-like but making it more “natural” would incur a lot of complexity and ambiguity.

                Plaintext just happens to work well for code, but many tools that can aid with expressivity can be stacked on top (ligature fonts for example). The author talks about his students finding <= unintuitive, but that’s hardly a problem if you do more than an hour of programming.

                1. 3

                  Even after thirty years of programming, and five years of professional programming, I find using a ligature font helps me distinguish between >= and =>. It’s a digraph like any other, and should be treated just as other old digraphs and trigraphs.

                2. 2

                  Yes, UX is important, but when UX conflicts with functionality, functionality usually wins. I will give an example.

                  https://github.com/rust-lang/rust/pull/21406 is a six-year-old Rust pull request. It proposes using Unicode characters just for compiler output, not even input. It failed because Unicode output support isn’t universal, and the proposal to detect Unicode output support using the locale wasn’t convincing. This is the reality. And the author of this article wants us to consider Unicode input…

                  1. 4

                    Unicode input is slick in Julia. You just type \cup and get ∪ in any editor with autocompletion.

                    1. 4

                      Julia is a great all-around example of how modern features can be integrated smoothly into a very “conventional” language. It integrates a JIT, good unicode support out of the box, a sane package & module system, an extremely usable REPL (including online documentation and color/highlighting)… It has replaced python as my go-to example of a language that cares about user experience, since the language designers also make an effort to use consistent naming and expose consistent interfaces.

                      Since being exposed to Julia, I’ve set a new bar for myself. When I develop new programming languages (which I do, from time to time, as experiments – they are always very fringe, and often there isn’t even enough interest to justify fixing major bugs), I won’t tell anybody about it unless I have a REPL that looks as polished as Julia’s.

                      At the same time – Julia is extremely conventional in its syntax. I think this accounts for much of its popularity, but I also think this conventional ALGOL-68-style syntax will hold it back from the full potential of a more flexible language. As far as I can tell, you can’t dump a couple macro rules into Julia to turn it into a reverse polish notation language or an APL-like language (the latter being potentially very nice).

                      1. 2

                        If you want to write Julia but in a different syntax you can use string macros, which sounds a bit shit, but there’s good support for just taking your string macro and making a REPL out of it, which is very nice.

                        The base syntax is more flexible than it might first appear, too, if you overload e.g. broadcasting or use juxtaposition multiplication.

                        Edit: for practical examples, check out APL.jl and LispSyntax.jl

                        1. 1

                          Fantastic!

                          I will need to get back into Julia, once work calms down a bit.

                        2. 1

                          I feel like the conventional syntax is a necessary evil for adoption, to bring Fortran/MATLAB/Python/R developers under the same roof as type theorists/PL nerds/lisp hackers. I really wish I could do pattern matching/de-structuring without macros though.

                          1. 1

                            Agreed. I wish we, as an industry, weren’t subject to so much time pressure that an interesting new syntax is a negative though. I dunno about y’all but back before college, learning new programming languages with bizarre syntaxes was literally what I did for fun.