I want to echo the mention in the comments of point-free style. The article tends to focus on domains other than programs—source control branches, CSS classes—though it does invoke anonymous functions as evidence. The example:
Klabnik (rightly) points out that the argument to filter is anonymous, and that’s good; we don’t have to encapsulate the functionality of this function into a named entity; it is only and exactly its behaviour.
But it’s worth noting that there are still two names in the sample function: there’s calculate, of course, but the argument to the closure requires the name i in order to work. Why i? Why not n? Presumably because it’s an integer?
We can raise the boogeyman: what if we need to change the type? Then i won’t make sense any more. But even more directly—and I think this is what Klabnik gets at also—simply introducing another named thing into your universe of objects is some cognitive overhead, and sometimes it’s nice to do without it.
This is where point-free operators/combinators are useful. I won’t try to ape Rust syntax, but in K we could write a function 0=2! that precisely means “mod 2 is equal to 0” without needing to name its argument. K syntax makes this particularly efficient and simple, but with a simple combinators library we can accomplish the same behaviour: for instance, here’s an example in Janet (using a combinators library of mine), that does the same thing:
(comp (partial = 0) (partial (flip mod) 2))
It is undoubtedly more verbose. We could shorten it by picking shorter names for partial and comp, for instance, but I want to highlight that there’s a particular value in avoiding names that’s worth exploring, entirely orthogonal to the question of character count.
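For readers without K or Janet to hand, here’s a rough Python rendering of the same shape. This is a sketch, not anyone’s library: comp and flip are hand-rolled below, and only functools.partial and the operator module are standard.

```python
from functools import partial
from operator import eq, mod

def comp(*fns):
    # Right-to-left composition: comp(f, g)(x) == f(g(x))
    def composed(x):
        for fn in reversed(fns):
            x = fn(x)
        return x
    return composed

def flip(fn):
    # Swap the two arguments of a binary function
    return lambda a, b: fn(b, a)

# "mod 2 is equal to 0", with no named argument in sight
is_even = comp(partial(eq, 0), partial(flip(mod), 2))

print(is_even(4))  # True
print(is_even(7))  # False
```

The argument never gets a name; the pipeline of combinators is the whole definition, which is exactly the point being made about i above.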
simply introducing another named thing into your universe of objects is some cognitive overhead, and sometimes it’s nice to do without it.
Yes, this is well said, thank you. I think why I didn’t expressly say this in the post is because, well, writing helps you think through something, and I kinda came up with this bit as I was writing the post. This plus the “but you do name components” stuff shifted what I assumed the original point of the post would be. Could probably re-do it and make it clearer. Anyway.
Point-free is a good thing to point at (pun intended) here, but so are de Bruijn indices, like Ruby’s “numbered parameters”.
Or you just use a predefined name like “it” in Groovy and Kotlin. And for multiple parameters, you can have a second one. E.g. a fold in Scala would be myList.fold(_ + _), where the second _ refers to the second parameter.
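For comparison, a rough Python analogue of that fold. Python has no _ placeholder syntax, so the closest point-free spelling is to pass a named binary function from the operator module instead of writing a lambda with named parameters:

```python
from functools import reduce
from operator import add

my_list = [1, 2, 3, 4]
# Roughly Scala's myList.fold(0)(_ + _): no lambda parameters to name
total = reduce(add, my_list, 0)
print(total)  # 10
```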
I appreciate this article, but I think it slightly (but importantly) misses the point.
Array languages have a capacity for incredibly terse code, and one of the ways to attain that level of terseness is by exploiting the puns and constraints covered here.
At the same time, the truly valuable thing about array languages is not really their concision, or at least not that you can write a program of that size in that few characters. There are more fundamental design choices—like arrays and conforming operations, or verb trains—that produce code that is concise, if not as concise as the famous Game of Life snippet, but still flexible and fast to boot.
In other words: what we should be reckoning with is not that actually APL is rubbish for writing real programs, but that we have allowed the most extreme and unrealistic depictions of its expressive power and utility to completely dominate how it’s marketed and discussed. There is, in fact, a version of Game of Life to be written in J that’s just as flexible and performant as the naive Java version that Hillel cites, and still rather simpler and more concise, because—for instance—there are no stinkin’ loops, and maybe because there are some natural opportunities for hooks and trains. It just isn’t in an absurdly impressive and unrealistically small number of characters.
I arrive at the same conclusion—it’s one of the rules of thumb I feel strongest about—but I think it can be expressed much more simply and practically.
Try to make as many things private as possible.
This is for two simple reasons:
you can change the interface of a private symbol at any time and know the blast radius of your change (this is why private symbols exist in the first place);
private symbols are trivially subject to static usage analysis, and so you can guarantee that a private symbol is unused and remove it entirely.
A natural recommendation stemming from the above dictum is to, when practical, prefer larger modules with fewer public symbols, rather than creating (for instance) “helper” modules whose logical function is entirely secondary to some other business logic module, for the sake of smaller, easier-to-navigate files.
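As a small illustration of the dictum (a hypothetical module; Python’s convention-based privacy is standing in here for a real private keyword), a larger module with one public symbol keeps the helpers’ blast radius contained:

```python
# orders.py (hypothetical): one public symbol, two private helpers.
# Renaming or deleting _validate/_total can't break any other module.
__all__ = ["place_order"]

def _validate(items):
    if not items:
        raise ValueError("empty order")

def _total(items):
    return sum(price for _, price in items)

def place_order(items):
    _validate(items)
    return {"items": items, "total": _total(items)}
```

Because only place_order escapes the module, static analysis (or even grep) can confirm whether a helper is unused and whether it’s safe to delete.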
But I prefer to emphasize private symbols rather than module size because that above recommendation only holds in programming languages with the traditional private/public two-way distinction. I have often wished, for example, for a language to expose an “only the following modules can import from here” pragma. If you had one of those, you could trivially define a small, easy-to-navigate file, whose contained functions were all naturally grouped with each other, and which nevertheless had “private” semantics, such that, for instance, static usage analysis would still be tractable. Another way of putting this: the traditional 1:1 mapping between files and modules hinders us here.
I think rust kind of does what you are describing. A crate is a hard encapsulation boundary.
A module (part of a crate; it corresponds to one file) can be used for encapsulation but is mostly an implementation detail of how you organize your code. You can use modules for privacy, but unless there is a good reason to (unsafe code), I tend to make everything public to the crate (pub(crate)) and use crates as hard encapsulation boundaries. You can choose to make a module public to a crate (or just export specific functions/structs/traits at the root). If you want to use a crate you need to declare it as a dependency, so it’s easy to figure out where a crate/set of modules can be used (either with grep, or cargo also has a command to print this directly). So this kind of fits with what you describe regarding restricting where a module can be imported.
My applications are made up of a small-to-medium number of crates that correspond to consciously chosen encapsulation boundaries (and libraries are usually a single crate).
You can also write your application/big project in a single crate and use modules for encapsulation. I’ve found that not to work well in practice. So you can still do the “wrong” thing, but the tools to enable the right one are there.
Yes, that does sound like it accomplishes the same thing. Unfortunately it seems like it also couples this functionality with dependency/package management, though. So for instance: if you had a single api endpoint handler with some helper functions, would you ever make that into its own crate? Or are they too heavyweight?
Most large codebases use Cargo workspaces, where you have multiple crates in a single repository. They depend on each other with path dependencies (simply <name> = { path = "../<name>" } in your Cargo.toml file). So there is no overhead like publishing to some registry or downloading dependencies from somewhere (or having to switch repositories; you just open up the repo root in your editor and can edit any of the crates in the workspace, and running build/test commands also operates on the entire workspace at once). It looks a bit like package management but it doesn’t have any operational overhead attached to it (splitting up applications into crates also happens to reduce compile times). It generally feels almost the same as working on a single crate.
The main overhead is that a crate requires its own directory (source files in /src), so it can fragment the codebase a bit when there is a huge number of them. Having to declare which crates depend on each other also adds a bit of ceremony, since it’s a separate (TOML) file and not right in the source code. Then again, this is exactly what provides that extra rigidity with regard to module dependencies that you describe.
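As a concrete sketch of the setup being described (all names hypothetical), a workspace root manifest plus a member crate’s path dependency look roughly like this:

```toml
# Cargo.toml at the repo root (hypothetical workspace)
[workspace]
members = ["app", "storage"]

# app/Cargo.toml: depend on a sibling crate by path;
# no registry, no publishing, just a directory next door
[package]
name = "app"
version = "0.1.0"
edition = "2021"

[dependencies]
storage = { path = "../storage" }
```

The [dependencies] entry is the “ceremony” mentioned above, and also the thing that makes crate-level dependency structure explicit and checkable.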
I work in a field where I never really write something like an API endpoint so I don’t really have an intuition for that.
I would say it depends on how complex what you are doing is. For example, if you have 100 endpoints and each endpoint has only maybe 2 or 3 functions, with a few hundred LOC total, I probably wouldn’t add one crate for each (though maybe there is a useful way to group them). Once you get to about 1k LOC that are self-contained, it definitely makes sense IMO.
I guess the post this post responds to was about making modules big. I use crates as big modules/encapsulation boundaries, not for small ones.
What a disappointingly shallow article. Not a single interview or historical fact. I’d actually love a deeply researched article about the rise of functional programming in the mainstream!
I was at least expecting a take of some kind with a title that provocative. But it reads like it was written by an LLM after the LLM was prompted with maybe a paragraph of text from the Haskell Wikipedia page. All this says is that Haskell is terse (and terseness is a land of contrasts, sometimes scary and sometimes cool!) and that Haskell had vague influences on other unspecified programming languages. I don’t get the point.
I understand the use of JavaScript as an attempt at a lingua franca, but the examples are so much more verbose than in languages where these concepts are first-class that it feels like a disservice to the ideas, making them look complicated when they aren’t. In languages that prioritize this style, lambdas are often so brief that there isn’t this need to name all of these variables and functions, since the usage of these combinators is sufficiently terse, expressive, and readable.
That is to say, I would have loved to see each example in an array and/or functional language next to the JavaScript as a comparison to highlight the concepts in their best light. …That & a small-viewport-friendly stylesheet.
Thank you, and I agree about the better-suitedness of array languages. That said, I specifically avoided adding array examples because that sort of post (rather understandably) becomes read as a sell on array languages, and the subsequent discussion tends to be about the languages rather than this one little bit.
I do agree that, if someone were to be totally sold on this style of programming, JavaScript is probably the worst language to do it in, which is an unfortunate tension.
I’m mostly responding to say that I’ve fixed the style sheet and it should now be much better on mobile!
Certain readers (including myself) would appreciate the Rosetta Stone of examples. It was examples like that that led me into functional programming a decade ago. Seeing the ideas, I wanted to use them, but shoehorning the concepts into JavaScript to make them work just made everything nastier, and then coworkers couldn’t follow it. Eventually I realized my path was to leave JS: if these patterns clicked as they did, I needed to be pursuing work in languages that support them first-class. Seeing the “real” examples not only showed the concepts in the optimal light for me to understand what a code base like that would look like, but also meant I could re-reference posts to either translate syntax I didn’t yet understand from JavaScript, or translate to JavaScript to help explain concepts to those outside my space. I think all through history we have relied on such translations as a time capsule for our history of languages—spoken or not. Recently I’ve had on-and-off interest in the array languages, and the posts that show a concept in Haskell or SML or whatever are the ones that click the fastest for me, getting to see the mapping from Language α to Language β.
Nice overview of the basics. I appreciate the honesty about the traditional names. It’s too bad that Smullyan’s names never caught on.
For comparison, I have a table of names here. I should add some tacit vocabularies; I haven’t found category theorists or logicians identifying combinators like under.
I was slightly disappointed to discover that this piece was not about tag systems, the abstract machines. I recommend them, too; they’re really fun to think about and make a fun project when learning a new language. https://en.wikipedia.org/wiki/Tag_system
Another solution, not mentioned here, which we used in my last place: one-off tags for the right side of the with clauses to help differentiate. It worked pretty well.
with {:get_user, {:ok, user}} <- {:get_user, get_user()},
{:get_permissions, {:ok, permissions}} <- {:get_permissions, get_permissions(user)} do
...
else
{:get_user, :err} -> ...
{:get_permissions, :err} -> ...
end
I ended up doing something similar in my most recent Elixir/LiveView project, except that instead of using tag-tuples, I used functions that’d return known invalid atoms.
Not sure I get the point here. The division of vocabulary into “content” and “grammar” words is non-scientific, to say the least; the assertion that this is what makes a language would be surprising to linguists. Then again, has anybody ever argued that emoji is a language?
I’ve got a tiny program I wrote: er. I use it for zettelkasten: when you run it, every text file in the directory has outgoing links added to the bottom of the file and incoming links added to the top. The actual viewing and editing, you can just use vim with gf to navigate. Works pretty well!
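I don’t know er’s actual link syntax, but the core of the backlink idea can be sketched in a few lines of Python (assuming [[wiki-style]] links in .txt files; purely illustrative, not the real implementation):

```python
import re
from pathlib import Path
from collections import defaultdict

LINK = re.compile(r"\[\[([^\]]+)\]\]")

def incoming_links(directory):
    # Map each note name to the notes that link to it
    incoming = defaultdict(list)
    for path in sorted(Path(directory).glob("*.txt")):
        for target in LINK.findall(path.read_text()):
            incoming[target].append(path.stem)
    return dict(incoming)
```

A real tool would then rewrite each file, appending its outgoing links at the bottom and its incoming links at the top, as described above.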
Is this the note-taking “app” you wrote the spec for in pantagruel? I thought that was so dang cool, I now watch the pantagruel repo. I’ll have to check out this implementation.
This was more language-specific than I expected from the title. It’s more saying, “Implementing singletons by modifying class instantiation behavior in Python is a bad idea, so here are some better ways to do it in Python,” than, “The idea of singletons is inherently bad.”
I see it as a general argument against singletons which happens to be presented via Python code examples. Switching the language wouldn’t change the core argument, which is that singletons conflate “this class models that kind of thing” and “here’s a limit on how many instances the program is allowed to have of that class”.
The author themselves tagged the article as python, which is why I suggested adding the same tag here. This is about as language agnostic as the article gets:
The first problem with Singleton is that it encourages you to mix together two different ideas into one class.
Even that statement assumes the paradigm in question is a class-based paradigm. Singletons are possible in non-class paradigms as well, which this article does not address.
Miško Hevery wrote a similar article, “Singletons are Pathological Liars”, back in 2008. His premise was basically the same: it creates too much confusion between the class instance and the class itself. Shortly after that, he designed AngularJS services as singletons, probably assuming that he sidestepped the class/instance problem by making users define singletons in factory functions that his framework would run only once.
As it turns out, the main problem with singletons is that they are basically global mutable states. For those who haven’t been bitten by global mutable state, the problem with it is that it erodes separation of concerns and creates situations where bugs caused by undesirable mutations can’t be traced because the mutations occurred in some downstream consumer of the singleton e.g. as a property assignment. One can’t really set a breakpoint or log anything in such situations, so diagnosing them often requires a great deal of searching. Subsequent front-end frameworks rightly went to great lengths to avoid such singletons. Elm and React went so far as to replace the application state entirely in their update processes.
The rest of the article is pretty Python specific, from functools to the details of which value types are mutable and which are not.
I don’t think the conflation is inherent in the concept of singletons, though, and it doesn’t show up as often in other languages. Taking Java as an example, a very common way to do singletons is to define an ordinary class with an ordinary constructor, where the class contains no logic whatsoever about enforcing a single instance, and rely on a dependency injection framework (Spring, Guice, etc.) to call the constructor and pass the instance to classes that need it.
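The same separation can be sketched in Python without any framework (names hypothetical; the point is just that “there is one instance” is decided at the composition root, not inside the class):

```python
class Config:
    # An ordinary class: no singleton machinery inside it
    def __init__(self, debug=False):
        self.debug = debug

class Server:
    # The dependency is passed in, not fetched from a global
    def __init__(self, config):
        self.config = config

# The "single instance" decision lives here, at wiring time.
# Tests remain free to construct as many Configs as they like.
config = Config(debug=True)
a = Server(config)
b = Server(config)
print(a.config is b.config)  # True
```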
I think the context that you (and I; I am inferring from his description of Singleton) are missing, is: the definition of the Singleton pattern is about defining a class such that whenever it is instantiated, it returns the same single object. In other words, a singleton is not an object, it’s a class and an object.
I have never heard of this definition of singleton; I tend to use the lower-case s form of the word to mean ‘a single global object’. It seems insane to me that you would denature a class to the extent that instantiation is not actually creating new objects. But it does seem like the sort of nuttiness that our OO-obsessed forebears would have got up to and I’m thankful to Ned for telling us never to do it again, if so.
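For the record, that “denatured” instantiation really is how the classic pattern is usually rendered in Python; a minimal sketch of the pattern being criticized, not a recommendation:

```python
class Logger:
    _instance = None

    def __new__(cls):
        # Every "instantiation" quietly returns the same object
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

a = Logger()
b = Logger()
print(a is b)  # True: Logger() never creates a second object
```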
Does the world’s focus on the value added by new things mean that maintenance of old things is not researched as much? How many frameworks for the fast creation of a blag or a todo list exist, when much of what I’ve done over my career is maintenance of large, successful systems?
How much better would security, reliability, and maintainability be if more folks focused on this aspect of the craft?
What exactly are you responding to? This is the creator of a software system exploring a new design. The original is still maintained and quite stable. And needless to say, it has nothing to do with blogs or todo lists; it’s an iteration on a solution to an incredibly complex problem.
Backwards compatible (it acts as a drop-in replacement for Ninja)
Exploring a very different point in the design space, to the point that there would be very little shared code between the two implementations if they were in the same program.
The design of n2 looks really nice and addresses a lot of the problems with Ninja. In particular, the model should be trivial to extend to cloud-based caching and builds. Given the author’s track record, I have a pretty high confidence that it will have a massive impact.
I use Ninja a lot and it probably saves me at least a minute of waiting time in total each day that I use it. Even using it for 100 days a year, over the 15 years that I’ve been using it, that’s 25 hours of my productivity that Ninja has given me (probably more). Scaled up across all of the users of Ninja, that’s a huge win.
The FreeBSD package builders switched a while ago to using Ninja for all CMake-based projects, which shaved a pretty noticeable amount off the total time to build a complete package set.
If the author wasted the rest of his life doing nothing productive or writing ‘frameworks for fast creation of a blag or a todo list’, he’d still probably have a more positive impact on the industry than most of the rest of us.
Several folks had expressed some interest in this project, so I wanted to post this release: the document checker now does type checking as well as binding checking.
For those unfamiliar: Pantagruel is an experimental language for specifying system behaviours.
Up till this point, it had a document checker that would ensure that a user had defined every symbol they used (though not necessarily before they used it).
This adds a type system. It doesn’t behave exactly like the type system of a programming language; because documents aren’t executed, it can be a lot more lenient. But it can be a very useful tool for ensuring that document authors haven’t accidentally introduced ambiguities or misunderstandings.
It’s extremely janky. EXTREMELY janky. And from an academic standpoint, I’m sure it’s completely unsound. But as an experiment, it’s been quite interesting so far.
I love coding interviews because they are so much fun to do as a candidate. That said, stuff like this:
firmly believe they are the most objective way to evaluate candidates
This presumes candidates have experience like yours (or that experience like yours is what you’re trying to measure). I had a very senior coworker fail an interview once due to lack of knowledge of parsing strategies / parse trees. This person is not just “can code” but also can architect, lead, etc., but failed due to a question with the assumption that “anyone smart must have been exposed to parsing”.
That’s why I always prefer a portfolio-based interview. Then the problem they are explaining to you is one they know how to solve because they have already solved it! Of course, this doesn’t work well if you want to be able to hire career-long corporatists who have no portfolio. There is no one size fits all interview.
I’ve only done a couple of coding interviews and it turns out I have pretty bad performance anxiety and forget basic stuff or get so worked up I can’t think straight.
Apparently this would mean that I have faked the last 20+ years, dozens of open source projects, ~15 conference presentations, and several peer-reviewed papers.
And frankly if you think that’s the case then you should hire me because I’m obviously an amazing social engineer. :)
Repeated claims of “interviews are so hard, I get anxious, etc. etc.” in order to get an easier-to-pass process are also social engineering.
(I’m not saying you’re doing this…just you jogged my brain with your phrasing at the end there.)
My experience has been that as we’ve tried to be more accommodating as an industry, we’re getting more and more bad or ineffective actors into our systems. Given that my favored fair approach is too ruthless for most companies–hire anybody with a pulse who can fog a mirror and open an IDE, fire them after a week if they suck, repeat as needed–I think we’re definitely in for some trouble. This is compounded so much more than people wish to admit with the current implementation of DEI efforts, employee resource committees (or whatever your local version of officially-sanctioned factional support groups is), and so forth.
I don’t know what the right answer is, but I’m pretty sure dropping interviews ain’t it.
That’s not too brutal; HR doesn’t like it because it’s more work for them, and management doesn’t like it because they’re afraid to fail unconventionally. Also, then they’d need to work out a good onboarding process. Obviously you have to tell people you need them to do a week-long paid interview.
I do like those, but I think they should be paid and I usually think they’re poorly designed/scoped. And when you’re doing a lot of interviewing at the same time, it can get annoying having that pile of stuff in the background.
Doesn’t that produce way too much overhead with training the new hires? Someone has to do the onboarding, especially with complex projects (and aren’t they all, nowadays?) or junior hires! Usually it takes a while (a week to a month) before someone is properly up and running anyway. If you have to do that with your shotgun approach to hiring, that would put a big cap on overall productivity while you’re in “hiring mode”.
This presumes candidates have experience like yours (or that experience like yours is what you’re trying to measure).
Is there any interviewing technique that doesn’t?
That’s why I always prefer a portfolio-based interview.
This assumes that the candidate has time to build a portfolio, or is comfortable talking about their previous job’s likely-proprietary code. It also biases towards candidates with interests like yours.
And it’s not uniform, which allows a huge amount of subjectiveness to creep in.
There is no one size fits all interview.
And yet, giving different types of interviews to different candidates is a non-starter for hopefully obvious reasons.
Yes. @singpolyma is proposing a portfolio review, which encourages the candidate to teach the interviewers about a subject they are unlikely to be as familiar with as the candidate. That’s a very relevant skill that code challenges can’t demonstrate.
It also biases towards candidates with interests like yours.
I once had a candidate walk us through her ClojureScript project. We didn’t know ClojureScript or have any particular interest in it. But it was the only non-proprietary code she could show us. She was able to demonstrate both her skill in writing software and, just as importantly, an ability to explain her decision making and values in the process. We had to swallow our pride and ask some questions about syntax, but it was one of the best interviews I’ve ever conducted.
Yes, there was a lot of subjectivity in that interview. That’s life. But interviewers will practice an open or closed mind regardless of format. It’s the code challenge that’s the greater refuge for closed mindedness.
I would care to bet that part of the intention behind ‘with interests like yours’ is an interest in having programming projects at home. I know this is not a universal opinion, but I think that having personal projects is irrelevant to the performance of a senior engineer.
I guess that means you either get very few applicants or that you’re so selective that almost all get filtered out and your search ends as soon as you find one that passes.
On multiple occasions, I’ve found myself in the intermediate situation where multiple good candidates showed up at the same time, but I still had to choose one. In that case, even if you allow people to demonstrate themselves with some freedom, you also need a basis in order to carry out the unfortunate task of comparing them.
If you have 2 candidates who have impressed you, why does one have to be better or worse, and what makes you think you have the process or information to determine who is better? Just pick a winner at random. Or pick the one who used a sans serif font on their resume, or some other arbitrary factor.
Ironically, as strange as that sounds, both your suggestion of hiring them both and @fly’s suggestion of flipping a coin would’ve worked quite well in my case in hindsight…
There is, but I don’t know how useful it is yet. This directory contains some more docs, including a full (possibly out of date!) language reference.
In general though, there need to be deeper docs that more effectively communicate what the tool is actually designed for. The existing docs don’t do a good job of that yet. Most people get the idea that it’s supposed to be used to prove things, a la TLA+ or Alloy, or that it’s supposed to be executable. Both are reasonable assumptions. I need to continue to hone both my understanding of what this is good for and my ability to communicate that to others.
In addition to that, I am still very open to the possibility of being able to prove more things with it than you can prove right now. For instance, I’d be very open to adding a type system.
I suggest to the author, as I suggest to anybody who finds themselves imagining that everyone around them (except for themselves) must be pathologically stupid, that they suppose that all of the things they decry are motivated by sensible, relatable concerns, which nevertheless might not be the same concerns as are primary to the author, or which might be realized via methods which nevertheless might not be the ones that the author prefers or knows.
Devil’s advocate even though I think your approach is the right default one:
Does this mean it’s impossible for large groups of people to do “crazy” things? Empirically, historically, this seems false. How then do you distinguish between those cases and cases where you simply lack context?
“motivated by sensible, relatable concerns”
What if the relatable concern is wanting to feel cool, and use something fresh and new, or a thing that Google uses and people talk about at conferences? I’m not just being glib and dismissive. But if that is the impulse – and I think it drives deeper than most people admit – you don’t necessarily get sensible. Or… there’s plain old not knowing about simpler ways.
People don’t have to be dumb to partake of sub-optimal trends.
Does this mean it’s impossible for large groups of people to do “crazy” things? Empirically, historically, this seems false. How then do you distinguish between those cases and cases where you simply lack context?
I normally try creating a Fermi Estimate for the problem. If it differs substantially from the scope of work then I assume there’s some missing context.
I normally try creating a Fermi Estimate for the problem. If it differs substantially from the scope of work then I assume there’s some missing context.
For example, how would this work to answer a question like “Are far too many companies using Angular when something simpler could have solved their problem better and saved many engineering hours?”
I don’t know if the author really thinks that way about people or it’s just a way to emphasize the unnecessary growing complexity of some systems (given his personal experience).
This is, as others have noted, more or less complete nonsense. Every assertion is hyperbolic and none is justified.
That said, there is a tiny kernel of an interesting idea here.
Arguably, the current distribution of web browser usage is Bad. Google has done quite a few questionable, embrace-and-extend sort of things in the recent past, and even if you don’t think that they’re evil, you might want to avoid a monopoly in the browser market considering its centrality to modern life.
The interesting point that the author glances across is that, because web development is so complex today, it’s highly unlikely that the incumbents will ever be realistically challenged. Who can we imagine writing a new HTML rendering engine or JavaScript virtual machine in the year of our lord 2022?
On the other hand, if the standards for web development were simpler, if the standard method involved more reliance on vanilla HTML and CSS, we might imagine that it would be easier to compete and to diversify the browser market. Which would arguably be a good thing.
I want to echo the mention in the comments of point-free style. The article tends to focus on domains other than programs—source control branches, CSS classes—though it does invoke anonymous functions as evidence. The example:
Klabnik (rightly) points out that the argument to
filteris anonymous, and that’s good; we don’t have to encapsulate the functionality of this function into a named entity; it is only and exactly its behaviour.But it’s worth noting that there are still two names in the sample function: there’s
calculate, of course, but the argument to the closure requires the nameiin order to work. Whyi? Why notn? Presumably because it’s an integer?We can raise the boogeyman: what if we need to change the type? Then
iwon’t make sense any more. But even more directly—and I think this is what Klabnik gets at also—simply introducing another named thing into your universe of objects is some cognitive overhead, and sometimes it’s nice to do without it.This is where point-free operators/combinators are useful. I won’t try to ape Rust syntax, but in K we could write a function
0=2!that precisely means “mod 2 is equal to 0” without needing to name its argument. K syntax makes this particularly efficient and simple, but with a simple combinators library we can accomplish the same behaviour: for instance, here’s an example in Janet (using a combinators library of mine), that does the same thing:It is undoubtedly more verbose. We would shorten it by picking shorter names for
partialandcompfor instance, but I want to highlight that there’s a particular value worth exploring of avoiding names, entirely orthogonal to the question of character count.Yes, this is well said, thank you. I think why I didn’t expressly say this in the post is because, well, writing helps you think through something, and I kinda came up with this bit as I was writing the post. This plus the “but you do name components” stuff shifted what I assumed the original point of the post would be. Could probably re-do it and make it clearer. Anyway.
Point-free is a good thing to point at (pun intended) here, but also: de Bruijn indices, like Ruby’s “numbered parameters”.
Or you just use a predefined name like “it” in Groovy and Kotlin. And for multiple parameters, you can have a second one. E.g. a fold in Scala would be myList.fold(_ + _), where the second _ refers to the second parameter.
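Python has no placeholder syntax like Scala’s _, but passing an already-named operator instead of writing a lambda gets a similar effect for the fold example. A rough sketch:

```python
from functools import reduce
from operator import add

my_list = [1, 2, 3, 4]

# With a lambda, both parameters must be named:
total_named = reduce(lambda acc, x: acc + x, my_list)

# Passing operator.add names neither parameter,
# much as Scala's _ + _ avoids doing:
total_pointfree = reduce(add, my_list)

print(total_named, total_pointfree)  # 10 10
```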
I appreciate this article, but I think it slightly (but importantly) misses the point.
Array languages have a capacity for incredibly terse code, and one of the ways to attain that level of terseness is by exploiting the puns and constraints covered here.
At the same time, the truly valuable thing about array languages is not really their concision, or at least not that you can write a program of that size in that few characters. There are more fundamental design choices—like arrays and conforming operations, or verb trains—that produce code that is concise, if not as concise as the famous Game of Life snippet, but still flexible and fast to boot.
In other words: what we should be reckoning with is not that actually APL is rubbish for writing real programs, but that we have allowed the most extreme and unrealistic depictions of its expressive power and utility to completely dominate how it’s marketed and discussed. There is, in fact, a version of Game of Life to be written in J that’s just as flexible and performant as the naive Java version that Hillel cites, and still rather simpler and more concise, because—for instance—there are no stinkin’ loops, and maybe because there are some natural opportunities for hooks and trains. It just isn’t in an absurdly impressive and unrealistically few number of characters.
I arrive at the same conclusion—it’s one of the rules of thumb I feel strongest about—but I think it can be expressed much more simply and practically.
Try to make as many things private as possible.
This is for the simple reasons that:
A natural recommendation stemming from the above dictum is to, when practical, prefer larger modules with fewer public symbols, rather than creating (for instance) “helper” modules whose logical function is entirely secondary to some other business logic module, for the sake of smaller, easier-to-navigate files.
But I prefer to emphasize private symbols rather than module size because that above recommendation only holds in programming languages with the traditional private/public two-way distinction. I have often wished, for example, for a language to expose an “only the following modules can import from here” pragma. If you had one of those, you could trivially define a small, easy-to-navigate file, whose contained functions were all naturally grouped with each other, and which nevertheless had “private” semantics, such that, for instance, static usage analysis would still be tractable. Another way of putting this: the traditional 1:1 mapping between files and modules hinders us here.
I think Rust kind of does what you are describing. A crate is a hard encapsulation boundary.
A module (part of a crate, corresponding to one file) can be used for encapsulation but is mostly an implementation detail of how you organize your code. You can use modules for privacy, but unless there are good reasons to (unsafe code) I tend to make everything public to the crate (pub(crate)) and use crates as hard encapsulation boundaries. You can choose to make a module public to a crate (or just export specific functions/structs/traits at the root). If you want to use a crate you need to declare it as a dependency, so it’s easy to figure out where a crate/set of modules can be used (either with grep, though cargo also has a command to print this directly). So this kind of fits with what you describe regarding restricting where a module can be imported.
My applications are made up of a small-to-medium number of crates that correspond to consciously chosen encapsulation boundaries (and libraries are usually a single crate).
You can also write your application/big project in a single crate and use modules for encapsulation. I found that to not work well in practice. So you can still do the “wrong” thing, but the tools to enable this are there.
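The hypothetical “only the following modules can import from here” pragma doesn’t exist in mainstream languages, but its core check is easy to sketch. Everything here (module names, the allow-list) is invented for illustration; a real version would hook the language’s import machinery rather than take the importer’s name as an explicit argument:

```python
# Hypothetical allow-list for a module named "orders._internal":
ALLOWED_IMPORTERS = {"orders.api", "orders.admin"}

def may_import(importer_name: str) -> bool:
    """Return True if the named module is permitted to import us.

    A real pragma would obtain the importer from the import machinery;
    passing it explicitly keeps this sketch testable.
    """
    return importer_name in ALLOWED_IMPORTERS

print(may_import("orders.api"))   # True
print(may_import("billing.web"))  # False
```

The payoff described above follows directly: because the allow-list is static, “who can see this module” becomes a grep-able fact rather than a convention.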
Yes, that does sound like it accomplishes the same thing. Unfortunately it seems like it also couples this functionality with dependency/package management, though. So for instance: if you had a single api endpoint handler with some helper functions, would you ever make that into its own crate? Or are they too heavyweight?
Most large codebases use Cargo workspaces, where you have multiple crates in a single repository. They depend on each other with path dependencies (simply <name> = { path = "../<name>" } in your Cargo.toml file). So there is no overhead like publishing to some registry or downloading dependencies from somewhere (or having to switch repositories; you just open up the repo root in your editor and can edit any of the crates in the workspace, and running build/test commands also operates on the entire workspace at once). It looks a bit like package management but it doesn’t have any operational overhead attached to it (splitting up applications into crates also happens to reduce compile times). It generally feels almost the same as working on a single crate.
The main overhead is that a crate requires its own directory (source files in /src), so it can fragment the codebase a bit when there is a huge number of them. Having to declare which crates depend on each other also adds a bit of ceremony, since it’s a separate (TOML) file and not right in the source code. Then again, this is exactly what provides that extra rigidity with regards to module dependencies that you describe.
I work in a field where I never really write something like an API endpoint so I don’t really have an intuition for that.
I would say it depends on how complex what you are doing is. For example, if you have 100 endpoints and each endpoint has only maybe 2 or 3 functions with a few hundred LOC total, I probably wouldn’t add one crate for each (maybe there is a useful way to group them). Once you get to about 1k LOC that are self-contained, it definitely makes sense IMO.
I guess the post was about making modules big. I use crates as big modules/encapsulation boundaries, not for small ones.
What a disappointingly shallow article. Not a single interview or historical fact. I’d actually love a deeply researched article about the rise of functional programming in the mainstream!
Agreed. Not even a single mention of burritos, shameful!
This joke is the best joke.
I was at least expecting a take of some kind with a title that provocative. But it reads like it was written by an LLM after the LLM was prompted with maybe a paragraph of text from the Haskell Wikipedia page. All this says is that Haskell is terse (and terseness is a land of contrasts, sometimes scary and sometimes cool!) and that Haskell had vague influences on other unspecified programming languages. I don’t get the point.
I understand the use of JavaScript to try to reach some lingua franca, but the examples are so much more verbose than in languages where these concepts are first-class that it feels like a disservice to the ideas, making them look complicated when they aren’t. In languages that prioritize this style, lambdas are often so brief that there isn’t this need to name all of these variables & functions, as the usage of these combinators is sufficiently terse, expressive, & readable.
That is to say, I would have loved to see each example in an array and/or functional language next to the JavaScript as a comparison to highlight the concepts in their best light. …That & a small-viewport-friendly stylesheet.
Thank you, and I agree about the better-suitedness of array languages. That said, I specifically avoided adding array examples because that sort of post (rather understandably) becomes read as a sell on array languages, and the subsequent discussion tends to be about the languages rather than this one little bit.
I do agree that, if someone were to be totally sold on this style of programming, JavaScript is probably the worst language to do it in, which is an unfortunate tension.
I’m mostly responding to say that I’ve fixed the style sheet and it should now be much better on mobile!
Certain readers (including myself) would appreciate the Rosetta Stone of examples. It was examples like that that led me into functional programming a decade ago. Seeing the ideas, I wanted to use them, but shoehorning the concepts into JavaScript to make them work just made everything nastier & then coworkers couldn’t follow it. Eventually I realized my path was to leave JS &, if these patterns clicked as they did, I needed to be pursuing work in languages that support them first-class. Seeing the ‘real’ examples then not only showed the concepts in the optimal light for me to understand what a code base like that would look like, but also meant I could re-reference posts to either translate syntax I didn’t yet understand from JavaScript, or translate to JavaScript to help explain concepts to those outside my space. I think all through history we have relied on such translations as a time capsule for our history of languages—spoken or not. Recently I’ve had on-&-off interest in the array languages, & the posts that show a thing in Haskell or SML or whatever are the ones that click the fastest for me, getting to see the mapping from Language α to Language β.
Well… you make a good argument! Maybe there’s a supplementary article in the making, then.
I would read it! 😃
Nice overview of the basics. I appreciate the honesty about the traditional names. It’s too bad that Smullyan’s names never caught on.
For comparison, I have a table of names here. I should add some tacit vocabularies; I haven’t found category theorists or logicians identifying combinators like under.
Thank you—FYI, what I’m calling under is Psi.
I was slightly disappointed to discover that this piece was not about tag systems, the abstract machines. I recommend them, too; they’re really fun to think about and make a fun project when learning a new language. https://en.wikipedia.org/wiki/Tag_system
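They do make a nice small project. As a taste, here is a minimal Python sketch of a 2-tag system, run with the well-known Collatz-computing rules (a→bc, b→a, c→aaa), under which a word of n a’s walks through the (shortcut) Collatz sequence for n:

```python
def run_tag_system(word, rules, deletion=2, max_steps=1000):
    """Run a tag system: append the production for the word's first
    symbol, then delete `deletion` symbols from the front. Halt when
    the word gets shorter than `deletion` symbols (or steps run out)."""
    for _ in range(max_steps):
        if len(word) < deletion:
            break
        word = word[deletion:] + rules[word[0]]
    return word

# Classic Collatz 2-tag system: "a" * n simulates the sequence for n.
collatz_rules = {"a": "bc", "b": "a", "c": "aaa"}
print(run_tag_system("aaa", collatz_rules))  # "a" (n has reached 1)
```

Starting from "aaa", the word passes through lengths corresponding to 3 → 5 → 8 → 4 → 2 → 1 before halting with a word shorter than the deletion number.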
Fun fact, this is a proof vim is Turing complete:
Another solution, not mentioned here, which we used at my last place: one-off tags for the right side of the with clauses to help differentiate. It worked pretty well.
FWIW, I have always preferred nested case to this style. But it does work and it isn’t ambiguous.
I ended up doing something similar in my most recent Elixir/LiveView project, except that instead of using tag-tuples, I used functions that’d return known invalid atoms.
https://github.com/yumaikas/dream_crush_score/blob/master/lib/dream_crush_score/room/room.ex#L375-L390
Not sure I get the point here. The division of vocabulary into “content” and “grammar” words is non-scientific, to say the least; the assertion that this is what makes a language would be surprising to linguists. Then again, has anybody ever argued that emoji is a language?
I agree, but the content (lexical) and grammar (function) word distinction is sometimes used by people in the domain of semantics.
I’ve got a tiny program I wrote: er. I use it for zettelkasten: when you run it, every text file in the directory has outgoing links added to the bottom of the file and incoming links added to the top. For the actual viewing and editing, you can just use vim, with gf to navigate. Works pretty well!
https://sr.ht/~subsetpark/erasmus/
Is this the note-taking “app” you wrote the spec for in pantagruel? I thought that was so dang cool, I now watch the pantagruel repo. I’ll have to check out this implementation.
Yes it is, and thank you for the kind words!
If there was a Nix Flake, I’d give it a go right now, but I don’t know enough about Zig tooling to check it out this second :(
This was more language-specific than I expected from the title. It’s more saying, “Implementing singletons by modifying class instantiation behavior in Python is a bad idea, so here are some better ways to do it in Python,” than, “The idea of singletons is inherently bad.”
I see it as a general argument against singletons which happens to be presented via Python code examples. Switching the language wouldn’t change the core argument, which is that singletons conflate “this class models that kind of thing” and “here’s a limit on how many instances the program is allowed to have of that class”.
The author themselves tagged the article as python, which is why I suggested adding the same tag here. This is about as language agnostic as the article gets:
Even that statement assumes the paradigm in question is a class-based paradigm. Singletons are possible in non-class paradigms as well, which this article does not address.
Miško Hevery wrote a similar article, singletons are pathological liars back in 2008. His premise was basically the same: it creates too much confusion between the class instance and the class itself. Shortly after that, he designed AngularJS services as singletons, probably assuming that he sidestepped the class/instance problem by making users define singletons in factory functions that his framework would run only once.
As it turns out, the main problem with singletons is that they are basically global mutable states. For those who haven’t been bitten by global mutable state, the problem with it is that it erodes separation of concerns and creates situations where bugs caused by undesirable mutations can’t be traced because the mutations occurred in some downstream consumer of the singleton e.g. as a property assignment. One can’t really set a breakpoint or log anything in such situations, so diagnosing them often requires a great deal of searching. Subsequent front-end frameworks rightly went to great lengths to avoid such singletons. Elm and React went so far as to replace the application state entirely in their update processes.
The rest of the article is pretty Python specific, from functools to the details of which value types are mutable and which are not.
I don’t think the conflation is inherent in the concept of singletons, though, and it doesn’t show up as often in other languages. Taking Java as an example, a very common way to do singletons is to define an ordinary class with an ordinary constructor, where the class contains no logic whatsoever about enforcing a single instance, and rely on a dependency injection framework (Spring, Guice, etc.) to call the constructor and pass the instance to classes that need it.
I think the context that you (and I; I am inferring from his description of Singleton) are missing, is: the definition of the Singleton pattern is about defining a class such that whenever it is instantiated, it returns the same single object. In other words, a singleton is not an object, it’s a class and an object.
I have never heard of this definition of singleton; I tend to use the lower-case s form of the word to mean ‘a single global object’. It seems insane to me that you would denature a class to the extent that instantiation is not actually creating new objects. But it does seem like the sort of nuttiness that our OO-obsessed forebears would have got up to and I’m thankful to Ned for telling us never to do it again, if so.
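To make the pattern under discussion concrete, here is a minimal Python sketch of exactly that “class and an object” definition: a class whose instantiation always hands back one shared object, next to the plainer alternative of an ordinary class with a single module-level instance. The class names are invented for illustration:

```python
class SingletonConfig:
    """The criticized pattern: __new__ always returns the same object."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

a = SingletonConfig()
b = SingletonConfig()
print(a is b)  # True: "constructing" twice yields one object

class Config:
    """The alternative: an ordinary class; the one-instance limit
    lives outside the class, as a module-level convention."""

config = Config()  # the single shared instance, by convention
```

The first class is the denatured kind: calling it looks like creating a new object but isn’t. The second keeps “what the class models” and “how many instances exist” as separate decisions.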
Does the world’s focus on the value added by new things mean that maintenance of old things is not researched as much? How many frameworks for the fast creation of a blag or a todo list exist, when much of what I’ve done over my career is maintenance of large, successful systems?
How much better would security, reliability, and maintainability be if more folks focused on this aspect of the craft?
What exactly are you responding to? This is the creator of a software system exploring a new design. The original is still maintained and quite stable. And needless to say, it has nothing to do with blogs or todo lists; it’s an iteration on a solution to an incredibly complex problem.
And, in particular, it is a new design that is:
The design of n2 looks really nice and addresses a lot of the problems with Ninja. In particular, the model should be trivial to extend to cloud-based caching and builds. Given the author’s track record, I have a pretty high confidence that it will have a massive impact.
I use Ninja a lot and it probably saves me at least a minute of waiting time in total each day that I use it. Even using it for 100 days a year, over the 15 years that I’ve been using it, that’s 25 hours of my productivity that Ninja has given me (probably more). Scaled up across all of the users of Ninja, that’s a huge win.
The FreeBSD package builders switched a while ago to using Ninja for all CMake-based projects, which shaved a pretty noticeable amount off the total time to build a complete package set.
If the author wasted the rest of his life doing nothing productive or writing ‘frameworks for fast creation of a blag or a todo list’, he’d still probably have a more positive impact on the industry than most of the rest of us.
I notice the word “relation” for functional dependencies. Is there a way to encode binary relations? I would think of trying something like:
Yep, that’s exactly what I would do!
Several folks had expressed some interest in this project, so I wanted to post this release: the document checker now does type checking as well as binding checking.
For those unfamiliar: Pantagruel is an experimental language for specifying system behaviours.
Up till this point, it had a document checker that would ensure that a user had defined every symbol they used (though not necessarily before they used it).
This adds a type system. It doesn’t behave exactly like the type system of a programming language; because documents aren’t executed, it can be a lot more lenient. But it can be a very useful tool for ensuring that document authors haven’t accidentally introduced ambiguities or misunderstandings.
It’s extremely janky. EXTREMELY janky. And from an academic standpoint, I’m sure it’s completely unsound. But as an experiment, it’s been quite interesting so far.
Examples are in the README.
I love coding interviews because they are so much fun to do as a candidate. That said, stuff like this:
This presumes candidates have experience like yours (or that experience like yours is what you’re trying to measure). I had a very senior coworker fail an interview once due to lack of knowledge of parsing strategies / parse trees. This person is not just someone who “can code” but can also architect, lead, etc., yet failed due to a question built on the assumption that “anyone smart must have been exposed to parsing”.
That’s why I always prefer a portfolio-based interview. Then the problem they are explaining to you is one they know how to solve because they have already solved it! Of course, this doesn’t work well if you want to be able to hire career-long corporatists who have no portfolio. There is no one size fits all interview.
I’ve only done a couple of coding interviews and it turns out I have pretty bad performance anxiety and forget basic stuff or get so worked up I can’t think straight.
Apparently this would mean that I have faked the last 20+ years, dozens of open source projects, ~15 conference presentations, and several peer-reviewed papers.
And frankly if you think that’s the case then you should hire me because I’m obviously an amazing social engineer. :)
Repeated claims of “interviews are so hard, I get anxious, etc. etc.” in order to get an easier-to-pass process are also social engineering.
(I’m not saying you’re doing this…just you jogged my brain with your phrasing at the end there.)
My experience has been that as we’ve tried to be more accommodating as an industry, we’re getting more and more bad or ineffective actors into our systems. Given that my favored fair approach is too ruthless for most companies–hire anybody with a pulse who can fog a mirror and open an IDE, fire them after a week if they suck, repeat as needed–I think we’re definitely in for some trouble. This is compounded so much more than people wish to admit with the current implementation of DEI efforts, employee resource committees (or whatever your local version of officially-sanctioned factional support groups is), and so forth.
I don’t know what the right answer is, but I’m pretty sure dropping interviews ain’t it.
That’s not too brutal; HR doesn’t like it because it’s more work for them, and management doesn’t like it because they’re afraid to fail unconventionally. Also, they’d then need to work out a good onboarding process. Obviously you have to tell people up front that you need them to do a week-long paid interview.
For me, instead of a live coding interview, I very much prefer a 1-2 hour take home project. I knock those out of the park pretty well.
It would be nice to have the option.
I do like those, but I think they should be paid and I usually think they’re poorly designed/scoped. And when you’re doing a lot of interviewing at the same time, it can get annoying having that pile of stuff in the background.
Doesn’t that produce way too much overhead with training the new hires? Someone has to do the onboarding, especially with complex projects (and aren’t they all, nowadays?) or junior hires! Usually it takes a while (a week to a month) before someone is properly up and running anyway. If you have to do that with your shotgun approach to hiring, that would put a big cap on overall productivity while you’re in “hiring mode”.
Is there any interviewing technique that doesn’t?
This assumes that the candidate has time to build a portfolio, or is comfortable talking about their previous job’s likely-proprietary code. It also biases towards candidates with interests like yours.
And it’s not uniform, which allows a huge amount of subjectiveness to creep in.
And yet, giving different types of interviews to different candidates is a non-starter for hopefully obvious reasons.
Yes. @singpolyma is proposing a portfolio review, which encourages the candidate to teach the interviewers about a subject they are unlikely to be as familiar with as the candidate. That’s a very relevant skill that code challenges can’t demonstrate.
I once had a candidate walk us through her ClojureScript project. We didn’t know ClojureScript or have any particular interest in it. But it was the only non-proprietary code she could show us. She was able to demonstrate both her skill in writing software and, just as importantly, an ability to explain her decision making and values in the process. We had to swallow our pride and ask some questions about syntax, but it was one of the best interviews I’ve ever conducted.
Yes, there was a lot of subjectivity in that interview. That’s life. But interviewers will practice an open or closed mind regardless of format. It’s the code challenge that’s the greater refuge for closed mindedness.
I would care to bet that part of the intention behind ‘with interests like yours’ is an interest in having programming projects at home. I know this is not a universal opinion, but I think that having personal projects is irrelevant to the performance of a senior engineer.
Not obvious at all. Why not allow candidates to choose between (eg) coding interview and portfolio review?
Presumably because comparing candidates who went through radically different interviews would be tremendously difficult.
Hmm. I guess I’ve never been a low demand high supply enough situation to be “comparing candidates”, at least for full time.
I guess that means you either get very few applicants or that you’re so selective that almost all get filtered out and your search ends as soon as you find one that passes.
On multiple occasions, I’ve found myself in the intermediate situation where multiple good candidates showed up at the same time, but I still had to choose one. In that case, even if you allow people to demonstrate themselves with some freedom, you also need a basis in order to carry out the unfortunate task of comparing them.
You could just flip a coin.
If you have 2 candidates who have impressed you, why does one have to be better or worse, and what makes you think you have the process or information to determine who is better? Just pick a winner at random. Or pick the one who used a sans serif font on their resume, or some other arbitrary factor.
Exactly. Either just hire them both (are you really not gonna need another any time soon?) Or else it doesn’t really matter which you pick.
Ironically, as strange as that sounds, both your suggestion of hiring them both and @fly’s suggestion of flipping a coin would’ve worked quite well in my case in hindsight…
I don’t think that’s ironic. I think that’s the point.
I don’t have a hat, but: I wrote this! AMA.
Excellent work. Is there further documentation available?
There is, but I don’t know how useful it is yet. this directory contains some more docs, including a full (possibly out of date!) language reference.
In general though, there need to be deeper docs that more effectively communicate what the tool is actually designed for. The existing docs don’t do a good job of that yet. Most people get the idea that it’s supposed to be used to prove things, a la TLA+ or Alloy, or that it’s supposed to be executable. Both are reasonable assumptions. I need to continue to hone both my understanding of what this is good for and my ability to communicate that to others.
In addition to that, I am still very open to the possibility of being able to prove more things with it than you can prove right now. For instance, I’d be very open to adding a type system.
Hey thanks for this cool contribution! Keep up the awesome work, I hope you can tell how much we need tools like this around.
To whoever might be interested, here’s the implementation I came up with for this spec: https://git.sr.ht/~subsetpark/erasmus
This is also my first Zig program ever, so apologies for any lousy code!
Heh, I succumbed to the temptation as well: https://merveilles.town/@akkartik/107778241367837041
It was also inspired by Zek: https://merveilles.town/web/statuses/107742821323590471
Let a thousand gardens bloom!
I suggest to the author, as I suggest to anybody who finds themselves imagining that everyone around them (except for themselves) must be pathologically stupid, that they suppose that all of the things they decry are motivated by sensible, relatable concerns, which nevertheless might not be the same concerns as are primary to the author, or which might be realized via methods which nevertheless might not be the ones that the author prefers or knows.
Devil’s advocate even though I think your approach is the right default one:
Does this mean it’s impossible for large groups of people to do “crazy” things? Empirically, historically, this seems false. How then do you distinguish between those cases and cases where you simply lack context?
What if the relatable concern is wanting to feel cool, and use something fresh and new, or a thing that Google uses and people talk about at conferences? I’m not just being glib and dismissive. But if that is the impulse – and I think it drives deeper than most people admit – you don’t necessarily get sensible. Or… there’s plain old not knowing about simpler ways.
People don’t have to be dumb to partake of sub-optimal trends.
I normally try creating a Fermi Estimate for the problem. If it differs substantially from the scope of work then I assume there’s some missing context.
For example, how would this work to answer a question like “Are far too many companies using Angular when something simpler could have solved their problem better and saved many engineering hours?”
I don’t think it would work for that kind of question; I use this approach for projects at work where there’s usually a missing context.
I don’t know if the author really thinks that way about people or it’s just a way to emphasize the unnecessary growing complexity of some systems (given his personal experience).
Being loud, angry, and intolerant is UnixSheikh’s whole deal, so I don’t think it’s “just a way to emphasize”.
This is, as others have noted, more or less complete nonsense. Every assertion is hyperbolic and none is justified.
That said, there is a tiny kernel of an interesting idea here.
Arguably, the current distribution of web browser usage is Bad. Google has done quite a few questionable, embrace-and-extend sort of things in the recent past, and even if you don’t think that they’re evil, you might want to avoid a monopoly in the browser market considering its centrality to modern life.
The interesting point that the author glances across is that, because web development is so complex today, it’s highly unlikely that the incumbents will be realistically challenged… ever? Who can we imagine writing a new HTML rendering engine or JavaScript virtual machine in the year of our lord 2022?
On the other hand, if the standards for web development were simpler, if the standard method involved more reliance on vanilla HTML and CSS, we might imagine that it would be easier to compete and to diversify the browser market. Which would arguably be a good thing.