Usually this involves consuming less-structured input and producing more-structured output. This is called parsing…
Per this definition, how is e.g. the object indexing example parsing?
>> const getNestedItem = (o) => o?.a?.b?.c;
>> getNestedItem({ a: { b: { c: 1 } } })
<- 1
>> getNestedItem({})
<- undefined
What does “less structured” and “more structured” mean in this context?
What I meant by that is more like “turning unknown data into known”, and IMO that example fits the definition because the nested item may or may not be there, depending on the object. In general though that’s more of a building block for constructing more useful parsers, and when you’re using some parser combinators along with it you could also decide to transform the data that it returns, or if it fails you could switch to parsing a different field or a different kind of object.
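To make that concrete, here is a rough sketch of what I mean in TypeScript; the `Parser` type and the `map`/`orElse` combinators are my own illustrative names, not from any particular library:

```typescript
// A "parser" here is just a function that may or may not produce a value
// from unknown input.
type Parser<T> = (input: unknown) => T | undefined;

// The building block from above: succeeds only when the nested item exists.
const getNestedItem: Parser<number> = (o: any) => o?.a?.b?.c;

// Combinator: transform the parsed value, but only if parsing succeeded.
const map = <A, B>(p: Parser<A>, f: (a: A) => B): Parser<B> =>
  (input) => {
    const result = p(input);
    return result === undefined ? undefined : f(result);
  };

// Combinator: if the first parser fails, try a different field or shape.
const orElse = <A>(p: Parser<A>, q: Parser<A>): Parser<A> =>
  (input) => p(input) ?? q(input);

// "Turning unknown data into known": try one shape, fall back to another,
// then transform whatever was found.
const getCount = map(
  orElse(getNestedItem, (o: any) => o?.count),
  (n) => n * 2
);

console.log(getCount({ a: { b: { c: 1 } } })); // 2
console.log(getCount({ count: 5 }));           // 10
console.log(getCount({}));                     // undefined
```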
Flakes has a lot of differences from classic Nix, and it makes a lot of techniques and configuration non-transferable between the two. It has effectively soft-split the community between people that use flakes and people that don’t use flakes. I personally don’t use flakes because I haven’t seen good arguments as to why I should.
@cadey, I read this blog post by Eelco Dolstra in which he introduces Flakes and describes the problems it’s intended to solve. It seems to me like an argument as to why one should use Flakes. Do you disagree with his description of those problems, or of Flakes as the solution to them? If so, could you elaborate on why? I have never used Nix but am very interested in learning it, and have heard a bit about how some people like Flakes and some people don’t so much. I’d love to understand people’s perspectives on it better!
The article is pretty good, but maybe I’m not following the initial github thread that sparked it?
I don’t know if Casey is right or wrong, but I missed the part at the end (that is crucial to the gist of the article), where somebody proves Casey is right with a working demo. Is it in another thread? Or did I miss his/somebody else’s benchmark example that would have satisfied the “it’s complicated” criticism?
Casey is the one who wrote the proof-of-concept terminal to support his claim. His first version was written in a weekend without any optimization work. (NB: there’s a bit of debate about this. Casey says he didn’t look at the output of a profiler, he just implemented the code with mechanical sympathy. Some would say that taking the hardware into consideration is optimization, others would say it’s reasonable use.) A couple weeks later, he demo’ed a second version of his terminal where he addressed some of the criticisms that he received.
Is that in the thread? I didn’t see that his examples address (or even seem like they could reasonably attempt to address) all of their concerns. Yes, his is faster (and kudos to him for proving it), but that’s not the only issue at play.
I’m extremely skeptical that they ignored his proof of concepts purely out of pride. It seems to me like they made an engineering decision (which might be right or wrong): the proof of concept seemed to be faster, but didn’t solve their other concerns/problems.
My larger point is that all of the above is a counterexample to the thesis of the article. Sometimes stuff is hard and people show up and say “this is easy” when it is not.
Assuming Casey’s proof of concept is faster and addresses all of the factors, the article is well-served by the github thread. But I don’t see that.
I didn’t see that his examples address (or even seem like they could reasonably attempt to address) all of their concerns.
I think Casey’s argument in the GitHub issue is that many of the “objections” raised are irrelevant to the essential question of whether the particular problem he identified is solvable with a reasonable amount of effort by someone with expertise in these areas. I don’t know enough about the problem space to personally be able to judge whether that claim is true or not, though.
I’m extremely skeptical that they ignored his proof of concepts purely out of pride.
I don’t think that’s what the author of this blog post was trying to say—my read was that they thought the Microsoft folks lacked the expertise to identify that the issue should be or is resolvable with a reasonable amount of effort, as Casey claimed.
Sometimes stuff is hard and people show up and say “this is easy” when it is not.
That’s absolutely true, and probably more often the case than not, but one must imagine that at least sometimes the opposite happens, too (someone shows up and says “this is easy”, and it actually is—or at least, it is for them).
Either way it’s a tricky situation—I felt like in general everyone in the thread was respectful to each other, but one side’s perspective on the issue was “this should be easy, but no one seems to understand this and everyone is talking in circles around my efforts to solve the problem”, and the other’s was, “this person is claiming this thing is easy but they don’t seem to understand that actually it’s not”. I suppose a proof-of-concept does seem the most reasonable way to resolve a disagreement like this.
I don’t know if Casey is right or wrong, but I missed the part at the end (that is crucial to the gist of the article), where somebody proves Casey is right with a working demo.
This also confused me; but I think this is the working demo (it’s written by Casey): https://github.com/cmuratori/refterm
This entire article turns on whether Casey was right, so it’s important to double-check that evidence. They did write a demonstration terminal and publish a video explaining their approach. Personally, I find their argument very convincing, because I’m familiar with similar techniques in Xorg for accelerating text rendering (see the “Glyph Rendering” section of the Xorg RENDER protocol documentation for details).
There’s also a followup video for Refterm v2.
I’m late to this story, so was unaware of any evidence outside of the thread. And it looks like the video was made after Casey closed the thread? Maybe it’s worth revisiting the github issue (either on Casey’s or MS’s part).
Thanks for sharing that! I look forward to watching it and getting a better grip on the technical issue.
We are never shown the contents of the `UserAttributes` type, but I am suspicious that it contains (or would contain) fields which are expected to be `null` if the user is either an `admin` or a `customer`. Which suggests to me that a better way to model this would be with a discriminated/tagged union, which would look more like
type User =
  | { type: "admin";
      adminAttributes: AdminAttributes;
    }
  | { type: "customer";
      customerAttributes: CustomerAttributes;
    };
I’m also not really convinced that the type key “disappeared”—the code was just refactored so that instead of the high-level interface to user creation being one function which takes one of two values to distinguish the user type (ultimately two code paths), it is now two functions which take no “user type” arguments (still two code paths).
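To make that contrast concrete, here’s a rough sketch (the names and attribute contents are my own placeholders, not from the original post) of the one-function and two-function shapes of the interface:

```typescript
// Placeholder attribute types, just for illustration.
type AdminAttributes = { canManageUsers: boolean };
type CustomerAttributes = { billingId: string };

type User =
  | { type: "admin"; adminAttributes: AdminAttributes }
  | { type: "customer"; customerAttributes: CustomerAttributes };

// Before: one function, where the caller passes a value to pick the branch.
function createUser(type: "admin" | "customer"): User {
  return type === "admin"
    ? { type, adminAttributes: { canManageUsers: true } }
    : { type, customerAttributes: { billingId: "unknown" } };
}

// After: two functions with no "user type" argument, but still two code paths.
function createAdmin(): User {
  return { type: "admin", adminAttributes: { canManageUsers: true } };
}

function createCustomer(): User {
  return { type: "customer", customerAttributes: { billingId: "unknown" } };
}
```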
I read “Why be Like Elm?” roughly as “Why would you, NoRedInk, want to write your Haskell like you write your Elm?”
It was intended as “Why would I, a programming language, aspire to be like Elm?”.
A bit farfetched perhaps :).
Right off the bat, this post misunderstands the point of versioning. The author opens with:
Let’s set the stage by laying down the ultimate task of version numbers: being able to tell which version of an entity is newer than another.
That is not quite right. It’s true that that’s one thing that versioning does, but it is not its ultimate task. Its ultimate task is to communicate to users of the software what has changed between different releases, and what impact that has on them (i.e. “why should I care”). Otherwise, why does anyone care which release is newer? What does that matter to a user of the software?
The rest of the post seems to be reacting to people who believe that SemVer solves a lot of problems that it doesn’t, and throws out the baby with the bath water in doing so. SemVer is certainly imperfect. And maybe there are versioning schemes that are better! But it does have a legitimate claim to attempting to accomplish versioning’s “ultimate task”. And I think that this post fails to sufficiently recognize this fact.
Its ultimate task is to communicate to users of the software what has changed between different releases, and what impact that has on them (i.e. “why should I care”).
I’m sorry, but historically, in that general sense, that’s just not true. There’s been a wild mixture of version schemes and they still exist and the only thing that they have in common is that you can order them.
I could start enumerating examples, but let’s assume you’re right, because that’s not my point. What bothers me is this:
and throws out the baby with the bath water in doing so
How does the post do that? That was entirely not my intent, and I state several times that there’s value to SemVer as a means of communication. As you correctly say, the rest goes to dispel some myths (thanks for actually reading the article!), so I’m a bit saddened that you came to that conclusion? I’ve got a lot of feedback in the form of “I like SemVer but the article is right”, so I’m a bit baffled.
There’s been a wild mixture of version schemes and they still exist and the only thing that they have in common is that you can order them.
You can’t though, not with the vast majority of large projects.
Which is more recent:
To be specific, many versioning systems only guarantee a partial ordering. This arises because they use a tree-like structure. (Contrast this with a total ordering.)
That’s a very good point and it depends how you define “newer”. It certainly doesn’t mean “released after”.
There’s been a wild mixture of version schemes and they still exist and the only thing that they have in common is that you can order them.
I do agree with that. But trying to establish what the “ultimate task” of a versioning scheme is means coming up with a description of what problem(s) versioning schemes are intended to solve. I don’t think that “being unable to figure out which software release is newer than another” is really a description of a problem, because it’s not yet clear why being able to do so is valuable. I can say personally as a user of software (thinking primarily of packages/libraries here) that I never just want to know whether some release is newer than another; I always want to know 1) what changed between subsequent releases and the one my current project uses, and 2) why or whether that matters to my project. I’d say then that the task of a versioning scheme is to help me solve those problems, and that we can judge different versioning schemes by how well they do that.
How does the post do that?
I think it’s a little hard to explain concisely because my read (and those of other commenters, I think) of the post as unfairly criticizing the value of SemVer (and maybe versioning schemes in general) is at least somewhat a consequence of what is emphasized, and maybe exaggerated, and what’s not. But here’s an example—you say at one point, after talking about strategies “to prevent third-party packages from breaking your project or even your business,” that
There is nothing a version scheme can do to make it easier.
which I think is simply untrue. In fact, like I was saying above, I think that’s the whole point (task) of a versioning scheme—to make the process of upgrading dependencies easier/less likely to break your project. Just because they (including SemVer) sometimes fail at that task, or try to reflect things (e.g. breaking API changes) that aren’t necessarily enforceable by mathematical proof, doesn’t mean that they can’t do anything to help us have fewer problems when upgrading our dependencies.
and those of other commenters, I think
I mean this in the least judgy way I can summon: I don’t think most other commenters have read the (whole) article. Part of that is poor timing on my side, but I didn’t expect two other articles riffing on that to appear around the same time. :(
Just because they (including SemVer) sometimes fail at that task, or try to reflect things (e.g. breaking API changes) that aren’t necessarily enforceable by mathematical proof, doesn’t mean that they can’t do anything to help us have fewer problems when upgrading our dependencies.
I’m curious: how do you think that works in practice? Like, how does that affect your workflows?
I’m curious: how do you think that works in practice? Like, how does that affect your workflows?
For SemVer in particular, the `MAJOR.MINOR.PATCH` distinction helps give me a sense of how much time I should spend reviewing the changes/testing a new version of a package against my codebase. If I don’t want to audit every single line of code change of every package anytime I perform an upgrade (and I and many people don’t, or can’t), then I have to find heuristics for what subset of the changes to audit, and SemVer provides such a heuristic. If I’m upgrading a package from e.g. `2.0.0` to `4.0.0`, it also gives me a sense of how to chunk the upgrade and my testing of it—in this case, it might be useful to upgrade first to `3.0.0` and test at that interval, and then upgrade from there to `4.0.0` and test that.
Of course, as you note in your post, this is imperfect in lots of ways, and things could still break—but it does seem clearly better than e.g. a versioning scheme that just increments a number every time some arbitrary unit of code is changed.
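As a concrete (if simplified) sketch of that heuristic, this is roughly the mapping I have in mind; the function and the effort labels are my own illustration, not something SemVer itself prescribes, and it ignores pre-release tags entirely:

```typescript
// Decide how much review a dependency bump warrants, based only on which
// SemVer component changed. A heuristic, not a guarantee of safety.
type Version = { major: number; minor: number; patch: number };

function parseVersion(v: string): Version {
  const [major, minor, patch] = v.split(".").map(Number);
  return { major, minor, patch };
}

function reviewEffort(
  from: string,
  to: string
): "read changelog + test" | "skim changelog" | "trust the test suite" {
  const a = parseVersion(from);
  const b = parseVersion(to);
  if (b.major !== a.major) return "read changelog + test";
  if (b.minor !== a.minor) return "skim changelog";
  return "trust the test suite";
}

console.log(reviewEffort("2.0.0", "4.0.0")); // "read changelog + test"
console.log(reviewEffort("2.0.0", "2.1.0")); // "skim changelog"
console.log(reviewEffort("2.0.0", "2.0.3")); // "trust the test suite"
```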
How many dependencies do you have though? I understand this is very much a cultural thing but to give you a taste from my production:
It’s entirely untenable for me to check every project’s changelog/diff just because its major version bumped – unless it breaks my test suites.
I fully understand that there are environments that require that sort of diligence (health, automotive, military, …) but I’m gonna go out on a limb and say that most people arguing about SemVer don’t live in that world. We could of course open a whole new topic about supply chain attacks, but let’s agree that’s an orthogonal topic.
P.S. All that said: nothing in the article said that SemVer is worthless, it explicitly says the opposite. I’m just trying to understand where you’re coming from.
When I’m “reviewing my dependencies” I certainly don’t look at indirect dependencies! I don’t use them directly, so changes to their interfaces are (almost) never my problem.
Like @singpolyma, I don’t bother with indirect dependencies either—I only review the changelogs of my direct dependencies.
The main project that I’m currently working on is an Elm/JS/TS app, and here’s the breakdown:
I definitely read the changelog of every package that I update, and based on what I see there and what a smoke test of my app reveals I might dig in deeper, usually from there to the PRs that were merged between releases, and from there straight into the source code if necessary—although it rarely is. Dependabot makes this pretty easy, and upgrading Elm packages is admittedly much safer than upgrading JS ones. But I personally don’t find it to be all that time-consuming, and I think it yields pretty good results.
Are you saying that my claim as to what “versioning’s ultimate task” is requires citation? Or that the author’s does? I’m making a claim about what that is, just as the author is—I’m not trying to make an appeal to authority here.
To those interested in these ideas, I strongly recommend checking out Unison.
The future is coming!
What’s wrong with tuples?
Seems there is some reasoning in https://github.com/gren-lang/compiler/issues/12 but it doesn’t seem particularly convincing.
IIRC PureScript doesn’t have them either, but uses custom types instead
I kind of like this approach because it restricts parens to being used only for grouping, reducing the overall syntax variety.
TypeScript does have tuple types: https://www.typescriptlang.org/docs/handbook/2/objects.html#tuple-types
Although I’m fairly convinced by the rationale for excluding them.
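For anyone who hasn’t seen them, TypeScript tuple types are just fixed-length arrays with a (possibly different) type at each position; a quick illustration:

```typescript
// A tuple type: fixed length, with a specific type at each position.
type Point = [x: number, y: number];

const origin: Point = [0, 0];

// Destructuring gives each position its declared type.
const [x, y] = origin;

// Type error if uncommented: too many elements.
// const bad: Point = [1, 2, 3];

console.log(x + y); // 0
```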