This essay is confusing to me. It seems unkind and part of some new conversation I’m not part of.
For a bit of history, Roy’s thesis was finished in 2000. Around 2001 I was doing some high profile work at Google with SOAP and RPC-style Internet APIs. The REST folks gave us a bit of a hard time (kindly) saying there was a better way, to just use HTTP verbs and a particular paradigm for API design. Roy’s thesis was a helpful articulation of that paradigm. And broadly speaking, it won. SOAP is certainly dead. There’s still plenty of RPC style APIs on the Internet but they tend to be designed with state and documents in the center rather than RPC verbs.
I think part of the problem is touched on in the post – REST was seen as promising a universal API that didn’t require people to design and program one-off clients for interacting with specific sites’ APIs, and especially for the case of unattended programs/machines interacting with APIs. That’s what SOAP/Web Services was supposed to be for, so that’s what the alternative must be for, too, was the thinking I remember.
But it never delivered on that. No matter how perfectly, accurately, true-to-the-thesis “RESTful” your API is, clients are still going to be one-off and/or need manual up-front programming. Because REST is more or less “just build web pages”. Even if what you’re returning is JSON with relative URLs in it instead of HTML with anchor tags in it, there’s nothing in-band that can tell a machine when it should follow those links, or which ones it should follow, or any of the other things that are key behaviors for automated clients/consumers.
And so that information is out of band, either encoded in some other server-side system (some kind of metadata about the API) or encoded in the client. And we’re back to square one for the thing people were trying to do in the first place: either agreeing on several large tomes’ worth of metadata specs (the old-school approach from the SOAP days) or building one-off clients for everything.
The fact that we got slightly better ergonomics out of it is nice. It is a lot easier to whip up a one-off client than it used to be, and the “RESTful” schema specs are somewhat nicer to produce and consume than the alternatives, though neither of those really is due to “REST”. So it’s hard to say that “REST” was the thing that won – it feels more like JSON and YAML won out over XML, and simpler schema/metadata specs won out over the SOAP/WS-* stack, but not in a way that fundamentally changed us all over to a hypermedia-native approach.
Absolutely. If nothing else, the paper, and what was distilled from it has helped many folks understand HTTP better, and use it for what it is, an incredibly well thought out protocol at the application level, not just the transport level. If nothing else, that is a fabulous side effect.
I mean, almost every API out there today is RPC style, often including “api wrappers” which are needed because the RPC of each API is bespoke.
Though SOAP style RPC is also back in the form of GraphQL
This article worked for me in that it was, in equal measure, nonsense, funny, intriguing, wrong, right and lots more (not to mention the (deliberate?) two different spellings of “antimemetic”). A rant is allowed to be all those things and more.
Very cool! I especially like how when you have multiple generators, jq backtracks and produces all combinations. (The rightmost generator varies fastest.) Look at this!
# -n = --null-input
# -c = --compact-output
# objects
jq -nc '{a: (1, 2), b: (10, 20)}'
{"a":1,"b":10}
{"a":1,"b":20}
{"a":2,"b":10}
{"a":2,"b":20}
Generators inside arrays are flattened …
jq -nc '[(10, 20), range(3)]'
[10,20,0,1,2]
… but variables bind one value at a time, and you can use that to produce arrays. This also demonstrates that the generator backtracking happens across pipes.
jq -nc '(1, 2) as $a | (10, 20) as $b | [$a, $b]'
[1,10]
[1,20]
[2,10]
[2,20]
# Or use this: jq -c -n '[[0, 1], [3, 4]] | combinations'
Funnily enough: when I use string concatenation, the output varies leftmost-fastest instead!
jq -nc '("0", "1") + ("0", "1")'
"00"
"10"
"01"
"11"
Funnily enough: when I use string concatenation, the output varies leftmost-fastest instead!
Huh, interesting. It looks like it evaluates arguments right-to-left, so maybe that’s why?
$ jq -nc '(0 + "left") + (1 + "right")'
jq: error (at <unknown>): number (1) and string ("right") cannot be added
If it evaluates the right-hand generator first, then it makes sense that the right-hand would end up being the outer loop.
Wow, lots to explore there, thank you for sharing! I’m still trying to get used to thinking in terms of jq’s generators and streaming in general. This will provide food for thought.
Does jq have pattern matching?
And can you turn arrays into generators?
I want to see how Prologgy things can get. Ultimately not very, I know, because backtracking and unification are only two parts of Prolog; the third part is its efficient inference algorithm. Nevertheless.
For example, something like this would be fun to cook up. Please forgive syntax mistakes; I am not fluent in jq.
def siblings($parentage):
  $parentage[] as $p1
  | $parentage[] as $p2
  | select(($p1.parent == $p2.parent) and ($p1.kid < $p2.kid))
  | [$p1.kid, $p2.kid];

[{parent: "Ann", kid: "Bob"},
 {parent: "Ann", kid: "Che"},
 {parent: "Don", kid: "Eve"},
 {parent: "Don", kid: "Fay"}] as $x
| [siblings($x)]
# .[] turns an array into a generator; using < rather than != keeps each pair once
# desired output: [["Bob","Che"],["Eve","Fay"]]
This is cool; writing a condition this way is unfamiliar to me but arguably very natural. A beginner mistake, in say Python, is to write thing == ("a" or "b") when you meant thing == "a" or thing == "b". It must happen by transliterating from English too closely: “it equals ‘a’ or ‘b’”. But in jq you can correctly write thing == ("a", "b") and everything distributes over the commas automatically.
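If it helps, here’s a quick sketch of that distribution in action at the command line (select emits its input once for each comparison that comes back true):

```shell
# Three JSON strings come in; each is compared against both
# alternatives, and select keeps the input for every match.
echo '"a" "b" "c"' | jq -c 'select(. == ("a", "b"))'
# "a"
# "b"
```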
The choice operator in Verse is like this too:
And I think Verse and jq do a similar cross product when both arguments are generators: (10, 20, 30) + (8, 9) gives you 6 results.
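That’s easy to check at the command line (a quick sketch; both operands are generators, so jq backtracks through all six sums):

```shell
# Cross product of a three-value and a two-value generator:
# six sums in total.
jq -nc '(10, 20, 30) + (8, 9)'
```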
Ooh thanks for the reminder about that Verse video, I’ve been meaning to watch it, but haven’t yet. This is the prompt I need.
I haven’t wanted to flood this lovely place with all my recent posts on jq, but I thought this short one might be of interest as it hopefully explains some subtleties of the comma as generator, and streaming.
I thought this was a mildly interesting article. There is some discourse on the idea of HTTP status codes being a bit dated for the modern web, and that anything beyond the idea of a 200 or 404 should just be communicated in the response body.
The simplicity and immediate understandability of many of these status codes certainly has its charm though.
Many of these codes are still extremely important, although often at lower levels than what a typical web-app developer sees. They are by no means merely “simple” or “charming”; they are the bedrock of the Web.
For example, conditional requests and codes like 412, 304 are crucial for caching by user agents and proxies, and for techniques like MVCC and optimistic concurrency.
201 vs 200 conveys information in the response to a PUT about whether the resource already existed, and 409 indicates a PUT conflicts with existing data. 405 indicates a method isn’t usable with a resource, and 400 means the request is syntactically invalid. These are important parts of REST.
206 is used for partial GETs and range requests, which allow browsers to do things like resuming an interrupted download.
301, 302, 307 all enable redirects to work.
The HTTP/1.1 RFC explains all these in detail; it’s not abstruse, although it’s very long and there is a ton of stuff to keep track of.
It’s cute, but as @snej points out it’s easy to get these wrong or just for people to have differing interpretations of what they mean for a particular API. I still think it’s better to use specific error codes in the body.
It’s not just about their interpretation. They can never map to the wide range of domain specific errors your application can return. And more importantly any proxy between client and server can return them.
So if your query returns a 404 status, you have no idea which software actually returned the error, which resource is “not found”, or what “not found” actually means.
HTTP is a transport protocol, and HTTP status codes should only be used to signal errors in the transport layer. Unfortunately most HTTP APIs happily mix application and transport concerns, but that does not make it right.
Hmm, I would suggest that HTTP is very much an application-layer protocol, and not merely a transport protocol. This transport-protocol view of HTTP, in my estimation, led to (or at least abetted) many initiatives* that deliberately ignore the status codes available and how HTTP works, and as such loosened the general understanding of the protocol (which in turn begets the “charming” views mentioned earlier in this interesting thread).
*I’m thinking of early SOAP, WS-Deathstar, etc.
While I learned to code on a PDP-11 at school, my first personal computer was an Acorn Atom, arguably a very popular home PC back then, and I was surprised not to see it in the list. Then I saw the discrepancy: the title of the article includes “7 key British PCs” whereas the intro has “the top seven most significant platforms”. Quite a difference. This kind of loose (I won’t say “lazy”) writing irks me more than it should.
I suppose that, as it was introduced in 1979, it doesn’t qualify as a “1980s PC”, and was in any case rapidly superseded in popularity by the Proton (BBC Micro). As it is, Acorn has 3 machines in this article already. Personally I don’t remember it as being particularly popular, having a “quirky” BASIC implementation - the only time I saw or used one was driving a physics lab experiment at university circa 1989.
I hope folks don’t mind me sharing more jq content. If, with these posts, I can encourage others to write about and share similar content (which I would eagerly devour in an instant), then that makes it worthwhile. I’m sure there are others out there with far more valuable thoughts to share on jq!
This is the first post in a small series; rather than create a super long single post I thought I’d experiment with breaking it up. I’ve tagged the parts so you can see them all together too: jq-series-top-beer-types.
This was after my first (and recent) foray into looking at my Untappd checkin data, in “Untappd data with jq - my top brewery countries”.
Definitely don’t mind, since I’ve just learnt about unique from that post, and that’ll help me in the script I was just twiddling (it downloads all your Pocket URLs).
Markdown will never get beyond developers
I think they underestimate how many young people use Discord and Markdown for formatting their text.
there’s no modern semantic elements such as main, article, section, nav, header, footer, figure, picture
Well yes, but also, I don’t need that in Reddit or Discord. And the places that do need it are better off doing said post-processing. Also I’m very glad that I can actually use it instead of a badly behaving WYSIWYG editor (looking at you, Reddit trashfire that ate my text too many times). Same goes for GitHub/Gitea/Gogs/GitLab entries.
It’s a modern BBCode for “forums”, which is also better in being readable without rendering.
BBCode is an excellent comparison, because I remember the web being riven with unclosed BBCode tags and bad formatting.
What Markdown does not do is probably more important than what it does.
What Markdown does not do is probably more important than what it does.
This. Markdown’s strength is that it doesn’t try to be all things. There’s a megaton of content in the space that sits between scientific paper / book scale organisation, and, well, let’s go for tweets as an example at the other end of the spectrum. What Markdown does really well is cater for that content. If anyone is going to argue that Markdown isn’t good enough, I’ll ask them to show me how many tools they have in their toolbox, and if there’s more than one, their argument is bunk.
Also: WYSIWYG is tremendously overrated and I’d argue that in some circles it has hindered content creation rather than supported it.
Finally: I can’t help but smile to see the Markdown formatting available just winking at me below this textarea input box in Lobsters.
It’s just a shame Org syntax didn’t beat out Markdown syntax. Org supports such simplicity and much complexity, in a readable plaintext format.
It might be a shame, but it’s a predictable consequence of the fact that org-mode doesn’t have an independent implementation that most people can use. Markdown had a simple perl program that you can call from a makefile or any sort of script to render your text to html. These days there are loads of libraries to choose from, to render on the client, server, GUI, etc. dynamically.
I strongly agree. And that is still a little disappointing, because org-mode was so well suited for that kind of thing. org-babel had so much potential for literate programming. I wish that had broken differently.
While I appreciate Org mode’s outlining features, Org mode’s syntax has some limitations that make me glad it didn’t catch on.
For example, code blocks use #+BEGIN_SRC javascript and #+END_SRC instead of Markdown’s ```javascript and ```. That syntax uses three different punctuation characters – it’s hard to remember and type. (You can insert a block with <s Tab, using the s structure template, but that only works in Emacs.) And inline code delimiters like ~#example~ are harder to visually distinguish from code than Markdown’s inline code delimiters `#example`.
There are a lot of reasons why Org-mode didn’t end up winning out because of complexity.
Orgmode is wonderful and terrible in a lot of the same ways Emacs is.
“Please don’t ask your readers to workaround Twitter’s crappy design!”
I’m not a fan of Twitter-thread-based writing either, but I know there are reasons folks do it. Nor am I an apologist for Twitter, but I’d say that, while there are some design issues, Twitter makes for a poor UI for writers and readers of longer-form content mainly because that’s not what Twitter was designed for. Yes, we can get into the debate about how usage evolves, but this is not crappy design - I’d suggest it is merely the use of an inappropriate tool.
I’m less interested in immediate feedback on the query I’m writing. I’d rather have a tool that helps me write the cryptic DSL, like a semantic editor or something with suggestions. Maybe like those regex editors online.
I get that. For me, it’s all about visualising the input data and what’s produced with what I express in the jq filter. The shape of the data and how it morphs is a key part of my understanding of jq and the data itself.
I am a fan of ijq and use it a lot. But I’m always happy to see alternatives, and I’m looking forward to seeing how this one turns out. Apart from the layout (input at the top instead of the bottom), the main difference I’ve found so far is in the composition; with ijq, every character change causes a re-execution, which has its pros and cons, but with jqp one must send the line, when ready, with Enter. It’s early days for jqp (it’s at its first release of 0.1.0 right now) and I would for example love to see more pass-through parameters, so I could specify jq parameters on invoking jqp. Overall though, a 👍 from me.
Edit: oh, and some bonus points for making an arm64 build available (so I can use it in my devcontainer I run on my Pi).
The book that accompanied the software, and that we see in the advert, played a big part in my career. I was a mainframe (IBM) person with SAP R/2 experience (along with all sorts of MVS based OS and tool experience) about to move to SAP R/3 that ran on various Unixes, of which at the time I knew nothing. I got Coherent, not sure that I even installed it immediately, but took the book on holiday with me and read it, pretty much cover to cover. A few weeks later I was in Heidelberg staring at the (green terminal) screen of an HP-UX based early SAP R/3 installation. And I was ready.
So, the manual turned you into a UNIX disciple? So, they did make their mark, just not in the way they wanted.
I hope you folks don’t mind me sharing another explanatory post about jq. My theory is that the more examples and explanations the better (though I could be wrong!).
Having it accidentally fall onto your face won’t cause extreme pain. It can be put in almost all pockets and bags without a problem, and won’t scratch your clothes or pull your beach pants down.
More hardware project descriptions should highlight real world benefits like this.
Definitely no later than the 23rd century, given that ST TOS communicators are clamshell form factors, and any of the computing stuff people do on phones now is done on PADDs.
This is why I created https://fx.wtf - I didn’t want to learn a new language =)
Got a solution in under a minute:
fx entities.json '.value.map(x => Object.values(x).map(JSON.stringify)).join("\n")'
I do like fx, especially its “exploration editor” but also the fact that I can write in JS. But there’s something about jq’s language that draws me in.
Sorry, I wasn’t clear. It’s not that I see improvement requirements in fx, it’s that I’m attracted to jq because of the language. This is why I’m using jq rather than JS-in-fx to explore JSON, i.e. not for any negative reason.
I’m continuing my efforts to understand the basics of jq by writing posts that a past version of myself would have appreciated. I hope this sort of post helps folks to get their head around core jq mechanisms such as iterators, and also to become more comfortable with the shapes of JSON data that jq reads and writes.
Slightly off topic, but I couldn’t help but notice at this point in the linked YT video that Crockford casually uses the noun “evangel”, which one might describe as “archaic” at best. Lovely word though.
And looking at what he said as a whole, I wasn’t sure whether he was talking about JavaScript specifically, or humanity in general.