Something bugs me quite a bit about this comparison: very little space is dedicated to comparing the actual formats from first principles; it’s almost 100% a look at the derived artifacts — size & format of the spec, historical circumstances leading to creation, popularity, benchmarks. The closest thing to an analysis of fundamentals is itself second-order — a reference to someone else’s summary opinion on HackerNews.
Now, the derived stuff is hugely important, especially for serialization formats, which sit on the interoperability boundary, but it still feels very wrong to not look at them in the context of fundamentals. From the writing style, it does seem that the author knows what they are doing, and I guess I should update in the direction of CBOR a bit, but, still, I am surprised just how little I was able to extract from the article in terms of what a good serialization format should look like.
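As a taste of what a first-principles comparison could look like, here is a sketch that hand-encodes the same tiny values following the MessagePack and CBOR specs (small non-negative ints and short strings only; everything else is out of scope for this toy):

```python
# Hand-encode a small unsigned int and a short UTF-8 string in both
# MessagePack and CBOR, straight from the specs, to compare byte layouts.

def msgpack_encode(v):
    if isinstance(v, int) and 0 <= v <= 0x7F:
        return bytes([v])                      # positive fixint: 0xxxxxxx
    if isinstance(v, str) and len(v.encode()) <= 31:
        b = v.encode()
        return bytes([0xA0 | len(b)]) + b      # fixstr: 101xxxxx, then data
    raise NotImplementedError("sketch covers tiny values only")

def cbor_encode(v):
    if isinstance(v, int) and 0 <= v <= 23:
        return bytes([v])                      # major type 0, small uint
    if isinstance(v, str) and len(v.encode()) <= 23:
        b = v.encode()
        return bytes([0x60 | len(b)]) + b      # major type 3 (text), then data
    raise NotImplementedError("sketch covers tiny values only")

for value in (5, "hi"):
    print(value, msgpack_encode(value).hex(), cbor_encode(value).hex())
```

For values this small the two formats come out byte-identical in size; the interesting structural divergences start with ext/tag types and indefinite-length items.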
I really really wanted to include more examples, but I had trouble justifying spending so much time on a “sidequest”. I’m hoping to include a deeper dive in my flat scraps documentation
Side thread: personally I don’t feel like Github stars are a good metric for the popularity of a project. What do you think?
I don’t have a better way to estimate project popularity; I’m just saying that Github stars seem not useful to me. In about 16 years of using Github I have starred fewer than 30 projects, but I’ve probably used ten times as many Github projects (probably many more). Looks like I just don’t star projects :-) .
And there might actually be a bias in the star counts, in that some projects attract users that are more likely to hand out stars.
What makes you give a star to a Github project? Do you give stars for any project that sounds interesting, or any project that you use, or any project that you feel exceptionally thankful for?
agreed, they are pretty useless as a metric for anything. I think they mostly measure “how much reach on social media/reddit/HN/… has this ever gotten” in many cases, and that’s not informative of anything. (I personally star a lot, but really treat it as a bookmark along the lines of “looks vaguely interesting from a brief skim”; it’s not an endorsement in any way)
I’m pretty sure I’ve never starred a project on GitHub, or at least I haven’t in the past decade, and I don’t know why anyone would! It’s an odd survival of GitHub’s launch era, when “everything has to be social” was the VC meme of the moment and “social open source” was GitHub’s play.
I don’t get why popularity is so important here. Isn’t it effectively an implementation detail of your language? Even if I’m misunderstanding that and it is not, isn’t the more important question “are there good implementations”, not “are the implementations more popular than the ones for the other thing”?
One huge use-case for my language is sending and receiving and storing programs, so yes, it’s an implementation detail, but it’s also a very important one that will be impossible to change later.
But you’re totally right – that is the main question. I’m still exploring the space of serialization candidates, and these two particularly stood out to me.
I mostly care about popularity because convincing people to trust/adopt technology is way harder than actually implementing it. Extending a trusted format seems less risky than other options
I haven’t implemented them from scratch, but I’ve used them extensively while designing scrapscript and building smel shell :) In this writeup, I tried to balance my opinions with existing discussions
Fair; sorry if I was a bit snippy. The writeup just seemed to involve a lot of looking at what other people say/think and I expected more of your own thoughts.
scrapscript is specifically built so that all machines share the same types/data/code :)
This is one major reason I think scrapscript specifically has something new to offer to shell design! You don’t have to install libs/apps or copy/paste to access the scrapyard. You can imagine something like this:
silent (cat file.csv)
|> andyc/customers
-- ok [ andyc::customer { id = 123, name = "Joe" } ]
But I don’t know. This might just be an advanced case of NIH syndrome.
It would be cool if there was some survey of all those projects, to see what they did “wrong” and right. And what they have in common and where they differed
I think it’s a hard design / bootstrapping problem because it’s IPC, not something you can ever break (e.g. why the terminal itself is so crufty, why CSV is, etc.)
I should probably rename / refactor the wiki a bit to make it clear it’s an editable survey, and make it clear which ones are apps and which ones are protocols, etc.
This thread gave me the thought that there should be a CANONICAL site that hosts descriptions of all the main shells and is a jumping point to more info.
And then I thought “what a stupid idea - very few topics have canonical web sites” …
And THEN I thought “but wait actually most of the shells that are being actively developed at the moment are being developed by lobsters!” So you could actually do this.
(There’s someone from the fish team here too but I’ve forgotten their username.)
I’d like to see it taken further: make how I use my computer version-controlled/saved. When I want to see how I did x some years ago, open up that notebook and look at the embedded file diffs, the commands executed with their output, etc., and let me page up and see what I’d tried previously.
I do this to some extent with org mode files today, but the downsides are significant enough that it’s just not easy to achieve.
Great idea! One of the nice things about the platform design is that it naturally supports elm-style time-travel debugging. It should be pretty cheap to store messages for later replay.
I’ve got a longer vision for how scrapscript is going to tackle diffs and version control, and smel will hopefully get all of that for free once implemented. Stay tuned!
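As a minimal sketch of the “store messages for later replay” idea (names hypothetical, not scrapscript’s actual design):

```python
import json

class EventLog:
    """Append-only log of messages; replaying them rebuilds state."""
    def __init__(self):
        self.events = []

    def record(self, msg):
        self.events.append(json.dumps(msg))   # store a serialized copy

    def replay(self, reducer, state):
        # Re-apply every message in order, elm-style time travel.
        for raw in self.events:
            state = reducer(state, json.loads(raw))
        return state

log = EventLog()
log.record({"cmd": "cd", "arg": "/tmp"})
log.record({"cmd": "ls", "arg": "."})
history = log.replay(lambda s, m: s + [m["cmd"]], [])
print(history)  # -> ['cd', 'ls']
```

Because messages are stored serialized rather than as live objects, replay can run later, elsewhere, or against a newer reducer.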
You can record everything you do in any shell using asciinema. You can also configure your terminal in such a way that it always starts asciinema by default instead of bash, so everything will always be recorded automatically. (Unfortunately, as far as I understand, you will lose the ability to resize your terminal, because asciinema doesn’t support this.)
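A minimal sketch of that setup, assuming bash and a stock asciinema install (asciinema sets ASCIINEMA_REC inside a recording, which prevents a re-exec loop; the casts directory is my own choice):

```shell
# Append to ~/.bashrc: re-exec each interactive session under asciinema
# unless we are already inside a recording.
if [ -z "$ASCIINEMA_REC" ] && command -v asciinema >/dev/null 2>&1; then
  mkdir -p "$HOME/casts"
  exec asciinema rec "$HOME/casts/$(date +%Y%m%d-%H%M%S).cast"
fi
```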
I’m curious: Why are expressions converted to a linear RPN form, rather than something like a parse tree? Is it for performance?
This reminds me of the way queries are handled in Couchbase Lite (my day job). The various language bindings have “fluent” APIs for describing & building a query; these produce an intermediate JSON representation that the cross-platform core code traverses like an AST to generate SQL.
The JSON representation is mostly composed of arrays whose first elements are strings identifying the operation — very much inspired by S-expressions. For example, ["<", ["$X"], 5] to represent $X < 5.
Later we added support for writing queries in N1QL, aka SQL++, a SQL extension that operates on JSON-like data instead of columnar, and it was just a matter of implementing a PEG grammar that produces our JSON format.
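Not Couchbase’s actual code, but a toy translator of the same shape shows how little machinery the array-as-operation convention needs:

```python
import json

# Toy translator for S-expression-style JSON queries like ["<", ["$X"], 5]:
# arrays are operations, and ["$name"] marks a parameter reference.
def to_sql(node):
    if isinstance(node, list):
        if len(node) == 1 and isinstance(node[0], str) and node[0].startswith("$"):
            return node[0][1:]                      # parameter reference
        op, *args = node
        parts = [to_sql(a) for a in args]
        if op in ("AND", "OR"):
            return "(" + f" {op} ".join(parts) + ")"
        return f"({parts[0]} {op} {parts[1]})"      # binary operator
    if isinstance(node, str):
        return "'" + node.replace("'", "''") + "'"  # naive string literal
    return json.dumps(node)                          # numbers, booleans, null

print(to_sql(["<", ["$X"], 5]))  # -> (X < 5)
```

Because the tree is plain JSON, every language binding can build it with its native arrays, and only the core needs the traversal.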
Fantastic question! Yep, the first version I designed was a parse tree. I roughly had a msgpack ext type for each AST node. For example, I had a dedicated “function” ext structure that mapped args to bodies.
The problem with this approach was that msgpack generally incurred a heavy 3-byte overhead for each ext type. Three bytes at every AST node was really not great design.
But even worse, the parse tree was more complicated to explain and implement. To give you an idea, the docs were roughly 4x longer before I switched to RPN.
Anyway. To make things simpler, I had the idea of just encoding the tokens in an “expr” msgpack ext type. To make the implementation simpler, I wanted to get rid of operator precedence. And once I landed on RPN, a lot of other nifty features got unlocked.
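The simplicity win is easy to see in a toy evaluator: a flat RPN stream needs just one loop and one stack, with no precedence table or tree walk (token shapes here are hypothetical, not scrapscript’s actual encoding):

```python
# A flat RPN stream evaluates with one loop and one stack: precedence is
# already baked into the token order, so the reader never needs it.
def eval_rpn(tokens):
    stack = []
    ops = {
        "+": lambda a, b: a + b,
        "*": lambda a, b: a * b,
    }
    for tok in tokens:
        if tok in ops:
            b = stack.pop()
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(tok)         # literals are pushed as-is
    (result,) = stack                 # a well-formed stream leaves one value
    return result

# 2 + 3 * 4, with the multiplication ordered before the addition:
print(eval_rpn([2, 3, 4, "*", "+"]))  # -> 14
```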
Are you making a joke with linking to “sound systems”? idgi :(
But also it’s false that a system with holes isn’t a sound system. A sound system isn’t necessarily a complete system, which is a very different property!
I call these holes “soundless”, since they are neither sound nor unsound.
The hole itself isn’t “sound” or “unsound”; soundness is a property of the logic. It’s like saying “5 is not an imperative programming language.” Values aren’t languages, languages are languages.
Lately, I’ve been tilting at “irresponsible” servers. If servers could offer guarantees about uptime/protocol, correctness could percolate down to my program at compile time.
You may be interested in “session types”, which are meant to offer protocol guarantees across programs. Still a research field but we might see them in production in a decade or two. Sadly uptime guarantees are (probably) statically impossible; you can’t typecheck away an asteroid strike!
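A rough flavour of the idea can be faked today with typestate-style APIs, where each protocol state is its own class and each step returns the next state’s type, so a static checker (e.g. mypy) rejects out-of-order calls. Sketch only, with made-up state names; real session types also need linearity:

```python
# Typestate-flavoured protocol: each state is a distinct class, and each
# step returns the next state. A type checker rejects out-of-order calls
# like recv() before send(), because that method simply doesn't exist on
# the intermediate state's type.

class Closed:
    def connect(self) -> "AwaitingRequest":
        return AwaitingRequest()

class AwaitingRequest:
    def send(self, req: str) -> "AwaitingReply":
        print(f"sent: {req}")
        return AwaitingReply()

class AwaitingReply:
    def recv(self) -> Closed:
        print("got reply")
        return Closed()

# The only well-typed order: connect, send, recv.
done = Closed().connect().send("ping").recv()
# Closed().connect().recv()  # <- type error: AwaitingRequest has no recv()
```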
Are you making a joke with linking to “sound systems”? idgi :(
Yes, it was a bad joke – I will update it for clarity.
The hole itself isn’t “sound nor unsound”, soundness is a property of the logic. It’s like saying “5 is not an imperative programming language.” Values aren’t languages, languages are languages.
Good catch. Can I say the hole is “soundless wrt its system”? Kinda like saying that S is independent of F?
You may be interested in “session types”, which are meant to offer protocol guarantees across programs. Still a research field but we might see them in production in a decade or two. Sadly uptime guarantees are (probably) statically impossible; you can’t typecheck away an asteroid strike!
You are spot on! My next post on this topic will be a brief comparison of session types, temporal type theory, TLA+, and a few other tools.
Your amazing TLA+ guide has been super helpful btw! Anybody reading this comment should buy your book :)
When I started my career I wanted to help the most people. Instead of going to grad school and tinkering on something a few hundred people might use, I went to Apple. (I could have gone to Microsoft, but I have standards.)
Focusing on rough-edged homemade tools for other hackers, and calling that better than the stuff the other 8 billion people can use, seems a bit myopic IMO.
Claiming 8 billion people can use any product is a stretch, especially ones that cost thousands of dollars and need a credit card, that don’t work without internet access and iTunes, that delete accessibility tools every other update, and that give root on your device to governments if they ask nastily enough.
I love my Apple products! Thank you for helping make them.
I didn’t mean to belittle you or your employer. I don’t think wigwams are “better” than battleships. I just think wigwams are fun and I want people to work together
The author and I have quite different ideas about what qualifies as a small project. Like… designing a whole new programming language and its corresponding compiler toolchain is not a small project in my world.
Nor is selling consumer hardware (like the Playdate, as taylor.town’s post linked above suggests) or rebuilding cities (“tools for converting suburban sprawl into human-scale walkable infrastructure”) – the selection criteria for projects seem very vibes-based
The Playdate is not the opposite of a PlayStation; it’s a hipster Game Boy built by Teenage Engineering, known for overpriced, overhyped toys in small numbers for those with lots of disposable income. I think the Arduboy is a lot closer to the vibe you’re going for.
I think this is a succinct example of one of my frustrations with the related polemic article, bemoaning literally everything that has been built by more than a handful of people at a time.
There are two very different properties here: that it be affordable; that it be “DIY”.
I am completely in favour of things being more affordable. Whether that means the price is lower, or that there are tax breaks or even grants and safety nets for things that people need but which they cannot afford. I want to live in a society where people aren’t made to suffer just because they’re not financially well off for whatever reason. So, affordable, in the most all encompassing sense, seems valuable and laudable.
Let us set aside the notion that maybe better public transit and urban structure would be a greater service for folks getting to work (emphatically not something you can knock together on your own), and assume that cars are the goal. Why on earth would DIY cars be an improvement? Cars are necessarily complicated machines. Modern cars are vastly safer and require vastly less regular and less critical maintenance than they did even twenty years earlier. Are you going to DIY antilock brakes? Traction control and collision avoidance systems? Multiple airbag systems? Seatbelts? I cannot imagine road safety is going to be improved by people knocking together DIY vehicles, whether you’re the one in the DIY vehicle or not.
Are you going to be able to get any kind of fuel economy (and thus reduce emissions) without computerised engine control systems? Should you have to be capable of writing your own firmware? If we’re not talking about petrol-driven cars, but rather battery electric systems, then: are you going to design the charging system? Are you correctly ventilating the batteries? EVs have fewer moving parts, but they are still complex machines that require a lot of skill and precision manufacturing.
I just don’t understand why things built by organisations of people, over many years and with a lot of investment, would be prima facie inferior to something you can do yourself. This is not a defence of capitalism, or wealth inequality; I don’t think either of those things are required to get modern motor vehicles as we currently build them.
Thanks for the well thought-out response. I actually agree with you for the most part. I’m on mobile right now so I’m limited in what I can write
By “DIY motor vehicles” I was actually just thinking of variations of e-bikes that have standardized battery packs. Hope that clears some of the confusion!
DIY battery banks are a deep rabbit hole with plenty of makers (myself included) building them. They tend to exist at 2^n multiples of lithium battery voltages and have terminals. What would a standard standardize that hasn’t already been locked in by physics and the shapes of bolts? The variance in shape, size, mass, capacity, current delivery, etc. is a feature, not a bug.
Standardization also kind of implies some governing committee, which contradicts the whole “stuff is cool if it’s got good vibes and it’s made by 8 people” shtick.
18650 cells are very standard, BMSs are very standard and comically cheap, voltages are very standard, what’s left to do other than package them and sell them? (Which plenty of companies also do). What’s left in the todo column for “standardized battery packs for ebikes?”
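The arithmetic really is locked in: assuming nominal 18650 figures (3.6 V and 2500 mAh per cell — both assumptions, actual cells vary), pack specs fall straight out of the series/parallel counts:

```python
# Back-of-envelope pack math for series/parallel 18650 builds.
# Assumed nominal figures: 3.6 V and 2500 mAh per cell.
CELL_V, CELL_MAH = 3.6, 2500

def pack(series, parallel):
    return {
        "voltage_v": series * CELL_V,           # series stacks voltage
        "capacity_mah": parallel * CELL_MAH,    # parallel stacks capacity
        "energy_wh": series * CELL_V * parallel * CELL_MAH / 1000,
    }

print(pack(10, 4))  # a common "10s4p" e-bike pack: 36 V nominal
```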
Indeed - the 100 Rabbits folks turned their boat into a house and have built software on the wigwam list. Perhaps this is the path to the conceptual through-line?
These distinctions are indeed important. A house is a structure like a wigwam. And as they say, home is where the heart is.
The metaphor would probably be more clear if it just referenced particular types of living and working spaces. For example, could Common Lisp be a Mies van der Rohe structure full of incredible modularity and audible leakiness built on top of a complex substrate of steel and glass?
Perhaps a barge or a houseboat is a clearer analogy (folks I’ve known who live on houseboats definitely feel like the DIY type!) I get there are two dimensions here (size and kind) but the simplicity of just measuring it in one feels quicker to understand at first glance (without needing to dive into comments!)
love the idea though, I may shoot you an email with some ideas!
My best guess is they were going for a size and sturdiness analogy, rather than one based on seaworthiness or “boatishness.” Not to say I totally understand why they went that direction…
Failed my most recent saving throw, so I’m beginning to wrap up at my current role, and then I’m on the job hunt. Trying to leverage the moment to make a shift into either: software wrangler for even bigger nerds (researchers/scientists etc.) doing tools/visualisation/workflow automation, or QA automation / tools engineer in the game dev world.
The former proves difficult to find, the latter extremely difficult to break into…
Well I’ve gotten some pretty good work done on Garnet the last few weeks by procrastinating on tackling monomorphization again. The code formatter now outputs valid (if poorly formatted) code, the type parameter syntax for functions is now less cursed if still not as pretty as I wanted, there are module-level doc comments that only have a 35% chance of scourging my soul, some syntax ambiguities have been resolved at the cost of making newlines significant, stuff like that. Soooooo I should really suck it up and do/redo/unheck monomorph, so I can say that the current type system Works and can try to start adding more features. Like, you know, a borrow checker, which is ostensibly the point of this entire language. That’s gonna be an adventure.
Also I guess work. I feel a little spoiled by my team lead giving me a list of things that need doing on the project and saying “just choose whatever you feel like”, but I guess it all has to be done anyway, and what I consider fun and the rest of the team considers fun tends to be pretty different. So those suckers get to do like, actual robotics code stuff, and I get to screw around with cool sensors.
Also play Tunic. Turns out it looks like a Zelda clone with cute little foxes, but it’s actually halfway between Dark Souls and Fez. …with cute little foxes. Translating the in-game language has been more challenging than expected, but I wanna keep grinding away at it a bit more before I inevitably give up and find a guide.
Also, help my partner jobhunt. Money is gonna get painful soon. Anyone out there know of anyone who wants to hire a React dev?
Psyching myself up to work on the steel bank common lisp compiler, python (no relation). It is really poorly abstracted and its design is outdated, so this is not such a fun time. But sprucing it up seems like the best practical way to get better-performing common lisp code, so I don’t have much recourse.
One piece is avx-512 support. This is mostly busy-work, but it’s a bit frustrating, because I had just about done adding mask register support when I had a data-loss oopsie, so now I have to re-do that work.
The other, a lot more significant, is improved concurrency support. There are a number of pieces to this, all of them annoying:
I have to define a concurrency semantics. I really appreciate the work done by c++ folk in this area—really, it’s the only remotely good thing to come out of c++—but there are issues. One is the raft of issues with relaxed atomics—oota and rfub; joy! The other, more easily solvable, is that races for plain, unsequenced accesses are undefined behaviour, which I consider to be utterly unconscionable. But I don’t want to make plain accesses the same as c++11 relaxed-atomic ones, because those are too strong, and inhibit some desirable optimisations; so something in-between is needed. I will probably not ultimately expend too much effort on this because I cba, but it’s annoying.
I have to make atomic operations work uniformly on all places—if you can read and write it, you should also be able to cas, exchange, or increment it. Unfortunately, python is not set up to do a good job of this. Atomic cas and increment are currently defined in a somewhat ad-hoc fashion, and code for reads and writes is duplicated with macros. The ideal solution would be something uniform, similar to llvm’s getelementptr (coupled with a representation—e.g., both specialised arrays and structures can contain unboxed integers), to avoid an m*n problem. But this would require a lot of work on python, which I’m not sure I want to do, so I am hoping I can just figure out how to squish the new cas/increment/exchange to the existing mass of macros and slip this past the maintainers.
On a related note, I need to attach ordering information to these memory ops, hopefully in a uniform fashion, which will be annoying.
Improved double-word atomics. Ok, this one actually sounds fine to implement; the main issue is just to figure out the right user-level interface for associating two slots of a structure, so it’s possible to operate on both of them atomically. But I think I have something reasonable in mind, and am not too worried about that.
Finally, having done all the requisite work to represent memory operations and their ordering, I want a code motion pass. This I can hopefully just copy out of muchnick, but there remains the issue of alias analysis for memory ops. Which would really be helped by a uniform getelementptr-alike, so I may just punt and say all memory ops might alias, leaving the alias analysis to someone else.
There is one final issue with code motion. Suppose an array is indexed by a variable index in a loop. On arm64, this has to be implemented with two instructions: the first one removes the tag from the array, and the second is an indexed load. (On x86, the whole thing can be done with one addressing form). Ideally, having implemented code motion, I would be able to represent that as two instructions in the ir, and then hoist the former. (Similarly, atomic operations can have neither an index nor a displacement; so there is always something to hoist if you have an atomic in a loop.) But this has some untoward implications wrt the gc, so I may skip it :\
I find it useful as a bookmark, a way to search a curated portion of GitHub later on.
I use it as a “read later” flag when I see a link to a project there but don’t have time to fully consider it in the moment.
Note that I also tried to use Google Trends, but both keywords fell under the threshold for tracking over time!
You can also compare download count from package managers like NPM, but I didn’t have an easy way to do that for so many libraries
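For npm specifically, the registry does expose public download counts; a sketch of pulling them (endpoint shape per the npm registry API, which I believe is correct but haven’t exhaustively checked):

```python
import json
from urllib.request import urlopen

# npm exposes public download counts at api.npmjs.org; this builds the
# endpoint for a package and (optionally) fetches it over the network.
def npm_downloads_url(package, period="last-month"):
    return f"https://api.npmjs.org/downloads/point/{period}/{package}"

def npm_downloads(package, period="last-month"):
    with urlopen(npm_downloads_url(package, period)) as resp:  # network call
        return json.load(resp)["downloads"]

print(npm_downloads_url("cbor"))
# -> https://api.npmjs.org/downloads/point/last-month/cbor
```

Scripting this over a list of packages would give a rough popularity table, though it still misses non-JS ecosystems.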
Have you looked at https://github.com/vshymanskyy/muon ? While it is not popular, I think its simplicity makes it very cool
Oh this looks VERY cool. It might make sense to fork this and combine it with Max’s serializer. Thanks!
Author should probably actually read/implement/use both to see which is better, not only research what other people say…
This page seems to be missing any reference at all to prior art, e.g. dozens of projects like this:
https://domterm.org/index.html
https://github.com/unconed/TermKit - 14 years old, but looks better than the state of the art !!!
https://hyper.is/
https://github.com/ericfreese/rat
etc. etc.
https://github.com/oils-for-unix/oils/wiki/Interactive-Shell
Mentioned here:
https://lobste.rs/s/qwcov5/deja_vu_ghostly_cves_my_terminal_title#c_imqrsg
https://lobste.rs/s/xhcdg3/majjit_lsp#c_4agnik
As I mentioned there, I wonder if terminals/shells are stuck in the past because the efforts are so diffuse. That is kind of natural in open source, but also a problem
On static typing, there is almost certainly something that can be improved about the shell, but this page doesn’t address a key problem – static typing doesn’t scale to the software on even a single machine (on Linux or Windows), let alone multiple machines:
https://lobste.rs/s/sqtnxf/shells_are_two_things#c_pa4wqo
Hey thanks for all the links! Some great stuff I hadn’t seen before
Really appreciate the feedback!
For a while I was collecting “related work for new terminals/shells” here, so here are a bunch more links:
https://github.com/evmar/smash/blob/master/docs/related.md
Yup, that link is near the top of my wiki :-) https://github.com/oils-for-unix/oils/wiki/Interactive-Shell
I linked this survey from 2014, but I haven’t seen many others - http://waywardmonkeys.org/2014/10/10/rich-command-shells
I also remember this page about hyperlinks in terminals, which seemed to inspire a lot of argument at the bottom - https://gist.github.com/egmontkob/eb114294efbcd5adb1944c9f3cb5feda (adding this to the wiki)
@evmar @andyc @ilyash @xiaq
There are a bunch of detailed and comprehensive surveys here, and apparently people other than me actively edit them :-)
https://github.com/oils-for-unix/oils/wiki/Alternative-Shells
https://github.com/oils-for-unix/oils/wiki/Internal-DSLs-for-Shell
https://github.com/oils-for-unix/oils/wiki/Alternative-Regex-Syntax
https://github.com/oils-for-unix/oils/wiki/Survey-of-Config-Languages
The “Interactive Shell” one should be renamed / improved, as I mentioned … help is welcome
I somehow assumed that the first link here is the “canonical” de facto place. Thanks to Andy who already did the job!
And TIL that Steve Bourne is still with us, as in still living - I don’t know whether he’s on lobste.rs.
This is neat. Similar to some of my thoughts.
Sounds cool! Wish you the best.
Interesting idea. I’ll play with this sometime soon I think
I’m curious: Why are expressions converted to a linear RPN form, rather than something like a parse tree? Is it for performance?
This reminds me of the way queries are handled in Couchbase Lite (my day job). The various language bindings have “fluent” APIs for describing & building a query; these produce an intermediate JSON representation that the cross-platform core code traverses like an AST to generate SQL.
The JSON representation is mostly composed of arrays whose first elements are strings identifying the operation — very much inspired by S-expressions. For example,
[“<“, [“$X”], 5] to represent $X < 5. Later we added support for writing queries in N1QL, aka SQL++, a SQL extension that operates on JSON-like data instead of columnar data, and it was just a matter of implementing a PEG grammar that produces our JSON format.
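For illustration, a toy sketch of how such an S-expression-style JSON array might be walked to emit SQL text. This is not Couchbase Lite’s actual translator — the operator table, quoting, and function names here are made up for the example:

```python
import json

# Hypothetical operator set for illustration; a real translator
# handles many more operations and proper SQL identifier quoting.
BINARY_OPS = {"<", ">", "<=", ">=", "=", "AND", "OR"}

def to_sql(node):
    """Recursively translate an S-expression-style JSON node to SQL text."""
    if isinstance(node, list):
        head = node[0]
        if isinstance(head, str) and head.startswith("$"):
            return head[1:]                      # property reference like ["$X"]
        if head in BINARY_OPS:                   # binary op like ["<", lhs, rhs]
            return f"({to_sql(node[1])} {head} {to_sql(node[2])})"
        raise ValueError(f"unknown op {head!r}")
    if isinstance(node, str):
        return "'" + node.replace("'", "''") + "'"   # naive string literal
    return str(node)                             # numbers, booleans

query = json.loads('["<", ["$X"], 5]')
print(to_sql(query))  # (X < 5)
```

The nice property is that the JSON arrays are already an AST, so the translator is one small recursive function rather than a parser.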
Fantastic question! Yep, the first version I designed was a parse tree. I roughly had a msgpack ext type for each AST node. For example, I had a dedicated “function” ext structure that mapped args to bodies.
The problem with this approach was that msgpack generally incurs a heavy 3-byte overhead for each ext type. Three bytes at every AST node was really not a great design.
But even worse, the parse tree was more complicated to explain and implement. To give you an idea, the docs were roughly 4x longer before I switched to RPN.
Anyway. To make things simpler, I had the idea of just encoding the tokens in an “expr” msg ext type. To make the implementation simpler, I wanted to get rid of operator precedence. And once I landed on RPN, a lot of other nifty features got unlocked.
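To make the precedence point concrete, here is a toy RPN evaluator — purely illustrative, since scrapscript’s real encoding packs tokens into a msgpack ext type and looks nothing like this. Operands push onto a stack and operators pop; no precedence table is needed because the order is already explicit in the flat token stream:

```python
# Toy RPN evaluator: a flat token list stands in for a parse tree.
# Token conventions here (strings are variables, other values are
# literals) are made up for this sketch.
def eval_rpn(tokens, env):
    stack = []
    for tok in tokens:
        if tok == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif tok == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif isinstance(tok, str):      # variable lookup
            stack.append(env[tok])
        else:                           # literal
            stack.append(tok)
    return stack.pop()

# x + 2 * 3 needs no precedence rules once linearized:
print(eval_rpn(["x", 2, 3, "*", "+"], {"x": 1}))  # 7
```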
Prediction: Scrapscript 2.0 becomes a concatenative language ;-)
Are you making a joke with linking to “sound systems”? idgi :(
But also it’s false that a system with a hole isn’t a sound system. A system with a hole isn’t a complete system, which is a very different property!
The hole itself is neither “sound” nor “unsound”; soundness is a property of the logic. It’s like saying “5 is not an imperative programming language.” Values aren’t languages, languages are languages.
You may be interested in “session types”, which are meant to offer protocol guarantees across programs. Still a research field but we might see them in production in a decade or two. Sadly uptime guarantees are (probably) statically impossible; you can’t typecheck away an asteroid strike!
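For a rough flavour of the idea — real session types check this statically at compile time, while this Python sketch (with made-up state names) only enforces the protocol at runtime — each protocol state is its own type, and each step returns the next state, so sending out of order simply has no method to call:

```python
# Typestate-flavoured sketch of a request/response protocol.
# All class and method names are invented for illustration.
class Closed:
    def connect(self):
        return AwaitingRequest()

class AwaitingRequest:
    def send_request(self, payload):
        return AwaitingResponse(payload)

class AwaitingResponse:
    def __init__(self, payload):
        self.payload = payload
    def receive_response(self):
        # Hand back a fresh Closed state plus the (fake) reply.
        return Closed(), f"echo: {self.payload}"

conn = Closed().connect()            # Closed -> AwaitingRequest
pending = conn.send_request("ping")  # AwaitingRequest -> AwaitingResponse
_, reply = pending.receive_response()
print(reply)  # echo: ping
```

Trying to call `receive_response` before `send_request` fails immediately, because the `AwaitingRequest` state has no such method — a session-type checker would reject that program before it ever ran.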
Hi Hillel!
Yes, it was a bad joke – I will update it for clarity.
Good catch. Can I say the hole is “soundless wrt its system”? Kinda like saying that S is independent of F?
You are spot on! My next post on this topic will be a brief comparison of session types, temporal type theory, TLA+, and a few other tools.
Your amazing TLA+ guide has been super helpful btw! Anybody reading this comment should buy your book :)
When I started my career I wanted to help the most people. Instead of going to grad school and tinkering on something a few hundred people might use, I went to Apple. (I could have gone to Microsoft, but I have standards.)
Focusing on rough-edged homemade tools for other hackers, and calling that better than the stuff the other 8 billion people can use, seems a bit myopic IMO.
Claiming 8 billion people can use any product is a stretch, especially ones that cost thousands of dollars, need a credit card, don’t work without internet access and iTunes, delete accessibility tools every other update, and give root on your device to governments if they ask nastily enough.
I love my Apple products! Thank you for helping make them.
I didn’t mean to belittle you or your employer. I don’t think wigwams are “better” than battleships. I just think wigwams are fun and I want people to work together.
Hey, I apologize for the tone of that comment. It was late and I was kind of grumpy. Sorry!
The author and I have quite a different idea about what qualifies as a small project. Like… designing a whole new programming language and its corresponding compiler toolchain is not a small project in my world.
Nor is selling consumer hardware (like the Playdate, as taylor.town’s post linked above suggests) or rebuilding cities (“tools for converting suburban sprawl into human-scale walkable infrastructure”) – the selection criteria for projects seem very vibes-based.
Author here! Very very much vibes based. I think I was doing relative comparisons, ie playdate vs ps5. I’ll give “smallness” some more thought
The playdate is not the opposite of a playstation, it’s a hipster gameboy built by Teenage Engineering, known for overpriced, overhyped toys made in small numbers for those with lots of disposable income. I think the Arduboy is a lot closer to the vibe you’re going for.
I think this is a succinct example of one of my frustrations with the related polemic article, bemoaning literally everything that has been built by more than a handful of people at a time.
There are two very different properties here: that it be affordable; that it be “DIY”.
I am completely in favour of things being more affordable. Whether that means the price is lower, or that there are tax breaks or even grants and safety nets for things that people need but which they cannot afford. I want to live in a society where people aren’t made to suffer just because they’re not financially well off for whatever reason. So, affordable, in the most all encompassing sense, seems valuable and laudable.
Let us set aside the notion that maybe better public transit and urban structure would be a greater service for folks getting to work (emphatically not something you can knock together on your own), and assume that cars are the goal. Why on earth would DIY cars be an improvement? Cars are necessarily complicated machines. Modern cars are vastly safer and require vastly less regular and less critical maintenance than they did even twenty years earlier. Are you going to DIY antilock brakes? Traction control and collision avoidance systems? Multiple airbag systems? Seatbelts? I cannot imagine road safety is going to be improved by people knocking together DIY vehicles, whether you’re the one in the DIY vehicle or not.
Are you going to be able to get any kind of fuel economy (and thus reduce emissions) without computerised engine control systems? Should you have to be capable of writing your own firmware? If we’re not talking about petrol-driven cars, but rather battery electric systems, then: are you going to design the charging system? Are you correctly ventilating the batteries? EVs have fewer moving parts, but they are still complex machines that require a lot of skills and precision manufacturing.
I just don’t understand why things built by organisations of people, over many years and with a lot of investment, would be prima facie inferior to something you can do yourself. This is not a defence of capitalism, or wealth inequality; I don’t think either of those things are required to get modern motor vehicles as we currently build them.
Author here!
Thanks for the well thought-out response. I actually agree with you for the most part. I’m on mobile right now so I’m limited in what I can write
By “DIY motor vehicles” I was actually just thinking of variations of e-bikes that have standardized battery packs. Hope that clears some of the confusion!
DIY battery banks are a deep rabbit hole with plenty of makers (myself included) building them. They tend to exist at 2^n multiples of lithium cell voltages and have terminals. What would a standard standardize that hasn’t already been locked in by physics and the shapes of bolts? The variance in shape, size, mass, capacity, current delivery, etc. is a feature, not a bug.
Standardization also kind of implies some governing committee, which contradicts the whole “stuff is cool if it’s got good vibes and it’s made by 8 people” shtick.
Huh? Standards work both ways. I imagine plenty of garage tinkerers benefitted from the 20th century battery size standards.
I don’t think one should have to invent ASCII to create a useful, small program.
But I’ll trust you that non-interoperable e-bike batteries have more to do with engineering than trying to capture a market. This can also be true.
18650 cells are very standard, BMSs are very standard and comically cheap, voltages are very standard, what’s left to do other than package them and sell them? (Which plenty of companies also do). What’s left in the todo column for “standardized battery packs for ebikes?”
I don’t understand this metaphor at all. A wigwam is a house. Are they thinking wigwam is another word for canoe (which is itself an Arawakan word)?
No wigwam has ever become a battleship or even a boat.
Author here! I wanted to illustrate that wigwams are both different in kind and degree from battleships
Houses rarely become boats, but there’s no reason why they can’t in principle
Indeed - the 100 Rabbits folks turned their boat into a house and have built software on the wigwam list. Perhaps this is the path to the conceptual through-line?
Maybe I am nitpicking now, but didn’t they turn their boat into a home, and not a house?
It’s still a boat, in the water.
On the linked page, they compare it to a house, as something different:
These distinctions are indeed important. A house is a structure like a wigwam. And as they say, home is where the heart is.
The metaphor would probably be more clear if it just referenced particular types of living and working spaces. For example, could Common Lisp be a Mies van der Rohe structure full of incredible modularity and audible leakiness built on top of a complex substrate of steel and glass?
Please forgive me for my own half-baked analogy.
Perhaps a barge or a houseboat is a clearer analogy (folks I’ve known who live on houseboats definitely feel like the DIY type!) I get there are two dimensions here (size and kind) but the simplicity of just measuring it in one feels quicker to understand at first glance (without needing to dive into comments!)
love the idea though, I may shoot you an email with some ideas!
My best guess is they were going for a size and sturdiness analogy, rather than one based on seaworthiness or “boatishness.” Not to say I totally understand why they went that direction…
Doing lots of interactive charting work for my employer
Home improvement projects and writing in between
Strangeloop!
Strangeloop!
Strangeloop!
Strangeloop!
Not strangeloop :(
Just wrapped up my online hexagonal chess site over the weekend!
This week I’ll be decluttering, launching my meme newsletter, and catching up on client work.
Failed my most recent saving throw, so I’m beginning to wrap up at my current role, and then I’m on the job hunt. Trying to leverage the moment to make a shift into either: software wrangler for even bigger nerds (researchers/scientists, etc.) for tools/visualisation/workflow automation, or QA automation / tools engineer in the game dev world.
The former proves difficult to find, the latter extremely difficult to break into…
I’m super curious to hear more, if you’re willing to share.
Well I’ve gotten some pretty good work done on Garnet the last few weeks by procrastinating on tackling monomorphization again. The code formatter now outputs valid (if poorly formatted) code, the type parameter syntax for functions is now less cursed if still not as pretty as I wanted, there are module-level doc comments that only have a 35% chance of scourging my soul, some syntax ambiguities have been resolved at the cost of making newlines significant, stuff like that. Soooooo I should really suck it up and do/redo/unheck monomorph, so I can say that the current type system Works and can try to start adding more features. Like, you know, a borrow checker, which is ostensibly the point of this entire language. That’s gonna be an adventure.
Also I guess work. I feel a little spoiled by my team lead giving me a list of things that need doing on the project and saying “just choose whatever you feel like”, but I guess it all has to be done anyway, and what I consider fun and the rest of the team considers fun tends to be pretty different. So those suckers get to do like, actual robotics code stuff, and I get to screw around with cool sensors.
Also play Tunic. Turns out it looks like a Zelda clone with cute little foxes, but it’s actually halfway between Dark Souls and Fez. …with cute little foxes. Translating the in-game language has been more challenging than expected, but I wanna keep grinding away at it a bit more before I inevitably give up and find a guide.
Also, help my partner jobhunt. Money is gonna get painful soon. Anyone out there know of anyone who wants to hire a React dev?
I’ve been having the most luck with these sites:
Psyching myself up to work on the steel bank common lisp compiler, python (no relation to the language). It is really poorly abstracted and its design is outdated, so this is not such a fun time. But sprucing it up seems like the best practical way to get better-performing common lisp code, so I have not much recourse.
One piece is avx-512 support. This is mostly busy-work, but it’s a bit frustrating, because I had just about done adding mask register support when I had a data-loss oopsie, so now I have to re-do that work.
The other, a lot more significant, is improved concurrency support. There are a number of pieces to this, all of them annoying:
I have to define a concurrency semantics. I really appreciate the work done by c++ folk in this area—really, it’s the only remotely good thing to come out of c++—but there are issues. One is the raft of issues with relaxed atomics—out-of-thin-air results (oota) and reads-from-untaken-branches (rfub); joy! The other, more easily solvable, is that races for plain, unsequenced accesses are undefined behaviour, which I consider to be utterly unconscionable. But I don’t want to make plain accesses the same as c++11 relaxed-atomic ones, because those are too strong, and inhibit some desirable optimisations; so something in-between is needed. I will probably not ultimately expend too much effort on this because I cba, but it’s annoying.
I have to make atomic operations work uniformly on all places—if you can read and write it, you should also be able to cas, exchange, or increment it. Unfortunately, python is not set up to do a good job of this. Atomic cas and increment are currently defined in a somewhat ad-hoc fashion, and code for reads and writes is duplicated with macros. The ideal solution would be something uniform, similar to llvm’s getelementptr (coupled with a representation—e.g., both specialised arrays and structures can contain unboxed integers), to avoid an m*n problem. But this would require a lot of work on python, which I’m not sure I want to do, so I am hoping I can just figure out how to squish the new cas/increment/exchange to the existing mass of macros and slip this past the maintainers.
On a related note, I need to attach ordering information to these memory ops, hopefully in a uniform fashion, which will be annoying.
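The m*n point above can be pictured with a hand-wavy sketch — SBCL’s actual internals look nothing like this, and the names here are invented. If every kind of place exposes one uniform load/store descriptor, each operation can be written once against that descriptor instead of once per place kind:

```python
from dataclasses import dataclass
from typing import Any, Callable

# A uniform "place" descriptor: one load and one store closure,
# regardless of whether the slot lives in an array, a struct, etc.
@dataclass
class Place:
    load: Callable[[], Any]
    store: Callable[[Any], None]

def array_place(arr, i):
    return Place(load=lambda: arr[i], store=lambda v: arr.__setitem__(i, v))

def dict_place(d, key):  # standing in for a structure slot
    return Place(load=lambda: d[key], store=lambda v: d.__setitem__(key, v))

# Each operation is written ONCE against Place, so m operations and
# n place kinds cost m + n definitions instead of m * n.
# (No real atomicity here; a compiler would emit the right instructions.)
def incf(place, delta=1):
    place.store(place.load() + delta)

def exchange(place, new):
    old = place.load()
    place.store(new)
    return old

xs = [10, 20, 30]
incf(array_place(xs, 1), 5)
print(xs[1])  # 25
```

This is roughly what a getelementptr-alike buys: the “what slot is this” question is answered in one spot, and every read/write/cas/increment shares it.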
Improved double-word atomics. Ok, this one actually sounds fine to implement; the main issue is just to figure out the right user-level interface for associating two slots of a structure, so it’s possible to operate on both of them atomically. But I think I have something reasonable in mind, and am not too worried about that.
Finally, having done all the requisite work to represent memory operations and their ordering, I want a code motion pass. This I can hopefully just copy out of muchnick, but there remains the issue of alias analysis for memory ops. Which would really be helped by a uniform getelementptr-alike, so I may just punt and say all memory ops might alias, leaving the alias analysis to someone else.
There is one final issue with code motion. Suppose an array is indexed by a variable index in a loop. On arm64, this has to be implemented with two instructions: the first one removes the tag from the array, and the second is an indexed load. (On x86, the whole thing can be done with one addressing form). Ideally, having implemented code motion, I would be able to represent that as two instructions in the ir, and then hoist the former. (Similarly, atomic operations can have neither an index nor a displacement; so there is always something to hoist if you have an atomic in a loop.) But this has some untoward implications wrt the gc, so I may skip it :\
Best of luck! It sounds like you understand the problem pretty well and just need to mash the keyboard a bit :)