I struggle to comprehend getting this upset about Apple quietly choosing to scale back, delay, or entirely cancel some of their LLM slop features. Even in the scripted demos all of it looks like useless poison, and the less of it that’s genuinely shipped in MacOS and iOS the better.
If anything, joining the GenAI hype-train is the part that damaged Apple’s credibility, and it’s heroic that someone at Apple leadership may be exhibiting the good sense to not release garbage that doesn’t work.
quietly choosing to scale back, delay, or entirely cancel some of their LLM slop features
I feel like the complaint in the blog post is less about that, and more about how they are doing the opposite - they are loudly promoting it, while they’ve delayed it twice without ever showing an actual demo of it working. Everything shown so far has been pre-recorded concepts.
In fact, I feel that Gruber doesn’t really care what’s promised so much as that they keep promising it without showing any sign of ever delivering.
They’re never actually going to do it! It’s a conspiracy to give MS FOMO to put more AI in Windows so it becomes worse quicker, and more people will want to use Apple!
In Lil, a scripting language which includes the concepts of a database-style table and a query language for it, I might tackle this problem something like the following:
txs:insert date amount text with
 "2022-05-13T11:01:56.532-00:00" -3.30 "Card transaction of 3.30 CAD issued by Milano Coffee Roasters VANCOUVER"
 "2022-05-12T10:41:56.843-00:00" -3.30 "Card transaction of 3.30 CAD issued by Milano Coffee Roasters VANCOUVER"
 "2022-05-12T00:01:03.264-00:00" -72.79 "Card transaction of 72.79 CAD issued by Amazon.ca AMAZON.CA"
 "2022-05-10T10:33:04.011-00:00" -20.00 "e-Transfer to: John Smith"
 "2022-05-11T17:12:43.098-00:00" -90.00 "Card transaction of 90.00 CAD issued by Range Physiotherapy VANCOUVER"
end

# a dict mapping specific dates to tags:
date_tags:raze insert k v with
 "2022-05-12T00:01:03.264-00:00" "things"
end

# a dict mapping text wildcard patterns to tags:
text_tags:raze insert k v with
 "*Coffee*" "eating-out"
 "*Range Physio*" "medical"
end

# tag transactions by exact date, and then by text pattern:
txs:update tag:date_tags@date where date in date_tags from txs
each v k in text_tags
 txs:update tag:v where text like k from txs
end

print["untagged"]
show[select where !tag from txs]
print["totals by tag"]
show[select tag:first tag total:sum amount by tag where tag from txs]
But Lil isn’t just a scripting language; it’s also part of the Decker ecosystem, which has its own facilities for representing tables as manipulable and automatically serialized “grids”. A Decker-based user interface might contain a script something like this to compute the two output tables from the inputs:
on view do
 txs:transactions.value
 date_tags:raze dates.value
 text_tags:raze texts.value
 txs:update tag:date_tags@date where date in date_tags from txs
 each v k in text_tags
  txs:update tag:v where text like k from txs
 end
 untagged.value:select where !tag from txs
 totals.value:select tag:first tag total:sum amount by tag where tag from txs
end
In the top example, I’m using Lil’s insert query form; a table literal much like your list-of-dicts in the Clojure version. It’s possible to hardcode tables like that in scripts within the Decker environment, but it’s more common for tables to be “stored” in grid widgets. The data within a grid can be directly edited: http://beyondloom.com/decker/tour.html#Grids
And modifying a grid manually can fire events which cause other “views” to be recomputed: http://beyondloom.com/decker/tour.html#Scripting%203
I think this is still missing the point. If I’m looking at a row in the “untagged” view that you created, how do I add a tag to it? My goal was that you just directly edit the tag, and that flows back to the underlying data (demos in https://lobste.rs/s/jcsfbx/program_is_database_is_interface#c_i80teg). As opposed to having to navigate to the source table and scroll around for the row in question.
There are some previous research systems that do this using lenses, but I found that the lens laws are often violated by regular UI idioms, and that just tracking per-value provenance was good enough most of the time.
I am intensely sympathetic to the ideas of “programs as a home-cooked meal” or the linked article about “barefoot software developers”; I think making personal programs to solve personal problems is very empowering, and building tools to make that sort of independence and “sovereignty” more accessible is very valuable.
I am, however, consistently disappointed and frustrated at how often the people interested in this ethos choose to link those ideas to code generation via LLMs. The most advanced and popular models are centralized, opaque, for-profit services which, like the other software Robin decries, constantly churn and enshittify. They are by design not knowable, not stable.
Even in the more limited domain of “open-weight” models that can be self-hosted and “pegged” at a frozen state if desired, I feel using these tools is actively disempowering: it discourages deep understanding and learning in favor of trusting and leaning upon an unreliable, inscrutable oracle shaped by the priorities and values of corporations with the capital to train those models.
Moreover, they represent a purely extractive form of participation in online communities. Asking a question in a public forum and getting suggestions from other people has the side-effect of enriching the body of knowledge that future people could stumble upon when faced with similar problems or interests. Asking an LLM to cobble together a statistical approximation of an answer to a question (itself usually derived from public exchanges) leaves no traces of value behind for others. A search engine brings people to new places; a scraper to feed LLMs only takes.
I have been a professional software developer for 20 years. I like to think of myself as being up on the world of web technologies. And I just truly don’t understand how I’m supposed to use a web app offline. How do I get to it in order to use it? Am I the only one with this usability disconnect? Because I love this, but it’s never made sense to me.
Theoretically, you use the web app once online. The app is cached forever, and works on local data from there on out, and syncs local data to the server when you’re online.
In practice, nothing is ever cached when you need it to be, so you have a store of offline data that you can only use when you’re online to reach the app. (This ought to be fixed by now, by putting everything the app needs to start up in the service worker, but it still seems like it always goes wrong somehow.)
It seems to me like browsers aren’t really made to handle “owning” data, it’s all set up and designed with the assumption that the source of truth is somewhere else. I’d never trust a browser to be the only place where something important was saved. But perhaps that’s just me?
I think the idea is generally that local storage is only a temporary cache, and gets synced with the server when online. “Local-first” isn’t local-only!
Often, as a user, I’d rather have a somewhat fragile local cache than no functionality at all when I go offline for a bit. But of course that depends on the app domain.
But, it’s an interesting idea… would you trust a browser-based app that used (say) a local sqlite db rather than LocalStorage, and didn’t have a server-based backing store at all?
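Yet another alternative to having a server side is TiddlyWiki-style self-replicating single-file programs that simply happen to run in a web browser.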
I like it. Just “Save Page As…” and done. Add lightweight peer discovery and a replication protocol, and that could be a very interesting evolutionary niche.
Xememex is a multiuser TiddlyWiki from Intertwingled Innovations. It allows large groups of people to work together on intertwingled wikis that can share content. It is implemented as a serverless application on Amazon Web Services. […] Xememex is currently only available under commercial terms from Intertwingled Innovations.
Intertwingled, eh? Meanwhile the original Xanadu says “WE FIGHT ON”. I’d be more interested in the cult of Nelson if this community wasn’t so doggedly doubled down on commercial licensing and weird ideas about copyright.
Somewhere between absolutely not and probably not. The browsers themselves are the part I don’t trust. Safari will nuke your site’s storage after some number of days without use, so it’s unacceptable to rely on a local-only approach in that browser. Beyond that, their IndexedDB implementation is buggy, and those bugs have gone unfixed for years.
On Android, device manufacturers sometimes purge browser storage when the device is low on disk space.
Firefox keeps “local files” locked inside the “origin private file system” (OPFS), so you still can’t back up your data easily. Even though they’re supposedly the browser of user freedom, allowing a user to store actual local files in the real file system is Too Dangerous.
Only Chrome provides an API that will let a browser app read & write to a file or directory that actually exists in the real world.
So maybe I would trust that local-only app in Chrome.
You visit the URL, but these days the browser doesn’t load assets directly from the web; it asks the service worker what to do first. The service worker provides the stored assets, and the app works even though you have no internet.
This is one of the great benefits of arraylangs: the economy and expressive density of these languages makes them very amenable to expressing complicated ideas in a precise way, whether mentally, on a small scrap of paper, or in verbal conversation with other programmers. It’s small wonder, given APL’s origins as a notation intended for handwriting on a blackboard.
Folks who make knee-jerk accusations of arraylangs being “write-only” would be shocked at how liberating it is to work at a company where most developers are fluent in an APL, and the semantic equivalent of dozens of lines of C, JavaScript, or Python can be conveyed precisely with a few words or symbols.
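Bullshit.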
An HTTPS-only world is a world ruled by a handful of completely centralized certificate authorities, where the only usable protocols are tied to a neverending treadmill of new cipher suites that consume ever more compute resources and rest on ever more gigantic codebases that are never properly audited; where older or low-power devices cannot communicate at all by design.
Vendors for commercial operating systems, ad-driven web browser ecosystems, mobile devices with intended lifespans measured in quarters, and security consulting services collectively salivate at this opportunity for inexorable planned obsolescence and complexity growth, and someday they will doubtless get their way. Don’t kid yourself for a moment that this will mark a joyous new era of consumer empowerment and peace; it will be another ratchet toward making the entire web a walled garden.
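Bullshit.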
For apps, there is no requirement that TLS use the current centralized PKI. It’s fine if your server has a self-signed key and your app uses cert pinning, for example. I’ve used TLS to implement P2P apps.
Newer asymmetric ciphers tend to be more efficient, e.g. Curve25519 vs. RSA. Symmetric ciphers getting more expensive is mostly a result of needing a higher level of security as CPU speeds increase. (And hardware acceleration of AES is pretty ubiquitous today.)
There are non-gigantic TLS implementations. BearSSL’s README says “a minimal server implementation may fit in about 20 kilobytes of compiled code and 25 kilobytes of RAM.”
Every low-power embedded CPU with IP support that I know of supports TLS and has it available in its standard library, even the ESP8266s in the light bulbs in my house.
Given that I see posts here about people building networked apps on MacOS 9 and OS/2, I’m not sure which “older devices” are locked out of TLS.
I see posts here about people building networked apps on MacOS 9 and OS/2, I’m not sure which “older devices” are locked out of TLS.
“Locked out” might be a strong term, but “very difficult” isn’t. These devices have the RAM and CPU to perform modern TLS, but they need modern software to implement it, which is not going to come from their vendors. People - typically those on lobste.rs - end up with impressive workarounds like Crypto Ancienne, which works by having the browser talk to an unencrypted local proxy that performs TLS on its behalf (meaning the browser can’t do any integrity checks). That in turn hits a bootstrapping problem, because users need to download the latest security suite somehow. Unencrypted connections work out of the box, but encrypted connections require users to constantly be on the hunt for this year’s best hack.
At some point the choice is between ensuring that network operators don’t know which specific blog pages my readers read, or allowing them to be read on any old system out of the box. Networks are going to know the user accessed my blog via DNS and IP.
Ok, but I am mystified by the importance applied here to retro systems people run as a hobby. It’s not like the people running them don’t have access to, say, a cheap PC.
That’s fine, but I think it’s Internet_Janitor’s point in a nutshell. If you’re okay buying a relatively new device that is required to access the Internet, fine; just note that it’s giving the vendors of that device a lot of leverage, because you won’t be able to participate in the global network without accepting the terms they impose on you.
The question is when it’s okay to impose such a requirement on others, even if those are terms you would willingly accept.
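How “relatively new” does a device need to be to make a TLS connection? 20 years?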
I have no idea what this means. Are the terms “you must use TLS”? How is that different than “you must use TCP”?
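My Symbian phones from ~2009 and BB10 phones from ~2013 have TLS, but can’t connect to most HTTPS-only sites.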
I mean things like “the latest version of our product requires you to sign-in with an email address which we will use to track your local computing activity in a personally identifiable way”, something which all big tech vendors are currently pushing but didn’t exist ~10 years back. What they push will change year to year, but they’ll always have something to push. The ability of users to resist these pushes is in direct proportion to how many alternative options they have.
not much “importance” is required to outweigh the supposed benefit of forcing encryption rather than just supporting it and using it by default.
most sites that people visit are for leisure anyway, so I guess the importance of hobbyist enjoyment would be ranked similar to the ability to access a site like lobsters.
there’s also the other side of the equation, where potentially important materials may be only accessible on the web via plain HTTP or FTP.
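THIS! TY VERY MUCH. I wish I could give awards or upvote 100 times.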
I admire many of the philosophical and aesthetic choices in Gemini, but I remain frustrated at the choice of requiring TLS. Writing a rudimentary Gemini client is only “simple” if you can rely upon an existing TLS implementation, an enormous external dependency which attaches a similar sort of “complexity ratchet” to the protocol as encrypted HTTP.
Nevertheless, many Gemini enthusiasts choose to make their capsules also accessible to gopherspace; the existence of Gemini seems to have greatly increased enthusiasm for the protocol it was most interested in updating!
There were (and to a degree, still are) attempts to bring TLS to gopher, but it’s not as simple as just running the existing protocol over TLS. To address this, Solderpunk thought up a new protocol, similar in nature to gopher, but designed around TLS from the start. There is a version of Gemini sans TLS known as Spartan [1] if you want it [2].
During my time with Gemini, I found two groups of people who wanted to remove TLS. The first group wanted it to be at least optional, because TLS was too complex to handle and no one could implement TLS by themselves [3]. The second group felt TLS was too complex and should be replaced by some bespoke encryption system they had just read about (or developed) [4]. I always found it amusing that the two groups wanted to remove TLS for different reasons. It sounds like you are in the first group.
[1] It’s technically not Gemini sans TLS, but close enough.
[2] I don’t think it’s as popular as Gemini though.
[3] Without realizing that most people don’t bother with implementing TCP by hand. And yes, there were a few people who felt that TCP should have been optional.
[4] One person actually did implement their own encryption system for Gemini, only to realize after the fact that it was a mistake to do so.
TCP has a fixed set of requirements. TLS incorporates an open-ended and ever-growing collection of cipher suites and relies upon regularly updated certificate data from trusted authorities to function.
Implementing TCP correctly by yourself is merely difficult, and it’s possible on quite humble microcontrollers. Implementing TLS correctly by yourself is a herculean task with a limitless unavoidable maintenance treadmill and comparatively quite demanding resource requirements. These types of dependencies are not the same.
Had TLS been optional for Gemini, then there would have been significant complaints that it should have been TLS-only from the start (a blog entry of mine made it to the Orange Site, and half the conversation was about my site’s lack of HTTPS at the time). So there is no winning.
On TCP: I never said TCP was easier or harder than TLS, just that every design choice at the time was decried by someone somewhere. Also, TCP does change; it’s just that the changes made to it are smaller, backwards compatible, and slower:
STD 7: Transmission Control Protocol (TCP). W. Eddy, Ed. August 2022. (Obsoletes RFC0793, RFC0879, RFC2873, RFC6093, RFC6429, RFC6528, RFC6691) (Updates RFC1011, RFC1122, RFC5961) (Also RFC9293)
Note all the Obsolete RFCs there. And TCP is assumed to run on top of several other protocols (IP and Ethernet for instance)—how deep down the implementation rabbit hole do you want to go?
My final comment on this—just do it! Make the Gemini protocol without TLS and start pushing it. See how far it goes. My other major complaint about the development of Gemini (at the time) was the amount of talking going on, with no one bothering to try the things they were arguing for or against.
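My ideal would be gemtext-over-HTTP.
How about markdown over HTTP? More palatable than gemtext and still an easy task to implement for one person.
That sounds good in the sense that you meant it, but technically markdown is a superset of HTML, so it’s not THAT implementable.
But get rid of the HTML part and I’m all for it.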
Yeah, I meant without the HTML. A reasonable markdown-like markup language with an unambiguous definition that multiple people can implement from the spec and get the same results. Something simpler, more regular, and more composable than CommonMark would appeal to me — I think djot without the HTML would be nice — but everybody knows markdown, there are multiple existing implementations, it’s an easier sell.
Sometimes, depending on what I’m working on. Test fixtures, REPLs, profilers, linters, logs, printlns, and even “debuggers” have their advantages and disadvantages.
If everyone you work with thinks one specific tool or technique is indispensable, you may not have a very diverse team, or a very diverse set of problems to tackle. (Or both!)
Most APL-family languages use idiomatic compositions and implicit mapping instead of a first-class comprehension syntax. For example, in K the @& composition (“at-where”) can be used to filter a list by a boolean vector, which in turn is produced by = implicitly “spreading” the scalar 30 to compare it to every element of a list:
l:,/inputlist
l[;`name]@&30=l[;`age]
Lil has a SQL-like syntax which can be seen as a generalization of Q’s qSQL query templates. The most natural way to approach the given problem would be to form a table from the combined input list-of-list-of-dicts and then query it; execution flows right-to-left as in many APL-family languages:
extract name where age=30 from table raze inputlist
In terms of expressiveness, this is quite similar to LINQ, but semantically it’s worth noting that as in the K example above, the computation of the filter expression (age=30) occurs eagerly and across the entire column at once, instead of row-by-row.
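For instance, comparing a scalar against an entire column in Lil conforms elementwise, yielding the boolean mask the query uses to select rows. A small hypothetical snippet (the names are illustrative):

ages:31,30,25,30
30=ages  # (0,1,0,1): the whole column is compared in one operation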
Giving Lil query syntax helps compensate for its relatively verbose function syntax; the following would be valid Lil, but it is both bulkier and much slower than a query:
raze map[inputlist
 on _ sublist do
  flatmap[sublist
   on _ user do
    if user.age~30 user.name else () end
   end
  ]
 end
]
Always nice to see practical examples of arraylangs!
In Lil, the first part is reasonably compact if I fuse together the data-cleaning and sound-finding operations, using a shifted vector comparison since Lil lacks eachprior. Takes about 17 seconds to process 133,855 lines of dictionary data on my laptop, which is slower than I’d like:
d:56 drop "\n" split read["cmudict-0.7b.txt"]  # skip the dictionary's file header
followers:raze each x in d
 t:2 drop " " split x            # the phoneme fields which follow each word
 v:t in "R","L"                  # flag every R or L phoneme
 extract where (0,v)&!v from t   # keep non-R/L phonemes immediately following an R or L
end
mostcommon:extract orderby v desc from
 select k:first value v:count value by value from followers
The second part can leverage Lil’s query syntax, and is pretty much instantaneous for a mere 3004 lines of CSV data:
t:readcsv[read["ICC Test Bowl 3003.csv"] "ssiiiiisffiiis"]
sorted: select orderby Wkts desc orderby Ave asc where Wkts from t
best: select where !gindex by Wkts from sorted
bestInClass: select where each v i in Ave v~min (i+1) take Ave end from best
allWkts: sorted.Wkts
mostCompetitive: extract where (gindex=0)&15<count gindex by value from allWkts
mostCompetitiveBowlers: select where Wkts in mostCompetitive from best
gap: min allWkts drop 1+range max allWkts
I’m most looking forward to proper pen support and the more powerful clipboard API, but having more portable abstractions for filesystem traversal and standard system dialogs is a nice bonus; SDL3 neatly sews up nearly everything that I previously had to write tedious platform-specific polyfills for in my SDL2 applications.
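Here’s an alternative K8s comic I found; it’s much shorter but still conveys what you need to know:
Shorter Kubernetes comic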
(Serious observation: IMO you need a compelling reason to use K8s, and an experienced and skilled team running it, even managed options like EKS. Most places I’ve worked would have been far better off with something simpler and easier.)
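What would your simpler alternatives be?
Assuming you wanted something in the cloud, I’d reach for Fargate if you want containerised workloads, and bake AMIs or build packages for EC2 if not.
Fair play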
Yeah don’t get me wrong: K8s is amazing tech. And I’ve worked places where it’s a fit for what they’re doing.
But!
If you don’t have a compelling reason to use it, you’ll just be burning money and building a stack that’s less reliable than you’d get from a simpler alternative.
My experience is that you’ll need a team of specialists setting it up and keeping it running smoothly - essentially a platform team building tooling, docs, skeleton apps, etc. More people during setup than maintenance, for sure, but it’s expensive.
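One beige box underneath Bob’s office desk.
AWK, which is surely available in any environment with bash, has far fewer associative-array foot-guns; worth considering as a practical alternative.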
Lil expressions flow right-to-left, like most members of the APL family. A minor quality-of-life feature I’ve adopted in the various Lil REPLs is the convention of binding the last expression of each request’s result to the name _, which allows you to make “forward progress” interactively without backtracking to define intermediate variables every time. If you realize you’ll need something again, you can name it after evaluating the expression which produced it:
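(A hypothetical transcript, just for illustration:)

nums:1,7,2,9
extract value where value>3 from nums  # evaluates to (7,9), which is bound to _
big:_                                  # name the previous result before it is overwritten
count big                              # 2

Ivy does the same thing.
Nice! The Python REPL does the same thing, and it’s very handy.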
oh wow this is kinda nuts, I used to play around with Earnest’s oK implementation a lot for esolang / code golf stuff when I was younger, and until now I hadn’t realized he was the one behind Decker & Lil. Need to play around with that sometime for sure…
Lil probably won’t be that attractive to code golf enthusiasts (taking, for example, extract orderby value asc from x in Lil versus x@<x in k), but I do hope that folks who enjoy array languages nevertheless find things to like about it. :)
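Oh yeah I wasn’t gonna do golfing with it, if anything I prefer the extra readability for actual projects I want other people to be able to read 😅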
Huh, the highlight for me was the “staff pick” that mentions TiddlyWiki. I thought it was just a wiki, but it seems that you can develop simple apps with it?
I’m constantly looking at quirky open formats for these things; I am a light org-mode user, I am adopting Ledger, I have written tools for doing stuff with Markdown files (I wrote a small thingie to embed SQL queries that operate on Markdown tables; I think this concept has legs)… TiddlyWiki seems like another thing that should be on my radar. (I thought Ikiwiki was it.)
If you like “quirky”, you might enjoy Decker. Just like TiddlyWiki, Decker can function as a self-contained single-file web application that can be customized and tweaked live by the user. The whole project is FOSS and the file format is designed to be diff-able and generally human-readable.
Interesting how he claims Awk is the best general-purpose scripting language in POSIX. It does make sense to avoid the Bourne shell at all costs, I suppose…
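I don’t find this statement controversial. What other general-purpose scripting languages are defined in POSIX?
I feel like I’m walking into a trap, but don’t most people reach for shell scripts before Awk?
Just because they do, doesn’t mean they should :D
I’m not a big Awk user, but I believe if nothing else its arithmetic support is much better than any shell’s.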
Indeed. If you need to perform general-purpose floating-point arithmetic with sh or bash (as one might within the interpreter for a scripting language, like Lil, that has such arithmetic operators), your options are to shell out to bc / dc / awk / etc., or to go through intensely painful and excruciatingly slow contortions to perform equivalent calculations with nothing but integers and string manipulation. As I try to explain in this podcast episode, dc and bc are often unavailable, and if I’m already obligated to push some of the work into AWK there’s no longer any advantage to using a shell script at all.
It was a fine explanation, and it was very inspiring to listen to. I regret leaving such shallow comments; there are much more profound things to be said. Looking at the Deck-Month 2 entries now.
I’m glad you enjoyed the discussion. If you (or anyone else) have questions about Decker, Lil, or any of the other topics that came up I’ll do my best to answer.
I think they do, that’s why it was surprising to me. In the podcast they talk about bash, but John claims that support for dictionaries/maps isn’t universal. Surely it isn’t POSIX, so if you evaluate Bash, why not include Python then? Surely all the major distros have it, like they used to have Perl.
If “new enough bash to have dictionaries” isn’t universal in the targets he was looking at, “surely all the major distros have it” doesn’t seem like a good argument for why Python supposedly is wider spread? Either way, awk for sure is more common.
It’s important to observe that the original context of the “Blub Paradox” was Paul Graham (noted Lisp enthusiast) explaining why Lisp is the best language and every other language is for shortsighted dummies who have not yet accepted Lisp into their hearts. Amending his definition to treat language power as a lattice, rather than a linear ordering, is being quite generous to the original essay. There isn’t even much self-awareness in there with respect to the possibility that more expressively powerful languages than Lisp might exist.
I think that the “Blub Paradox” belongs in the wastebin. Some languages are more expressive than others, but in general they cannot be ranked against one another outside the context of a task you’re trying to accomplish; programming in-the-small is different from in-the-large, rapid prototyping is different from writing software that runs on a space shuttle or a medical device, a shader program has different constraints than software for a low-power microcontroller, and so on. When the stakes are low enough, any language will do, but with strong constraints comes strong selective pressure for a language with a corresponding shape. A language feature can be “weird” and unappealing from the perspective of working in one of those domains if it isn’t useful there, or causes more problems than it’s worth.
Much like testing methodology, practitioners with strong disagreements about the correct approach are often coming from experience in quite different domains, which in turn color their prejudices.
Decker is in some ways a sort of “fantasy console” like PICO-8.
I had a go at adapting this textscroller routine to work on an animated Canvas widget within Decker, accounting for lots of little differences, like using Decker’s default palette indices, accounting for varying sizes of canvas and bitmapped font, and reflecting that Lil’s sin operates in radians rather than the PICO-8 normalized angle system:
message:"scrolling the day away!"
colors:35,40,39,33
on view do
 me.pattern:46
 me.rect[]
 xo:me.size[0]-(first me.size+me.font.size*count message)%sys.frame
 yo:me.size[1]*.5
 ci:0
 each letter i in message
  me.pattern:colors[ci]
  me.text[letter (xo,yo)+(i*me.font.size[0]),29.9*sin .1*sys.frame-5*i]
  ci:(count colors)%ci+!letter~" "
 end
end
Or, in a form that can be directly pasted into web-decker:
%%WGT0{"w":[{"name":"c1","type":"canvas","size":[160,125],"pos":[21,37],"animated":1,"volatile":1,"script":"message:\"scrolling the day away!\"\ncolors:35,40,39,33\non view do\n me.pattern:46\n me.rect[]\n xo:me.size[0]-(first me.size+me.font.size*count message)%sys.frame\n yo:me.size[1]*.5\n ci:0\n each letter i in message\n me.pattern:colors[ci]\n me.text[letter (xo,yo)+(i*me.font.size[0]),29.9*sin .1*sys.frame-5*i]\n ci:(count colors)%ci+!letter~\" \"\n end\nend\n\n","pattern":33,"scale":1}],"d":{}}