Some highlights:
“[Microsoft] left everybody’s VB6 code completely stranded with no path forward to making modern apps on the latest versions of Windows. A lot of times you couldn’t even get your VB6 apps to install on the latest version of Windows,” recalls a Slashdot commenter.
Microsoft had broken the trust of its army of Visual Basic developers. Faced with the options of either starting over from scratch in VB.NET or moving to new web-native languages like JavaScript and PHP, most developers chose the latter—a brutal unforced error by Microsoft.
I’m not sure I agree with that characterization: as a kid who switched programming interests from VB6 to PHP/Web, the driver was the kind of applications I could build. Content management, knowledge sharing and community building were all the rage; I just didn’t have much interest in the kind of programs I could (easily) build with VB anymore, 6 or .NET. For the industry, I believe it was mostly about the rush to establish a marketing presence on people’s screens, something that VB wasn’t well-suited for (and, arguably, modern client-side apps still aren’t, regardless of the brands’ insistence on making us install their shitty apps onto our phones).
Almost all Visual Basic 6 programmers were content with what Visual Basic 6 did. They were happy to be bus drivers: to leave the office at 5 p.m. (or 4:30 p.m. on a really nice day) instead of working until midnight; to play with their families on weekends instead of trudging back to the office. They didn’t lament the lack of operator overloading or polymorphism in Visual Basic 6, so they didn’t say much.
The voices that Microsoft heard, however, came from the 3 percent of Visual Basic 6 bus drivers who actively wished to become fighter pilots. These guys took the time to attend conferences, to post questions on CompuServe forums, to respond to articles. Not content to merely fantasize about shooting a Sidewinder missile up the tailpipe of the car that had just cut them off in traffic, they demanded that Microsoft install afterburners on their buses, along with missiles, countermeasures and a head-up display. And Microsoft did.
Still, the abstractions at the infrastructure layer have perhaps outpaced those of the client side: many of the innovations that Alan Cooper and Scott Ferguson’s teams introduced 30 years ago are nowhere to be found in modern development. Where developers once needed to wrestle with an arcane Win32 API, they now have to figure out how to build custom Select components to work around browser limitations, or struggle to glue together disparate SaaS tools with poorly documented APIs. This, perhaps, fuels much of the nostalgic fondness for Visual Basic—that it delivered an ease and magic we have yet to rekindle.
It seems to me we’re now deep in “afterburners” territory, and, as the article discusses, attempts to make the pendulum swing back keep falling short. Apple is pushing to replace Interface Builder with a React clone, for crying out loud. Is this just the way it’s going to be, forever?
Apple is pushing to replace Interface Builder with a React clone, for crying out loud. Is this just the way it’s going to be, forever?
Yeah, it does make you want to cry. First they let IB rot for some time, rather than improving it. Then they point to the rot and say it obviously needs to be replaced.
Glad to see I am not the only one who finds the current direction absurd, particularly because React’s core model is fundamentally incoherent, and the benefits are more accidental.
I also find the characterisation ahistorical for similar reasons.
I learned Perl and CGI because I could make photo galleries. I could play online games. I could make a website that let people upload and download video game save files.
The article notes that VB could be hosted in IE via an ActiveX control. That’s a frustratingly clear memory for me. ActiveX sucked and was terrible to work on.
VB lost because it wasn’t networked.
Now, VB.Net. That combination kept delivering value for DECADES.
I’m not sure I agree with that characterization: as a kid who switched programming interests from VB6 to PHP/Web, the driver was the kind of applications I could build.
The VB.NET transition also hit the IIS/VBScript web developers. I’m not aware of any straightforward migration path there either, although this was happening right at the start of my career, so there were many aspects that weren’t on my radar.
Every IIS developer I knew in Seattle in dot com 1 also knew Apache. IIS had traction in the SMB space, but with Linux eating the market from the bottom up, the writing was on the wall from early on.
My first internship out of high school was with a media and design agency in London in the early ‘00s, and virtually all of the websites they built were VBScript on ASP/IIS. I think only one of the developers there knew much else. I wasn’t around for long enough to find out how they (presumably) eventually transitioned away.
I wrote a lot of VB as a teenager and then I learned OpenStep as an adult. After that, VB felt like a poor copy of OpenStep that had completely missed the point (both were copies of Smalltalk-80, though OpenStep refined the ideas and VB copied the concepts only superficially). NeXT made a huge mistake in their pricing of OPENSTEP for Windows; if it (along with EOF) had been priced comparably to VB, then I suspect it would have been the dominant development environment for Windows 95, and WebObjects would have been the framework that all of the .com startups built on.
Yeah, OpenStep was far superior and in many ways still is today. On the other hand, though I never developed with VB myself, as I was already on NeXTstep when it appeared and my BASICs were of the AppleSoft and DEC BASIC+ kind, it did and does seem to have a level of approachability that OpenStep never quite matched.
And that we are moving ever further away from. Aircraft Carriers everywhere.
One of the more surprising outcomes of my work on Objective-S is that focusing on removing architectural mismatches, both packaging mismatch and architectural/linguistic mismatch, appears to resurface the kind of approachable simplicity that I liked about BASIC, while at the same time improving power/expressiveness and structure beyond the OpenStep/Smalltalk levels.
Once you tackle the architectural mismatch, these goals are no longer in conflict.
In practice, Model-View-Controller resembles a monolith with two distinct subsystems—one for the database code, another for the UI, both nestled inside the controller.
Noooo….
Not “both nestled inside the controller”. I mean people definitely appear to do it this way, and then they complain about how MVC sucks and how we need various complex alternatives.
The Model is independent. It is your “headless app”. Think hexagonal/ports-and-adapters. The UI sits on top of the model, it queries the model to display the data that’s in the model to the user and manipulates the model on behalf of the user.
Controllers are adapters from input methods (mouse, keyboard) to the Views, which then talk to the model.
A Controller provides means for user input by presenting the user with menus or other means of giving commands and data. The controller receives such user input, translates it into the appropriate messages and passes these messages on to one or more of the views.
“Passes these messages on to one or more of the views”.
That’s all the controllers do. In most of today’s (and yesterday’s) UI toolkits, that is handled generically in the toolkit. So you should rarely if ever create a Controller.
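To illustrate that reading with a rough sketch (the names below are invented for the example and don’t come from any particular toolkit): the model is headless, the view queries and manipulates the model, and the controller only translates raw input into messages for the view.
class CounterModel {
  // the Model: a headless "app", no UI knowledge at all
  private value = 0;
  increment() { this.value += 1; }
  current(): number { return this.value; }
}
class CounterView {
  // the View: queries the Model to display it, manipulates it on the user's behalf
  constructor(private model: CounterModel) {}
  incrementRequested() { this.model.increment(); this.render(); }
  render() { console.log(`count = ${this.model.current()}`); }
}
class ClickController {
  // the Controller: adapts raw input events into messages to the View
  constructor(private view: CounterView) {}
  handleRawEvent(event: { type: string }) {
    if (event.type === "click") this.view.incrementRequested();
  }
}
const view = new CounterView(new CounterModel());
new ClickController(view).handleRawEvent({ type: "click" }); // prints "count = 1"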
My own interpretation is that React needs to change the Current Right Way of doing things all the time so that the community is always playing catch-up instead of spending that energy on evaluating and switching frameworks with shiny features.
That and the internal inconsistencies of the model.
UIs Are Not Pure Functions of the Model - React.js and Cocoa Side by Side
Feels like a straw man.
foo/12/04.1, there’s just no match. start and stop are relevant, and for “requesting a file” filename is also relevant. But with a reasonable schema you’d insert maybe two entries per test (using default or even random values for any field which is irrelevant for the test), and assert that the filter retrieves the entry with the relevant ID.
test("redirect first month","2008/7-10",&(tumbler__s) {
.start = { .year = 2008 , .month = 7 , .day = 1 , .part = 1 },
.stop = { .year = 2008 , .month = 10 , .day = 31 , .part = 23 },
.ustart = UNIT_MONTH,
.ustop = UNIT_MONTH,
.segments = 1,
.file = false,
.redirect = true,
.range = true,
.filename = ""
});
test("redirect second month","2008/11-2009/2",&(tumbler__s) {
.start = { .year = 2008 , .month = 11 , .day = 1 , .part = 1 },
.stop = { .year = 2009 , .month = 2 , .day = 28 , .part = 23 },
.ustart = UNIT_MONTH,
.ustop = UNIT_MONTH,
.segments = 2,
.file = false,
.redirect = true,
.range = true,
.filename = ""
});
redirect flag set true, the other false.[1] At my former job, I recall spending two full days (8 hours each) in a room full of engineers trying to eke out the minimum set of tests for a project we were working on (and then three work weeks writing out the “test plan” that, once written, NO ONE ever referred to again, just because that’s what was expected). So what is it? Minimal tests? Maximum tests? 100% code coverage?
Sorry if it feels like I’m yelling at you, but I’m trying to figure out what the hell unit tests are, and now you’re telling me I did too many, and I didn’t use a proper framework.
Testing is the hardest thing I’ve ever learned - I’m very much still learning after 19 years as a programmer. I didn’t learn to appreciate TDD before actually working with someone who had been part of an extremely successful team at an earlier job, developing a somewhat famous system. So I wouldn’t be too frustrated about finding it confusing. Learning it on your own is a bit like learning woodturning by building your own lathe.
That said, the actual definition of a unit is one of the least interesting things about testing. It’s only interesting insofar as it allows you to have confidence that the code does what it should be doing. That confidence then allows fearless refactoring, which means that every time you learn some way to improve any part of the code you can apply it everywhere without worrying about breaking anything. Some field is no longer needed? Simply snip it everywhere in your production code, run the tests, verify that the tests that should be dealing with that field fail (otherwise the tests are probably misnamed, overtesting, or defective), verify that no other tests fail (ditto), then update the tests. Similar with adding features - if any existing tests fail, make sure to understand why before continuing. Unfortunately it would take many weeks of blogging to try to explain this in detail; this post is already huge, and others have probably explained it better already.
Spending a bunch of time ahead of implementation trying to tease out the tests is just waterfall in action, and a terrible idea. While developing you’re bound to come up with a bunch more tests, and you’ll probably find that a bunch of the tests which you came up with ahead of time are redundant. That is, in TDD terms, adding those tests makes nothing fail in the implementation so far.
On a related note, if in doubt whether you need another test for a piece of code, try mutation testing. Nobody had told me about this side effect of it, but when you run mutation tests you’re basically checking the completeness of your test suite. If any of the mutations survive you might have a hole in your tests. But conversely, if more than one test kills a mutation then one of those tests might be redundant (or testing too much).
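A hand-made illustration of the idea (real mutation testing tools generate the mutants automatically; the function and assertions below are invented for the example):
function isAdult(age: number): boolean {
  return age >= 18; // a typical mutant would change this to: age > 18
}
// A suite that only checks isAdult(30) passes against both the original and
// the mutant, i.e. the mutant survives and reveals a hole in the tests.
// Adding the boundary case kills that mutant:
console.assert(isAdult(30) === true, "obvious case");
console.assert(isAdult(18) === true, "boundary case kills the >= to > mutant");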
And yes, programmers (myself included) definitely aren’t capable of writing bug-free code. Tests (unit, integration, acceptance, performance, load, mutation, and so on) are a huge field, but a small part of the toolkit you can use to reduce the chance and severity of bugs.
You say you appreciate TDD, which I know as “Test Driven Design,” that is, you write your tests first to drive the design of the code. Then you say:
Spending a bunch of time ahead of implementation trying to tease out the tests is just waterfall in action, and a terrible idea.
So is my understanding of TDD incorrect? Or did you contradict yourself?
the actual definition of a unit is one of the least interesting things about testing.
And I’ve read scores of blog posts that state otherwise, hence my confusion here.
Similar with adding features - if any existing tests fail, make sure to understand why before continuing. Unfortunately it would take many weeks of blogging to try to explain this in detail, this post is already huge, and others have probably explained it better already.
Your post isn’t huge, and no, others haven’t explained it better, else I wouldn’t be bitching about it.
Some field is no longer needed? Simply snip it everywhere in your production code, run the tests, verify that the tests that should be dealing with that field fail
So is compilation a form of testing? Because some have said so, yet I don’t agree with that—if you can’t compile the code, you can’t compile the tests (because that’s the language I work in—C), so it’s tough to run the tests.
You say you appreciate TDD, which I know as “Test Driven Design,” that is, you write your tests first to drive the design of the code.
The crucial error here is “tests”. TDD is one test at a time, each test moving the code towards some goal, each commit including some code change and (if necessary) the relevant, usually single, test which demonstrates that the code does something useful and new.
And I’ve read scores of blog posts that state otherwise, hence my confusion here.
I can’t answer for what others find interesting, but I consider the actual test techniques vastly more interesting than the definition.
Some field is no longer needed? Simply snip it everywhere in your production code, run the tests, verify that the tests that should be dealing with that field fail
So is compilation a form of testing? Because some have said so, yet I don’t agree with that—if you can’t compile the code, you can’t compile the tests (because that’s the language I work in—C), so it’s tough to run the tests.
This is where the discussion often gets into what I consider boring definitions. The fact that your code compiles is obviously a useful “test” in that it increases (some would say is required for) confidence that the code will work in production. It’s also a “test” in the sense that you could at least in principle write other tests which overlap with the work the compiler is doing for you. :shrug: :)
First, to op: I think you’re doing great, and on the “right” track.
Fwiw I’ve written rather similar “let a hundred sledgehammers hammer” [m] type tests when I had the great fortune to receive a project without any tests - as a first step to be able to refactor/fix bugs without introducing more than one bug per bug I fixed - and on occasion when working with a particularly gnarly and under-documented API (your custom SOAP API running on PHP is sensitive to element order, and wants its arguments in XML in a data-element, inside XML? OK, let me just hammer out what dark spells I want my code to output (hello, manual POST via curl) - and record what horrors I receive back - record that in a few test cases and force my code to summon conforming monsters from the database).
The main thing is to write tests that give some value to your project. I often thought “simpler” tests didn’t (“clearly this route always returns 200 OK!” Well, not if some middleware that was added five years ago by someone else is suddenly missing a dependency because we’re running on a supported version of Debian now, not some ten-year-old rubbish…).
But more importantly:
You say you appreciate TDD, which I know as “Test Driven Design,” that is, you write your tests first to drive the design of the code. Then you say:
Spending a bunch of time ahead of implementation trying to tease out the tests is just waterfall in action, and a terrible idea.
So is my understanding of TDD incorrect? Or did you contradict yourself?
Yes, and no, they didn’t. Strict test driven design is quite simple:
0. You have a new feature (list blog posts).
1. Write the smallest test that fails (“red” test). In this case, with a green-field app, that might be: GET /posts, expecting a JSON object containing two posts.
2. Write the smallest change that makes the test pass (“green”). E.g.: return { “post”: {}, “post”: {} }
3. Refactor. Keep the test green - but e.g. get post data from a test database / test fixtures.
4. Goto 0. In this case, maybe add filtering on year - expect 2 posts for 2023, 1 for 2021, none for 2020.
Now, the code to return a static dummy post array might seem absurdly trivial - and it would be absurd to start there when you already have production code.
But notice that if you start with TDD, all your code will be amenable to test, and you will build up an always up-to-date set of test data/fixtures. When you add an “updated at”-field to your post, it goes into a test first.
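As a rough, self-contained sketch of where a couple of those loops might leave the code (the listPosts function and the post data are invented for illustration; plain assertions stand in for whatever test runner you use):
type Post = { title: string; year: number };
// After loop 1 the "production code" was just a hard-coded two-element array
// (the smallest thing that made the first test green); refactoring moved the
// data into a fixture, and loop 2's failing test forced the year filter in.
const fixturePosts: Post[] = [
  { title: "Hello", year: 2023 },
  { title: "World", year: 2023 },
  { title: "Older", year: 2021 },
];
function listPosts(year?: number): Post[] {
  if (year === undefined) return fixturePosts;
  return fixturePosts.filter((p) => p.year === year);
}
// The tests, written one at a time, kept here as plain assertions:
console.assert(listPosts().length === 3, "loop 1: listing returns the posts");
console.assert(listPosts(2023).length === 2, "loop 2: two posts for 2023");
console.assert(listPosts(2021).length === 1, "loop 2: one post for 2021");
console.assert(listPosts(2020).length === 0, "loop 2: none for 2020");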
As for the unit vs integration test - TDD by itself doesn’t really “give” you integration tests as such. But that doesn’t mean you don’t need them! Unit tests (try to) certify that the parts do what it says on the tin (potato soup, starter). Integration tests (try to) certify that the sum of your tested parts combine to do what it says on the box (full meal inside).
At one extreme TDD will give you a lot of throwaway tests - but leave you with a few useful ones, and with code that is modular, easy to test, easy to instrument. At another TDD can leave you with 100% test coverage and a solid set of test data.
Unit tests can combine with integration tests when you encounter new data “in the wild” that breaks something. Maybe you end up adding a test for your input validation - catching another edge case - but first you add the invalid data and watch how listing posts breaks when some field is NULL or half a valid UTF-8 sequence or something.
Phew. Anyway, hope that might be of some value..
[m] to misquote Mao
So is my understanding of TDD incorrect?
If your understanding of TDD is “writing all tests beforehand”, it is so incorrect it is not even funny.
the point of testing was that programmers aren’t capable of writing bug free code
Nope.
so the more tests, the better?
Big nope. You need just enough tests to force you to write the production code.
One test is usually not enough, because then the “production code” can be “just return the hard-coded result the test wants”. And in fact doing that (hard-coding the response) is actually a good idea. Yes, it sounds strange, but it’s true. Why? Because it counteracts the super strong tendency of programmers to overgeneralise. “I know how to do this!!”
With a second test case, just returning a hard-coded result no longer works. In order to make it work again, you’d either have to start testing the parameter values (if/switch) in order to figure out which hard-coded value to return. Or you can implement the actual computation.
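A tiny sketch of that progression (the add() function and the assertions are invented for the example):
// Step 1: one test, and the laziest code that passes it:
//   function add(a: number, b: number): number { return 3; }
//   console.assert(add(1, 2) === 3);
// Step 2: a second test case makes the hard-coded return fail, so the real
// computation finally has to be written:
function add(a: number, b: number): number {
  return a + b;
}
console.assert(add(1, 2) === 3, "first test still passes");
console.assert(add(10, 5) === 15, "second test forces the general solution");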
Only one “non-numeric” test should be necessary, because there should only be a single integer parser function.
Probably a fair point here, but how do you know what the exact minimal data is that exposes a bug in the general case? That’s not possible, right? So we account for that by adding more test cases that might uncover an unknown unknown of ours. We don’t optimize test cases down to a minimum covering set, because we don’t know what that set is. We try and make test suites more robust by adding “multiple coats” of different data combinations.
[H]ow do you know what the exact minimal data is that exposes a bug in the general case?
Consider that there is always a trade-off. Every single test you write comes with one-off and repeated costs (writing it, maintaining it, running it), meaning you really want to know if a test is not adding any value. So it’s worth spending a bit of time making sure every test is adding actual value.
I agree with that. For me, that’s why I focus on generative testing, and supplement it with targeted test cases that I know are really important. I think we undersell how hard it is to come up with good test cases.
[knowing exact tests to expose bugs] That’s not possible, right?
Exactly. Fortunately, it is also not usually necessary. Both testing and static typing are theoretically ludicrously inadequate. However, both practically do a fairly good job. Why? Because most bugs are stupid. They are so stupid that we can’t see them (because we are looking for sophisticated errors), and so stupid that exposing them to even a minimal amount of reality will expose them.
There was a paper a while ago that talked about how even extremely low levels of code coverage were adequate for uncovering most bugs, and raising the level beyond didn’t help much.
Unfortunately I take a lot of these studies with a huge grain of salt. The linked paper here tests databases for example. It’s very rare for studies like this to be done on business applications, which have a lot more functionality.
I agree anecdotally that “simple” testing gets you quite far. Probably 80% or so. The issue is that correctness is binary - if one piece of functionality has a bug, users notice it and can’t get their job done.
I also think a large chunk of bugs are simply a failure to express requirements correctly, i.e. are misspecifications. This is something that testing doesn’t always directly help with.
What you’re describing is heading towards fuzzing or property based testing.
It’s not usually possible to be sure that we have covered all possible inputs when writing tests like these by hand. Taking a wild stab at ideas that might possibly break the code is just going to leave you with a lot more tests that don’t have any logic behind their existence.
If I don’t know what the minimum set of tests to cover a function is, I decompose the function and write tests that cover the bounds of inputs to its simpler components first. I then use simple substitutes for its components so that I can write simple tests for the original function.
I say this, but I very rarely write unit tests these days. Usually only when I’m writing some very tricky code, or something ‘simple’ I wrote didn’t give the correct answer immediately so slapping a unit test on it means I get to quickly exercise it properly now - and not waste time later should I continue to get it wrong.
What you’re describing is heading towards fuzzing or property based testing.
That makes sense, because I’m very partial to property-based testing. Exactly because it finds data combinations for you, vs. relying on you to come up with an endless amount of data states, and not knowing which are important and which are not.
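For instance, with a property-based testing library such as fast-check, you state a property and the library generates the data combinations for you (the idempotent-sort property below is just an illustration):
import fc from "fast-check";
// Property: sorting an already-sorted array changes nothing (idempotence).
// fast-check generates many random integer arrays and shrinks any failure.
fc.assert(
  fc.property(fc.array(fc.integer()), (xs) => {
    const once = [...xs].sort((a, b) => a - b);
    const twice = [...once].sort((a, b) => a - b);
    return JSON.stringify(once) === JSON.stringify(twice);
  })
);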
This is great! Multi-collection iteration is a powerful feature, and was always one of the neater aspects of Higher Order Messaging, where it falls out naturally from taking the focus away from the iteration and placing it on the thing you are doing:
a := [ '1.','2.','3.','4.' ]
b := [ 'hello', 'there', 'what''s', 'up' ]
combined := a collect , b each
result:
( "1.hello","2.there","3.what's","4.up")
“We’re trying to run an application on some hardware in a reasonably efficient manner, where we can easily redeploy it when it changes. “
Why not deploy on bare metal then? Kubernetes is… complex; no sane person will ever deny this. The obvious question before using it is… do you really need it? Kubernetes exists to solve a very complex problem that most people simply don’t have.
Kubernetes exists to solve a very complex problem that most people simply don’t have.
That pretty much sums up the blog post – avoid complex “enterprisey” solutions if your needs are simple. It seems that Kubernetes is the answer to everyone’s problems.
I opened this post expecting it to be about web components, but after reading it I’m not sure if it might be discussing some completely separate technology with the same name.
However this time there’s probably little (if any) advantage to using web components for server-rendered apps. Isomorphism is complex enough; web components adds extra complexity with the need to render declarative shadow DOM and the still-unknown solution for bundling CSS.
Server-side rendering is the idea that HTML should be generated on the server – the “old-school” approach used by everything from CGI.pm to PHP to Rails. Web components are a natural fit for this because they provide a way for the server to write out a description of the page as HTML elements, without having to generate all the nested <div>s and CSS rules and such.
This person is talking about rendering shadow DOM on the server, though, which … honestly it seems completely insane. I don’t know why you’d ever do that, it’s such a bizarre idea that I feel I must be misunderstanding.
The core flaw of component-oriented architecture is that it couples HTML generation and DOM bindings in a way that cannot be undone. What this means in practice is that: […]
- Your backend must be JavaScript. This decision is made for you.
Just absolutely befuddling. Why would using web components imply the presence of a backend at all, much less require that it be written in JavaScript?
My blog’s UI is written with web components. Its “backend” is a bunch of static HTML being served by nginx. If I wanted to add dynamic functionality and wrote a web component for it, then the client side would only need to know which URLs to hit – the choice of backend language wouldn’t be relevant to the client.
I’m building my version of this vision with Corset. I look forward to seeing what other solutions arise.
Oh, ok, this is some sort of spamvertising. I looked at the Corset website and it looks like it’s a JavaScript library for using CSS to attach event handlers to HTML elements. The value proposition over … well, anything else … isn’t clear to me. Why would I want to write event handlers in a CSS-like custom language? Even the most basic JS framework is better than this.
I re-read the article to see if the author was confused about “Web Components” vs. “components on the web”, and the answer is no. The author links to a WC library they wrote that mimics React, showing familiarity with both kinds of C/components. If you read closely, the terminology is consistent, and “web component” is used any time the author means using the customElement and Shadow DOM APIs, but “component” is used other times for the general idea of “something like React”. Frankly, it is extremely unfortunate that the customElement and Shadow DOM promoting crowd have hijacked the term “Web Component” for what they are doing, but the author is not a victim of this confusion.
It’s a straightforward argument:
However this time there’s probably little (if any) advantage to using web components for server-rendered apps. Isomorphism is complex enough; web components adds extra complexity with the need to render declarative shadow DOM and the still-unknown solution for bundling CSS.
“Server-rendered” in this case means the server sends you a webpage with the final DOM. Your blog is not server rendered. If you disable JS, most of the content on your blog goes away. This is a standard terminology in JS-land, but it’s a bit odd, since all webpages need to come from some server somewhere where they are “rendered” in some sense, but “rendering” in JS-land means specifically inflating to the final DOM.
Your backend must be JavaScript. This decision is made for you.
That follows from the idea that you want to have <blog-layout> on the server turn into <div style="height: 100%"><div id="container"> … and also have the <blog-layout> tag work in the browser. In theory, you could do something else like WASM or recompiling templates or something, but so far no one has figured out an alternative that works as more than a proof of concept.
Oh, ok, this is some sort of spamvertising
Literally everything posted on Lobsters that does not have the “history” tag is spamvertising in this sense. It’s a free JS framework, and yes, the author is promoting it because they think it’s good.
I find the idea of a CSS-like declarative language interesting, but looking at the code examples, I still prefer Alpine.js which also has a declarative language but sprinkled in as HTML attributes. I’m glad someone is looking at new ideas though, and I hope the “write a language like CSS” idea sparks something worth using.
I’m still horribly confused even after your elucidations. Granted, I’ve never used Web Components, but I’ve read the full MDN docs recently.
The bit about “your backend must be JavaScript” confuses me the most. Why? My server can generate HTML with any template engine in any language. The HTML includes my custom component tags like blog-layout, entry-header, etc. At the top of the HTML is a script tag pointing to the JS classes defining those tags. Where’s the problem?
At the top of the HTML is a script tag pointing to the JS classes defining those tags. Where’s the problem?
I think this is the problem the author points to: if you turn off JS, you don’t have those tags anymore.
No, the problem has to do with requiring JS on the server.
I’m, frankly, uninterested in what happens if someone turns off JS in their browser. I imagine that, unsurprisingly, a lot of stuff stops working; quelle horreur! Then they can go use Gemini or Gopher or a TTY-based BBS or read a book or something.
It is both problems. The first view is blocked until JS loads, which means it is impossible to load the page in under one second. To remove the requirement that JS has loaded on the client you pre-render it on the server, but pre-rendering requires JS on the server (Node, Bun, or Deno).
I love you guys but this is a very old and well known topic in frontend. It’s okay to be backend specialists, but it’s not a confusing post at all. It’s just written for an audience that is part of an on going conversation.
Fair; frankly, I had a hard time deciphering the blog post.
I agree though that optimizing for no JS is not interesting to me either.
It’s important to know what the competition to WC is doing. Popular JS frameworks like Next for React, Nuxt for Vue, SvelteKit for Svelte, etc. let you write code that works both server side and client side. So if you write <NumberShower favorite="1" /> in Vue on the server, the server sends <div class="mycomponent-123abc">My favorite number is <mark class="mycomponent-456def">1</mark>.</div> to the browser, so that the page will load even with JS disabled. Obviously, if JS is turned off, then interactions can’t work, but the real benefit is it dramatically speeds up time to first render and lets some of the JS load in the background while displaying the first round of HTML. (See Everyone has JavaScript, right?.)
To do this with WC, you might write <number-shower favorite="1"></number-shower>, but there’s no good way to turn it into first render HTML. Basically the only realistic option is to run a headless browser (!) and scrape the output and send that. Even if you do all that, you would still have problems with “rehydration,” where you want the number-shower to also work on the client side, say if you dynamically changed favorite to be 2 on click.
The Cloak solution is you just write <div class="number-shower">My favorite number is <mark class="number">1</mark>.</div> in your normal server side templating language, and then you describe it with a CSS-like language that says on click, change 1 to 2. Alpine.js and Stimulus work more or less the same way, but use HTML attributes to tell it to change 1 to 2 on click.
To do this with WC, you might write <number-shower favorite="1"></number-shower>, but there’s no good way to turn it into first render HTML […] The Cloak solution is you just write <div class="number-shower">My favorite number is <mark class="number">1</mark>.</div> in your normal server side templating language
It’s possible I’m still misunderstanding, but I think you’ve got something weird going on in how you expect web components to be used here. They’re not like React where you define the entire input via attributes. The web component version would be:
<number-shower>My favorite number is <mark slot=number>1</mark>.</number-shower>
And then when the user’s browser renders it, if they have no JS, then it renders the same as if the custom elements were all replaced by <div>. It’s not super pretty unless there’s additional stylesheets, but it’s readable as a plain HTML page. Go to my blog in Firefox/Chrome with JS disabled, or heck use a command-line browser like w3m.
They’re not like React where you define the entire input via attributes.
No, that’s totally a thing in Web Components. It’s a little tricky though because the attributes behave differently if you set them in HTML vs if you set them on a JS DOM Node. You use the lifecycle callbacks to have attributeChangedCallback called whenever someone does el.favorite = "2".
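A bare-bones sketch of that wiring, without Lit (the element name and attribute are taken from the example in this thread; a property setter that reflects to the attribute is one common way to make el.favorite = "2" reach the callback):
class NumberShower extends HTMLElement {
  // Only attributes listed here trigger attributeChangedCallback.
  static get observedAttributes() { return ["favorite"]; }
  // Reflect the JS property to the HTML attribute so both paths converge.
  get favorite(): string | null { return this.getAttribute("favorite"); }
  set favorite(value: string | null) {
    if (value === null) this.removeAttribute("favorite");
    else this.setAttribute("favorite", value);
  }
  attributeChangedCallback(name: string, _old: string | null, value: string | null) {
    if (name === "favorite") {
      this.textContent = `My favorite number is ${value}.`;
    }
  }
}
customElements.define("number-shower", NumberShower);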
I just saw Happy DOM, which can prerender Web Components server side without a headless browser. Cool! Still requires server side JS though, and there’s still a big question mark around rehydration.
The keyword is “isomorphic”, which in “JS land” means the exact same code is used to render HTML on the server side and on the client side. The only language (ignoring WASM) they can both run is JavaScript.
So their point is “if you want to use the same code on client and server it has to be JS”? Sounds like an oxymoron. But what does that have to do with Web Components?
If you want to be isomorphic, you can’t use Web Components™, because they are HTML elements designed to work inside a browser. However, you can use web components (lowercase), e.g., React components, because they are just a bunch of JavaScript code that generates HTML and can run anywhere.
Frankly, it is extremely unfortunate that the customElement and Shadow DOM promoting crowd have hijacked the term “Web Component” for what they are doing
Have they? I’m only familiar with the React ecosystem, but I can’t recall ever seeing React components referred to as “web components”.
I’m saying “Web Components” should be called “Custom Element components” because “Web Components” is an intentionally confusing name.
To be fair, Web Components as a name goes back to 2011, which is after Angular but before React.
If you disable JS, most of the content on your blog goes away.
Did you try it, or are you just assuming that’s how it would work? Because that’s not true. Here’s a screenshot of a post with JS turned off: https://i.imgur.com/jLFz6UV.png
“Server-rendered” in this case means the server sends you a webpage with the final DOM.
What do you mean by “final DOM”?
In the case of my blog, the server does send the final DOM. That static DOM contains elements that are defined by JS, not by the browser, but the DOM itself is whatever comes on the wire.
Alternatively, if by “final DOM” you mean the in-memory version, I don’t think that’s a useful definition. Browsers can and do define some HTML standard elements in terms of other elements, so what looks like a <button> might turn into its own miniature DOM tree. You’d have to exclude any page with interactive elements such as <video>, regardless of whether it used JavaScript at all.
That follows from the idea that you want to have <blog-layout> on the server turn into <div style="height: 100%"><div id="container">
If I wanted to do server-side templating then there’s a variety of mature libraries available. I’ve written websites with server-generated HTML in dozens of languages including C, C#, Java, Python, Ruby, Haskell, Go, JavaScript, and Rust – it’s an extremely well-paved path. Some of them are even DOM-oriented, like Genshi.
The point of web components is that it acts like CSS. When I write a CSS stylesheet, I don’t have to pre-compute it against the HTML on the server and then send markup with <div style="height: 100%"> – I just send the stylesheet and the client handles the style processing. Web components serve the same purpose, but expand beyond what CSS’s declarative syntax can express.
Did you try it
Yes, actually. I tried it and got the same result as the screenshot, which is that all the sidebars and whatnot are gone and the page is not very readable. Whether you define that as “most of the content” is sort of a semantic point. Anyhow, it’s fine! There’s no reason you should care about it! But tools like Next and SvelteKit do care and do work hard to solve this problem.
What do you mean by “final DOM”?
Alternatively, if by “final DOM” you mean the in-memory version, I don’t think that’s a useful definition.
The DOM that the browser has after it runs all the JavaScript. “Server Side Rendering” is the buzzword to search for. It’s a well known thing in frontend circles. There are a million things to read, but maybe try https://www.smashingmagazine.com/2020/07/differences-static-generated-sites-server-side-rendered-apps/ first. The point about the browser having its own secret shadow DOMs for things like videos and iframes is true, but not really relevant. The point is, does the HTML that comes over the wire parse into the same DOM as the DOM that results after running JS. People have gone to a lot of effort to make them match.
If I wanted to do server-side templating then there’s a variety of mature libraries available.
Sure! But some people want to use the same templates on the server and the client (“isomorphism”) but they don’t want to use JavaScript on the server. That’s a hard problem to solve and things like Corset, Stimulus, and Alpine.js are working on it from one angle. Another angle is to just not use client side templating, and do the Phoenix LiveView thing. It’s a big space with tons of experiments going on.
My blog’s UI is written with web components.
<body>
<blog-layout>
<style>
...
<yuru-card>
<h2 slot=title>
...
Looks cool. Can you tell more?
Anything specific you’re interested in knowing?
I use a little shim library named Lit, which provides a React-style wrapper around web component APIs. The programmer only has to define little chunks of functionality and then wire them up with HTML. If you’ve ever used an XML-style UI builder like Glade, the dev experience is very similar.
After porting my blog from server-templated HTML to web components I wanted to reuse some of them in other projects, so I threw together a component library (I think the modern term is “design system”?). It’s called Yuru UI because the pun was irresistible.
The <yuru-*> components are all pretty simple, so a more interesting example might be <blog-tableofcontents>. This element dynamically extracts section headers from the current page and renders a ToC:
import { LitElement, html, css } from "lit";
import type { TemplateResult } from "lit";
import { repeat } from "lit/directives/repeat.js";
class BlogTableOfContents extends LitElement {
private _sections: NodeListOf<HTMLElement> | null;
static properties = {
_sections: { state: true },
};
static styles = css`
:host {
display: inline-block;
border: 1px solid black;
margin: 0 1em 1em 0;
padding: 1em 1em 1em 0;
}
a { text-decoration: none; }
ul {
margin: 0;
padding: 0 0 0 1em;
line-height: 150%;
list-style-type: none;
}
`;
constructor() {
super();
this._sections = null;
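// Watch the whole document for <blog-section> elements being added, removed,
// or edited, and re-query the section list whenever that happens.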
(new MutationObserver(() => {
this._sections = document.querySelectorAll("blog-section");
})).observe(document, {
childList: true,
subtree: true,
characterData: true,
});
}
render() {
const sections = this._sections;
if (sections === null || sections.length === 0) {
return "";
}
return html`${sectionList(sections)}`;
}
}
customElements.define("blog-tableofcontents", BlogTableOfContents);
const keyID = (x: HTMLElement) => x.id;
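// Build a parent/child tree of the <blog-section> elements, then render
// nested <ul> lists linking to each section's first heading.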
function sectionList(sections: NodeListOf<HTMLElement>) {
let tree: any = {};
let tops: HTMLElement[] = [];
sections.forEach((section) => {
tree[section.id] = {
element: section,
children: [],
};
const parent = (section.parentNode! as HTMLElement).closest("blog-section");
if (parent) {
tree[parent.id].children.push(section);
} else {
tops.push(section);
}
});
function sectionTemplate(section: HTMLElement): TemplateResult | null {
const header = section.querySelector("h1,h2,h3,h4,h5,h6");
if (header === null) {
return null;
}
const children = tree[section.id].children;
let childList = null;
if (children.length > 0) {
childList = html`<ul>${repeat(children, keyID, sectionTemplate)}</ul>`;
}
return html`<li><a href="#${section.id}">${header.textContent}</a>${childList}</li>`;
}
return html`<ul>${repeat(tops, keyID, sectionTemplate)}</ul>`
}
I’m sure a professional web developer would do a better job, but I mostly do backend development, and of my UI experience maybe half is in native GUI applications (Gtk or Qt). Trying to get CSS to render something even close to acceptable can take me days.
That’s why I love web components so much, each element has its own little mini-DOM with scoped CSS. I can just sort of treat them like custom GUI widgets without worrying about whether adjusting a margin is going to make something else on the page turn purple.
NeXTStep had filter services, and MacOS X retained them, at least for a while.
These would translate between file types automatically. For example my TextLightning PDF → RTF converter was also available as a filter service, and with it installed, TextEdit could open PDF files, getting the text content as provided by TextLightning.
It was pretty awesome.
I think that’s also exactly the kind of code that’s least amenable to improvements. “get something from this URL, post something to that URL” is quite specific and not likely to be any shorter than it already is without losing precision. No matter how you look at it, we still need to tell the computer what to do when it hasn’t already been written. And glue code is typically not already written.
s3:mybucket/hello.txt ← 'Hello World!'
Native-GUI distributed system in a tweet.
text → ref:s3:bucket1/msg.txt.
If you have the right abstractions, you can replace a lot of glue code with “→”
That’s like those examples where a UNIX pipeline is so much shorter than a “real” program just because it happens to fit exactly the case it was designed for, and has a lot of the same caveats:
And on and on and on.
There’s a lot of complexity and not all of it can simply be swept under the rug.
That’s like those examples where a UNIX pipeline is so much shorter than a “real” program just because it happens to fit exactly the case it was designed for
As I explain in the post: yes, it looks like code golf. But it isn’t.
The key to getting compatibility is to constrain the interfaces, but constraining interfaces limits applicability. Unix pipes/filters are at one end of the spectrum: perfectly composable, but also very, very limited. ObjS has Polymorphic Write Streams, which also follow a pipe/filter model but are much more versatile.
Another example that is quite composable is http/REST servers, which is why we have generic intermediaries. Again, ObjS has an in-process, polymorphic equivalent that has proven to be extremely applicable and very versatile.
These two also interact very synergistically, and things like variables and references also play along. While I did build it, I was a bit blown away when I figured out how well everything plays together.
And ObjS is a generalisation, so call/return is still available, for example you can hook up a closure to the output of a textfield.
textfield → { :msg | stdout println:msg. }.
Of course it is easier to just hook up stdout directly:
textfield → stdout
What if I want to store and retrieve structured data (like JSON)?
You pop on a mapping store and get dictionaries. You can also pop on an additional mapping store to convert those dictionaries to custom objects, or configure the JSON converter store to parse directly to objects.
# count docker images
scheme:docker := MPWJSONConverterStore -> ref:file:/var/run/docker.sock asUnixSocketStore.
docker:/images/json count
What about authentication handling in S3?
Similarly, you set up your S3 scheme-handler with the required authentication headers and then use that.
What about error handling and recovery?
Always a tricky one. I’ve found the concept of “standard error” to be very useful in this context, and used it very successfully in Wunderlist.
People have been trying to make Unix-y desktop environments look like Mac OS for just about forever – I recall some truly hideous ones in the 2000s – but I’ve never seen one get close enough to even achieve an uncanny-valley effect. Looking at the screenshots, this is yet another one that’s not even close.
And in general the idea of “finesse of macOS” on a *nix desktop environment is contradictory. You can’t have both the “freedom of (insert your favorite OS here)” and the level of enforced conformity of interface design and other niceties that makes macOS actually nice to use. There simply are too many developers out there building too many applications which will refuse to adopt the necessary conventions, and the project almost certainly doesn’t have the resources to fork and maintain all of them.
I am not so sure.
The trouble is that people focus on trivial cosmetics rather than structure & function. Themes, for example: those are just not important.
Ubuntu’s Unity desktop got a lot of the OS X user-level functionality. A dock/launcher that combined open windows with app launchers, which had indicators to show not only that a window was open (Mac OS X style) but how many windows were open (a gain over OS X), and had a standardised mechanism for opening additional empty windows (another gain over OS X).
But people didn’t register that, because it was on the left. Apple’s is at the bottom by default (although you can move it, but many people seem not to know that.) NeXT’s was at the right, but that’s partly because NeXT’s scrollbars were on the left.
Cosmetics obscured the functional similarity.
Unity showed that a global menu bar on Linux was doable, and somehow, I don’t know how, Unity did it for Gtk apps and for Qt apps and for apps using other frameworks or toolkits. Install new software from other desktops and it acquired a global menu bar. Still works for Google Chrome today. Works for the Waterfox browser today, but not for Firefox.
On top of that, Unity allowed you to use that menu with Windows shortcut keystrokes, and Unity’s dock accepted Windows’ Quick Launch bar keystrokes. But that went largely unnoticed too, because all the point-and-drool merchants don’t know that there are standard Windows keystrokes or how to use them.
Other aspects can be done as well: e.g. Mac OS X’s .app bundles are implemented in GNUstep, with the same filenames, the same structure, the same config files, everything. And simpler than that, AppImages provide much the same functionality. So does GoboLinux.
This stuff can be implemented on FOSS xNix: the proof is that it’s already been done.
But nobody’s brought the pieces together in one place.
There are a few things that are harder:
Making the same keyboard shortcuts work everywhere. Even little things like the navigation within a text field can behave differently between toolkits, though the big distros have configured at least GTK and Qt to behave the same way. Beyond that, on macOS, command-, will open preferences in any application that has preferences. Command-shift-v pastes and matches style in every application that has rich text. There are a load of shortcuts that are the same everywhere. Apple achieved this by having XCode (and, before that, Interface Builder) populate the default menu bar with all of these shortcuts preconfigured. This is much harder to do with a diverse tooling ecosystem.
Drag and drop works everywhere. It’s somewhat embarrassing that drag-and-drop works better between MS Office apps on macOS than Windows. There are a couple of things that make this work well on macOS:
There are some things that could be improved, but this model has worked pretty well since 1988. It’s significantly augmented by the fact that there’s a file icon in the title bar (newer macOS has hidden this a bit) that means that any document window has a drag source for the file. I can, for example, open a PDF in Preview and then drag from the title bar onto Mail.app’s icon to create a new file with that PDF file as an attachment.
The global menu bar mostly works with Qt and GTK applications but it requires applications to have separate app and document abstractions, which a lot of things that aim for a Windows-like model lack. Closing the last window in a macOS app doesn’t quit the app, it leaves the menu bar visible and so you can close one document and then create a new one without the app quitting out from under you. The Windows-equivalent flow requires you to create the new doc and then close the old one, which I find jarring.
The macOS model works particularly well on XNU because of the sudden termination mechanism that originated on iOS. Processes can explicitly park themselves in a state where they’re able to be killed (with the equivalent of kill -9) at any point. They are moved out of this state when input is available on a file descriptor that they’re sleeping on. This means that apps can sit in the background with no windows open and be killed if there’s memory pressure.
Sudden termination requires a bunch of cooperation between different bits of the stack. For example, the display server needs to own the buffers containing the current window contents so that you can kill an app without the user seeing the windows go away. When the user selects the window, then you need the window server to be able to relaunch the app and give it a mechanism to reattach to the windows. It also needs the window server to buffer input for things that were killed and provide it on relaunch. It also needs the frameworks to support saving state outside of the process.
Apple has done a lot of work to make sure that everything properly supports this kind of state preservation across app restarts. My favourite example is the terminal, which will restore all windows in the same positions, each with a UUID in an environment variable. When my Mac reboots, all of my remote SSH sessions are automatically restored by a quick check in my .profile to see if I have a file corresponding to the session UUID and, if so, reading it and reestablishing the remote SSH session.
I prefer not to engage with comments that resort to phrases like “point-and-drool”.
But I’ll point out that you’re not really contradicting here – Unity was a significant amount of work that basically required an entity with the funding level of Ubuntu to pull off even briefly, and in general Ubuntu’s efforts to make the desktop experience more “polished” and easier have been met with outright hostility. Heck, even just GNOME tends to get a lot of hate for this: when they try to unify, simplify, and establish clear conventions, people attack them for “dumbing down”, for being “control freaks”, for their “my way or the highway” approach, for taking away freedom from developers/users/etc., and that ought to be the clearest possible evidence for my claim about the general contradiction between wanting the “finesse” of macOS and the “freedom” of a Free/open-source stack.
I prefer not to engage with comments that resort to phrases like “point-and-drool”.
[Surprised] What’s wrong with it? It is fairly mildly condemnatory, IMHO. FWIW, I am not an American and I do not generally aim to conform to American cultural norms. If this is particularly offensive, it’s news to me.
I completely agree that Unity was a big project and a lot of work, which was under-appreciated.
But for me, a big difference is that Unity attempted to keep as many UI conventions as it could, to accommodate multiple UI methods: mainly keyboard-driven and mainly pointing-device driven; Windows-like conventions (e.g. window and menu manipulation with the standard Windows keystrokes) and Mac-like conventions (e.g. a global menu bar, a dock, etc.)
GNOME, to me, says: “No, we’re not doing that. We don’t care if you like it. We don’t care if you use it. We don’t, and therefore, it’s not needed. You don’t need menu bars, or title bars, or desktop icons, or a choice of sidebar in your filer, or any use for that big panel across the top of the screen. All that stuff is legacy junk. Your phone doesn’t have it, and you use that, therefore, it’s baggage, and we are taking all that away, so get used to it, move on, and stop complaining.”
People have been trying to make Unix-y desktop environments look like Mac OS
Starting with Apple. :-). macOS is Unix.
And in general the idea of “finesse of macOS” on a *nix desktop environment is contradictory.
Considering the above, that seems … unlikely. Or maybe macOS is a contradiction onto itself?
NeXTstep arguably had even higher standards.
It’s funny, because I use the old “stable but ugly server os vs. pretty+easy-to-use but crashy client OS”-dichotomy as an example of things we used to believe were iron laws of nature, but that turned out to be completely accidental distinctions that were swept away by progress.
My phone is running Unix, and so is just about everybody’s. I think my watch is running Unix as well.
and the level of enforced conformity of interface design
Considering how crappy the built in apps are these days compared to 3rd party apps, and how little they conform to any rules or guidelines, I think that’s at best a highly debatable claim.
A lot of the “finesse” of macOS actually isn’t in superficial appearance, though, it’s in systemwide conventions that are essentially impossible to enforce in a useful way without also bringing in the level of platform control that’s alleged to be evil when Apple does it.
Right, there’s a lot of subtle things you’d have to enforce within the ecosystem to make it work even close - just the notion of NSApplication and friends is alien to a world where it’s assumed a process has to have a window to exist in the GUI.
People have been trying to make Unix-y desktop environments look like Mac OS
Starting with Apple. :-). macOS is Unix.
I think if you read “unix-y desktop environments” with “unix-y” modifying “desktop environment”, as opposed to meaning “unixes that have a desktop environment” (e.g. “X11 DEs/WMs such as GNOME/KDE/Enlightenment”), this observation is more compelling. A common theme of most phones, most (all?) watches, NextStep and Mac OS is that they are very much not running “a unix-y desktop environment.”
If I want a Mac, I’ll buy a Mac. What’d be more interesting is trying to build something better. Despite Gnome 3’s flaws, I do appreciate it at least tried to present a desktop model other than “Mac OS” or “Windows 98”.
Yes, I have to give you that.
I personally hate the model it proposes, but you make a good point: at least it’s not the same old same old.
Oddly, Pop OS pleases me more, because it goes further. Pop OS, to me, says:
“OK, so, GNOME took away your ability to manage windows on the desktop and it expects you to just use one virtual desktop per app. Window management is for simps; use it like a phone, all apps full screen all the time. We don’t think that’s enough, so, we’re going to keep GNOME but slap a tiling window manager on top, because we respect that you might not have a vast desktop or want one app per screen, so let us help you by automating that stuff… without sacrificing a desktop environment and switching to some kind of alien keyboard-driving tiling window-manager thing that takes away everything you know.”
Isn’t SerenityOS an example of the finesse of macOS with some level of open source freedom?
It depends how you define freedom, but they seem to have a lot of different people working on entirely different things that are still built on a shared vision, seemingly quite well. It is still early, but I wouldn’t give up on the idea altogether.
I understand what you are saying, but I think it’s an interesting compromise that’s surprisingly effective (from the outside perspective).
Objective-S describes itself as “possibly the first general purpose programming language”, and says other languages making this claim are “actually DSLs for the domain of algorithms”.
And then, you see the language…and it’s…a DSL for writing applications using a native UI framework?
I mean, I don’t think it’s a bad-looking language, it just seems to have a really contradictory statement right in the introduction.
a DSL for writing applications using a native UI framework?
Not exactly sure where you are getting this from. Yes, a bunch of the examples are framework-specific, but of course for something that’s about better ways of gluing together components you need to start with some components. fib(n) doesn’t cut it, that’s algorithmic. Unless you also want to create all the components from scratch, they’ll come from somewhere, but that doesn’t mean the mechanisms are in any way specific to that.
So I guess the question is why the mechanisms also look like they are specific to one UI framework, even though they are not.
I guess I need to add a lot more examples.
Thanks for the feedback, really useful!
The examples were pretty useful in showing me what the language actually was like. I’m glad you included them, because that sentence just totally confused me as to what the actual reasoning for Objective-S was. The introductory paragraph felt super academic, but then looking down I felt like I could see the practicality in the code. I’m not sure you need more/better examples, just a more down-to-earth introduction, showcasing the language’s strengths. After you explained it here I kinda do get what you mean, like those other languages are designed so you can write algorithms, and ergonomically are not truly designed for primarily writing UIs. Yet, that’s what we end up using a lot of these languages for.
I can only think of a few other widely-used programming languages designed primarily for building UIs: Elm, HTML, and Swift. And Swift is really a lot more complicated because it dabbles in purposes beyond just UI development. It would be nice to see a Smalltalk-inspired programming language designed around UI programming that could be used on any platform. So I'm excited to see where this goes!
I call clickbait. It’s not a “distributed system”, just an S3 uploader client. You could probably write one in 280 bytes of Python or Rust too…
I’d love to see that!
Initially I thought it should also be at least possible, if not trivial to do so in for example Python or Ruby (and of course the article never claims that you can’t do it). But looking more closely I was a bit more skeptical.
I hadn’t realized that in Smalltalk-72 each object parsed incoming messages. Every object had its own syntax. Ingalls cites performance as an issue then, but perhaps the idea is worth revisiting.
Also, here’s a link to Schorre’s META II paper that he mentions: https://doi.org/10.1145/800257.808896
Smalltalk-72 - in contrast to all later Smalltalks - indeed had true message passing. That's what Kay is usually referring to; it was his language design. The Smalltalk we know today, starting with Smalltalk-74, was invented by Ingalls and had little in common with Smalltalk-72. Smalltalk-74 and later have virtual methods instead of message passing; but performance was still an issue.
Each object having its own syntax turned out to be an almost bigger problem than performance. And I am not sure about the “almost”.
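To make the distinction concrete, here is a rough sketch - in Java rather than Smalltalk, with made-up names, and emphatically not real Smalltalk-72 semantics - of "each object parses its own incoming messages" versus ordinary fixed-method dispatch:

```java
import java.util.Arrays;
import java.util.List;

// Sketch only: the receiver is handed the raw token stream and decides
// itself what the tokens mean, so every object has its own little grammar.
interface ParsingObject {
    Object receive(List<String> tokens);
}

class TurtlePen implements ParsingObject {
    int x;

    @Override
    public Object receive(List<String> tokens) {
        switch (tokens.get(0)) {
            case "forward": x += Integer.parseInt(tokens.get(1)); return x;
            case "where":   return x;
            default:        return "doesNotUnderstand: " + tokens.get(0);
        }
    }
}

class MessageDemo {
    public static void main(String[] args) {
        ParsingObject pen = new TurtlePen();
        // Smalltalk-72 style: hand over tokens and let the object parse them.
        System.out.println(pen.receive(Arrays.asList("forward", "10"))); // 10
        // Later Smalltalks (and Java) instead resolve a fixed selector,
        // e.g. pen.forward(10) -- the object never sees raw tokens.
    }
}
```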
Because Windows has historically had several subsystems, including the OS/2 subsystem, POSIX subsystem, and, most famously, the Win16 subsystem, which were all named along the lines of "Windows Subsystem for OS/2". WSL1 built on roughly the same model, and so ended up with a similar name. WSL2 is entirely different, but by now we're stuck with the name.
Note, I’m not really disagreeing with you, but just explaining that this naming convention is just how Windows has named its subsystems for a long time.
Would it have made more sense to call it “OS/2 Subsystem for Windows?” Or is there some reason the reverse made more sense?
Back in the 90s, when this showed up with the first versions of Windows NT, the word “applications” was either explicit or obviously implicit (I sincerely forget which) for all of these. So “Windows Subsystem for OS/2 Applications,” or “…for POSIX Applications,” if you will. At the time, Windows NT was trying to subsume minicomputer and (in some cases) mainframe environments, but not vice-versa, so the ambiguity the elision has in 2021 just didn’t exist in 92 or whenever this was.
One wonders why the word "Windows" was not implicit too. Of course it is a Windows subsystem: it is a subsystem on your Windows. You don't have subsystems belonging to any operating system other than Windows on your Windows. Otherwise it would not be a _sub_system, right?
The Windows 2000 bootscreen would say “built on NT technology”. I always thought that was slightly amusing (I would have done the same though since not everyone knows that NT stands for “new technology”; most people in fact don’t know).
"NT" did stand for New Technology, but I think by the time W2000 rolled around it was just its own term - "Windows NT" was the server version of Windows.
This joke was already going around circa 2002 or so: https://imgur.com/a/UhSmCdf (this one's a newer incarnation, I think). By that time "NT" was definitely its own term. I remember people thinking it stood for "networking" as early as 1998 or so.
This is from the company that named the phone it pinned all its iPhone rivalry hopes on the "Windows Phone 7 Series", so honestly I don't think we can ask too much.
Think of it this way: you’d like it to be
(Linux Subsystem) for Windows
But instead it is:
Windows (Subsystem for Linux)
There’s just a missing apostrophe to denote possession that would fix it all:
Windows’ Subsystem for Linux
but you don't run Linux (the kernel), you just run a GNU userland, right?
(inb4 “I’d like to interject…”)
In this particular case, WSL2 literally is purely about the Linux kernel. You can use any distro you want, including those with a BSD or Busybox userland.
It is a Linux kernel running in a virtual machine. WSL1 was a binary compatibility layer, WSL2 is actually a VM running a Linux kernel.
My understanding is that by that point there were a few “Windows Subsystems” already, not necessarily implementing other OS APIs.
There were originally 5, I think:
• Win32
• WOW – Windows on Windows – the Win16 subsystem
• DOS
• OS/2 (for text-mode apps only)
• POSIX (the NT Unix environment)
OS/2 was deprecated in 3.51 and dropped in 4.
64-bit NT drops the old WOW16 subsystem, but replaces it with WOW64 for running Win32 apps on Win64.
The POSIX subsystem has now been upgraded to be Linux-kernel compatible.
WSL 2 is different; it’s a dedicated Hyper-V VM with a custom Linux kernel inside.
I usually really enjoy your writing, but this seems totally disingenuous.
You don’t even define what qualifies as “OOP” to you, so how could anyone NOT think that you’re going to “No true Scotsman” them if they show you what they believe to be good OOP?
Does OOP have to use class inheritance? Does it have to involve mutable global state? Can you write OOP in the C language?
Or, if you’re open to someone showing you a piece of code and you’ll just accept at face value that it’s OOP with no argument, then you should say THAT.
You need to address that. But even then, I wouldn't actually expect any replies because of the point(s) that @singpolyma raised - nobody is going to volunteer code to someone who is basically promising to (publicly) criticize it.
so how could anyone NOT think that you’re going to “No true Scotsman” them if they show you what they believe to be good OOP?
I'm a bit confused. Usually it's the other way around. I have some concrete criticism against OOP, and I hear "Oh, that's just because you don't know OOP. That's not good OOP." And that keeps on going forever, at which point it seems that good OOP is everything that books and articles say, except all the things that people actually do in practice - but surely somewhere there must be some pristine OOP code.
I haven’t thought that I could pull a “no true Scotsman” the other way, though I guess you’re right. That’s not my intention though.
Does OOP have to use class inheritance? Does it have to involve mutable global state? Can you write OOP in the C language?
I guess I'm open-minded about it. The thing is, OOP is not well defined. Look - the code I write in Java is not very far from OOP: it always uses a lot of interfaces and DI, and yet I don't consider it OOP for various subtle reasons, which usually get lost in abstract discussion.
Having read and written many articles in the OOP debate, I think we as a community are just talking past each other now, criticizing/defending something that is not well defined and subjective.
So I think it would be more productive to go through some actual code and talk about it in relation to the OOP debate.
nobody is going to volunteer code to someone who is basically promising to (publicly) criticize it.
You can volunteer someone else’s code, I don’t mind. :D
I actually thought that this is going to be a default, because people are usually too shy to think their code must be the best one.
That's a thing with public open-source projects. You put them out there, you have to accept the fact that someone might… actually read the code and judge it.
I promise not to be a douche about it. There’s plenty of my own code on github, none of it pristine, anyone is free to retaliate. :D
If you do find examples of good OOP code, I predict they will mostly be written in Smalltalk, Erlang, or Dylan. I haven't used Smalltalk or Dylan in earnest, and it's been far too long since I used Erlang, or I'd find some examples myself.
Edit: thinking more about this, it feels like the only way to find good OOP code is to try to find that mythical domain in which inheritance is actually a benefit rather than the enormous bummer it normally is, and one in which polymorphism is actually justified. The only example which comes to mind is GUI code where different kinds of widgets could inherit off a class hierarchy, which is why I expect you’d have the best luck by looking in Smalltalk for your examples.
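For what it's worth, the widget case looks roughly like this (a toy Java sketch with made-up names, not any real toolkit's API): the base class carries the shared geometry plumbing, and subclasses override only what differs.

```java
// Toy sketch: a shallow widget hierarchy, the one domain where inheritance
// plus polymorphism tends to pull its weight. Hypothetical names throughout.
abstract class Widget {
    int x, y, width, height;

    void moveTo(int newX, int newY) { x = newX; y = newY; } // shared plumbing
    abstract void paint();                                  // each widget differs here
}

class Button extends Widget {
    String label;
    Button(String label) { this.label = label; }
    @Override void paint() { System.out.println("[ " + label + " ]"); }
}

class Checkbox extends Widget {
    boolean checked;
    @Override void paint() { System.out.println(checked ? "[x]" : "[ ]"); }
}

class WidgetDemo {
    public static void main(String[] args) {
        Widget[] screen = { new Button("OK"), new Checkbox() };
        for (Widget w : screen) w.paint(); // polymorphic dispatch over the hierarchy
    }
}
```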
But overall it’s a misguided task IMO; merely by framing it as being about “OOP” in the first place you’ve already gone wrong. OOP is just a mash-up of several unrelated concepts like inheritance, polymorphism, encapsulation, etc, some of which are good and some of which are very rarely helpful. Talk instead about how to do encapsulation well, or about in what limited domains polymorphism is worth the conceptual overhead.
But overall it’s a misguided task IMO; merely by framing it as being about “OOP” in the first place you’ve already gone wrong. OOP is just a mash-up of several unrelated concepts like inheritance, polymorphism, encapsulation, etc, some of which are good and some of which are very rarely helpful.
I have a very similar intuition, but the point of the exercise is to find whatever people would consider as good OOP and take a look at it.
If you do find examples of good OOP code, I predict they will mostly be written in either Smalltalk, Erlang, and Dylan.
I’m not sure about Dylan, but the rest fits my intuition of one “good” piece of OOP being message passing, which I tend to call actor based programming.
This is not quite true. OOP permits asynchronous message passing but actor-model code requires it. Most OO systems use (or, at least, default to) synchronous message passing.
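A small Java illustration of that difference (hypothetical names, and a deliberately crude "actor"): a plain method call blocks the sender until the result is ready, while an actor-style object only ever receives messages through a mailbox and handles them on its own thread.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Synchronous message passing: the caller waits for the answer.
class Counter {
    private int n;
    int increment() { return ++n; }
}

// Asynchronous, actor-style message passing: the caller just enqueues a
// message; the receiver processes its mailbox one message at a time.
class CounterActor {
    private final BlockingQueue<Runnable> mailbox = new LinkedBlockingQueue<>();
    private int n;

    CounterActor() {
        Thread worker = new Thread(() -> {
            try {
                while (true) mailbox.take().run();
            } catch (InterruptedException e) {
                // shut down
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    void increment() { mailbox.add(() -> n++); } // fire and forget
}

class MessagingDemo {
    public static void main(String[] args) {
        System.out.println(new Counter().increment()); // blocks, prints 1
        new CounterActor().increment();                // returns immediately
    }
}
```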
This conflation is really problematic. I get that this is what it was supposed to mean a long time ago. But then OOP became synonymous with Java-like class-oriented programming. So now, any time one wants to talk about contemporary, common "OOP", a group of people shows up with "oh, but look at Erlang and Smalltalk, yada, yada - real OOP", which, while technically a fair point, is nowhere close to what real-life OOP has looked like in my experience.
It's also completely ahistorical. The actor model has almost nothing to do with the development of object-based programming languages, and the actor model of concurrency is different from the actor model of computation, which is nonsensical (the inventor thinks he proved Turing and Gödel wrong with the actor model). Alan Kay changed his mind on what "true OOP was supposed to be" in the 1990s.
Alan Kay changed his mind on what "true OOP was supposed to be" in the 1990s.
Do you have a link for what you mean, exactly? I'm aware of http://userpage.fu-berlin.de/~ram/pub/pub_jf47ht81Ht/doc_kay_oop_en .
OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP. There are possibly other systems in which this is possible, but I’m not aware of them.
Is that what you have in mind?
“OOP means only messaging” is revisionism, his earlier writing was much more class-centric. I document some of this here: https://www.hillelwayne.com/post/alan-kay/
I just read that article, twice, and I still can’t figure out what the point is that it is trying to make, never mind actually succeeding at making it.
It most certainly isn’t evidence, let alone proof of any sort of “revisionism”.
The point it’s trying to make is that if you read Alan Kay’s writing from around the time that he made Smalltalk, so 72-80ish, it’s absolutely not “just messaging”. Alan Kay started saying “OOP means only messaging” (and mistakenly saying he “invented objects”, though he’s stopped doing that) in the 90s, over a decade and a half after his foundational writing on OOP ideas. It’d be one thing if he said “I changed my mind / realized I was wrong”, but a lot of his modern writing strongly implies that “OOP is just messaging” was his idea all along, which is why I call it revisionism.
I know what the point is that you (now) claim it makes. But that, to me, looks like revisionism, because the contents of the article certainly don’t support, and also don’t appear to actually make that claim in any coherent fashion.
First, the title of the article makes a different claim: “Alan Kay did not invent objects”. Which is at best odd (and at worst somewhat slanderous), because he has never claimed to have invented object oriented programming, being very clear that he was inspired very directly by Simula and a lot of earlier work, for example the Burroughs machine, Ivan Sutherland’s Sketchpad etc.
In fact, he describes how one of his first tasks, as a grad student(?), was to look at this hacked Algol compiler, and spreading the listing out in the hallway to try and grok it, because it did weird things to flow control. That was Simula.
Re-reading your article, I am guessing you seem to think of the following quote as the smoking gun:
I mean, I made up the term “objects.” Since we did objects first, there weren’t any objects to radicalize.
At least there was nothing else I could find, and you immediately follow that with “He later stopped claiming…”. The interview you cite is from 2012. Even the Squeak mailing list quote you showed was from 1998, and it refers to OOPSLA ‘97. His HOPL IV paper The Early History of Smalltalk is from 1993. That’s where he tells the Simula story.
So he “stopped” making the claim you insinuate him making, two decades before, at least according to you, he started making that claim. That seems a bit…weird. Of course, there were rumours on the Squeak mailing list that Alan Kay was actually a time-traveller from the future, partly fuelled by a mail sent from a machine with misconfigured system clock. But apart from that scenario, I have a hard time seeing how he could start doing what you claim he did two decades after he stopped doing what you claim he did.
The simpler explanation is that he simply didn’t make that claim. Not in that 2012 interview, not before 2012 and not after 2012. And that you vastly mis- and over-interpreted an off-the-cuff remark in a random interview.
Please read the first sentence carefully: “I made up the term ‘objects’”. (My emphasis). He is clearly, and entirely consistently both with the facts and his later writings, claiming to have coined the term.
And he is clearly relating this to the systems that came later, C++ and Java, relative to which the Smalltalk “everything is an object” approach does appear radical. But of course it wasn’t a “radicalisation” relative to the current state of the art, because the current state of the art came later.
Yeah, he could have mentioned Simula at that point, but if you’ve ever given a talk or an interview you know that you sometimes omit some detail in order to move the main point along. And maybe he did mention it, but it was edited out.
But the question at hand was what OO is, according to Alan Kay, not whether he claimed to have invented it. Your article only addresses the second question, incorrectly it turns out, but doesn’t say anything whatsoever about the former.
And there it helps to look at the actual artefacts. Let’s take Smalltalk-72, the first Smalltalk. It was entirely message-based, objects and classes being a second-order phenomenon. In order to make it practical, because Smalltalk was a means to an end for them, not an end in itself, this was made more like existing systems over time, culminating in Smalltalk-80. This is a development Alan has consistently lamented.
In fact, in the very 2012 interview you misrepresent, he goes on to say the following:
The first Smalltalk was presented at MIT, and Carl Hewitt and his folks, a few months later, wrote the first Actor paper. The difference between the two systems is that the Actor model retained more of what I thought were the good features of the object idea, whereas at PARC, we used Smalltalk to invent personal computing
So Actors “retained more of the good features of the object idea”. What “good features” might that be, do you think?
In fact, there’s Alan’s famous quip at OOPSLA ’97 keynote, The Computer Revolution hasn’t Happened Yet.
Actually I made up the term “object-oriented”, and I can tell you I did not have C++ in mind.
I am sure you’ve heard it. Alas, what he said next is hardly reported at all:
The important thing here is, I have many of the same feelings about Smalltalk.
And just reiterating the point above he goes on:
My personal reaction to OOP when I started thinking about it in the sixties.
So he was reacting to OOP in the sixties. The first Smalltalk was in the seventies. So either he thinks of himself as a time-traveller, or he clearly thinks that OOP was already invented, just like he always said.
Anyway, back to your claim of "revisionism" regarding messaging. I still can't get a handle on it, because all the sources you cite absolutely harp on the centrality of messaging. For example, in the "Microelectronics and the Personal Computer" article, the central idea is a "message-activity" system. Hmm… sounds pretty message-centric to me.
The central idea in writing Small talk programs, then, is to define classes which handle communication among objects in the created environment.
So the central idea is to define classes. Aha, smoking gun!! But what do these classes actually do? Handle communication among objects. Messages. And yes, in a manual describing what you do in the system, what you do is define classes. Because that’s the only way to actually send and receive messages.
What else should he have written, in your opinion?
Please read the first sentence carefully: “I made up the term ‘objects’”. (My emphasis). He is clearly, and entirely consistently both with the facts and his later writings, claiming to have coined the term.
He didn’t coin the term, either. The Simula 67 manual formally defines “objects” on page 6.
Anyway, you’re way angrier about this than I expected anybody to be, you’re clearly more interested in defending Kay than studying the history, and you’re honestly kinda scaring me right now. I’m bowing out of this whole story.
OK, you just can’t let it go, can you?
First you make two silly accusations that don't hold up to even the slightest scrutiny. I mean, they are in the "wet roads cause rain" league of inane. You get shown to be completely wrong. Instead of coming to terms with just how wrong you were, and maybe with what your own personal motivations were for making such silly accusations, you just pile on more silly accusations.
What axe do you have to grind with Alan Kay? Because you are clearly not rational when it comes to your ahistorical attempts to throw mud at him.
As I have clearly shown, the only one who hasn't adequately studied history here is you. Taking one off-the-cuff remark from 2012 and defining that as "having studied history" is beyond the pale, when the entire rest of the history around this subject contradicts your misinterpretation of that out-of-context quote.
I’m sympathetic, but I mean, at some point you have to just give up and admit that the word has been given so many meanings that it’s no longer useful for constructive discussion and just switch to more precise terminology, right? Want to talk about polymorphism? Cool; talk about polymorphism. Want to talk about the actor model? Just say “actor”.
no longer useful for constructive discussion and just switch to more precise terminology, right?
I guess you’re right.
It's just that there are still so many books, articles, and talks mentioning and praising OOP that it's hard to resist. (Do schools still teach OOP?) It's not useful for constructive discussion, but the ghost of that vague amalgam of OOP ideas is still haunting us, and I can see some of those ideas in the code I have to work with sometimes. People keep adding pointless getters and setters in the name of encapsulation and so on. Because that's in some OOP book.
Ironically… by talking about OOP critically I'm in a way only perpetuating its existence. But talking about these ideas in isolation doesn't seem to do any damage to the ghost of OOP. Everyone keeps talking about inheritance being tricky, and yet I keep seeing inheritance hierarchies where they shouldn't be.
And this is what I was talking about in my top-level comment. It sounds like you’re requiring class inheritance as part of your definition of OOP. Which I think is bunk. I don’t care what Java does or has for features. Any code base that is architected as black-box, stateful, “modules” (classes, OCaml modules, Python packages, JavaScript modules, etc) should count as OOP. Inheritance is just a poor mechanism for code-reuse.
There’s no actual reason to include class inheritance in a definition of OOP anymore than we must include monads in our definition of FP (we shouldn’t).
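In case it helps, a minimal Java sketch of that reading of "object" (names are made up): a black-box, stateful module whose representation is hidden behind a small set of operations, with no inheritance anywhere.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical example: callers see only submit/next/size,
// never the representation or the mutable state behind them.
class JobQueue {
    private final Deque<String> pending = new ArrayDeque<>(); // hidden, mutable state

    void submit(String job) { pending.addLast(job); }
    String next()           { return pending.pollFirst(); }   // null when empty
    int size()              { return pending.size(); }
}
```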
I would say modules rendered black box by polymorphic composition, with state being optional but allowed in the definition, of course.
I feel like mutable state is actually important for something to be an “object”.
And when I say mutable state, I'm also including if the object "represents" mutable state in the outside world without having its own, local, mutable fields in the code. In other words, an object that communicates with a REST API via HTTP represents mutable state because the response from the server can be different every time and we can issue POST requests to mutate the state of the server, etc. So, even if the actual class in your code doesn't have mutable properties, it can still be considered to "have" mutable state.
Anything that doesn’t have mutable state, explicitly or implicitly, isn’t really an “object” to me. It’s just an opaque data type. If you disagree, then I must ask the question “What isn’t an object?”
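A hedged Java sketch of that point (the endpoint and all names are invented): the class below has no mutable fields of its own, yet by this definition it still "has" mutable state, because the server behind it does - two get() calls can return different answers.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical example: all fields are final, but the object represents
// mutable state that lives on the server it talks to.
class RemoteCounter {
    private final HttpClient client = HttpClient.newHttpClient();
    private final URI endpoint = URI.create("https://example.invalid/counter");

    String get() throws Exception {
        HttpRequest req = HttpRequest.newBuilder(endpoint).GET().build();
        return client.send(req, HttpResponse.BodyHandlers.ofString()).body();
    }

    void increment() throws Exception {
        HttpRequest req = HttpRequest.newBuilder(endpoint)
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        client.send(req, HttpResponse.BodyHandlers.discarding());
    }
}
```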
It’s just an opaque data type.
A data type rendered opaque by polymorphism, specifically. A C struct with associated functions is not an object, even if the fields are hidden with pointer tricks and even if the functions model mutable state, because I can't take something else and make another object out of it that can safely be used with those functions.
So are you saying that “objects” have some requirement to be polymorphic per your definition?
Polymorphism is orthogonal to my definition. Neither an opaque data type, nor an “object” need to have anything to do with polymorphism in my mind. An opaque data type can simply be a class with a private field and some “getters”.
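For example (hypothetical Java), an opaque data type in that sense - hidden field, accessors, no mutation, and nothing polymorphic about it:

```java
// Just data behind accessors: opaque, immutable, non-polymorphic.
final class Money {
    private final long cents;
    Money(long cents) { this.cents = cents; }
    long getCents() { return cents; }
    Money plus(Money other) { return new Money(cents + other.cents); }
}
```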
it feels like the only way to find good OOP code is to try to find that mythical domain in which inheritance is actually a benefit rather than the enormous bummer it normally is
Oof. Inheritance is definitely not a feature I'd call out as a "good idea" part. I've been known to use it from time to time, but if you use it a lot and still end up with "good OOP", that would be nothing short of a miracle.
I’m a little bit suspicious of this plan. You specifically call out that you already have an anti-OOP bias to the point of even saying “no true Scotsman” and then say you plan to take anything someone sends you and denigrate it. Since no codebase is perfect and every practitioner’s understanding is always evolving, there will of course be something bad you can say especially if predisposed to do so.
If you actually want to learn what OOP is like, why not pick up a known-good text on the subject, such as /99 Bottles of OOP/?
I, for one, think this project seems very interesting. The author is correct that criticism of OOP is often dismissed by saying "that's just a problem if you don't do OOP correctly". Naturally, a response is to ask for an example of a project which does OOP "correctly", and see if the common critiques still apply to it.
Maybe the resulting article will be uninteresting. But I think I would love to see an article which dissects a particular well-written code-base, discusses exactly where and how it encounters issues which seem to be inherent in the paradigm or approach it’s using, and how it solves or works around or avoids those issues. I just hope the resulting article is actually fair and not just a rant about OOP bad.
EDIT: And to be clear, just because there seems to be some confusion: The author isn't saying that examples of good OOP aren't "real OOP". The author is saying that critiques of OOP are dismissed by saying "that's not real OOP". The author is complaining about other people who use the no true Scotsman fallacy.
Explicitly excluding frameworks seems to be a bit prejudiced, since producing abstractions that encourage reuse is where OOP really shines. OpenStep, for example, is an absolutely beautiful API that is a joy to use and encourages you to write very small amounts of code (some of the abstractions are a bit dated now, but remember it was designed in a world where high-end workstations had 8-16 MiB of RAM). Individual programs written against it don’t show the benefits.
Want to second OpenStep here, essentially Cocoa before they started with the ViewController nonsense.
Also, from Smalltalk, at least the Collection classes and the Magnitude hierarchy.
And yes, explicitly excluding frameworks is nonsensical. "I want examples of good OO code, excluding the things OO is good at".
Maybe the resulting article will be uninteresting. But I think I would love to see an article which dissects a particular well-written code-base, discusses exactly where and how it encounters issues which seem to be inherent in the paradigm or approach it’s using, and how it solves or works around or avoids those issues. I just hope the resulting article is actually fair and not just a rant about OOP bad.
That’s exactly my plan. I have my biases and existing ideas, but I’ll try to keep it open minded and maybe through talking about concrete examples I will learn something, refine my arguments, or just have a useful conversation.
The author is correct that criticism of OOP is often dismissed by saying "that's just a problem if you don't do OOP correctly".
I know this may look like splitting hairs, but while "that's only a problem if you don't do OOP correctly" would be No True Scotsman and invalid, what I see more often is "that's a problem, and that's why you should do OOP instead of just labelling random code full of if statements as 'OOP'" - which is a nomenclature argument, to be sure, but in my view it differs from No True Scotsman in that it's not a generalization but an attempt to call for a different way.
I agree that a properly unbiased tear-down of a good OOP project by someone familiar with OOP but without stars in their eyes could be interesting, my comment was based on the tone of the OP and a sinking feeling that that is not what we would get here.
OOP simplifies real objects and properties to make abstraction approachable for developers; the trade-off is accuracy (correctness) for simplicity.
So if you would try to describe the real world in OOP terms adequately, then an application will be as complex as the world itself.
This makes me think that proper OOP is unattainable in principle, with the only exception – the world itself.
One could argue in favor of Assembly and still be right, which doesn't make "every program should be written in Assembly" a good statement. It sounds, to me, like saying "English will never have great literature". It doesn't make much sense.
Microsoft has great materials on object-oriented design tangled inside their documentation of .NET; Tackle Business Complexity in a Microservice with DDD and CQRS Patterns is a good example of what you want, but it is not a repository, I'm afraid. Design Patterns has great examples of real-world code; they are old (drawing scrollbars) but they are great object-oriented programming examples.
Good code is good, no matter the paradigm or language. In my experience, developers lack understanding of the abstractions they are using (HTTP, IO, serialization, patterns, architecture, etc.) and that shows in their code. Their code doesn't communicate a solution very well because they don't understand it themselves.
you plan to take anything someone sends you and denigrate it.
I could do that, but then it wouldn't be very useful or convincing.
If you actually want to learn what OOP is like, why not pick up a known-good text on the subject, such as /99 Bottles of OOP/
Because no real program looks like this.
Seems like a case of laziness. Writing boilerplate dispatch to a composed class is annoying, but it's a time-limited activity. Even with something like 35 methods to implement, there's no way it could take more than one afternoon, and then you're done. I think the problem is that the friction puts people off, even if in the long run you would get the time back.
Or alternatively the problem is that this language encourages inheritance by elevating it to a central place in the design, whereas in reality inheritance is very rarely useful, and the places where it's justified are almost never about inheriting from one class to another. At the risk of oversimplifying: bad language design.
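The boilerplate in question looks roughly like this in Java (made-up names): one hand-written forwarding method per operation you want to expose, which is tedious but mechanical and only has to be written once.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical example of forwarding (delegation) boilerplate:
// the wrapper reuses List by composition and adds its own behaviour.
class AuditedList {
    private final List<String> inner = new ArrayList<>();

    void add(String s)       { System.out.println("add " + s); inner.add(s); }
    boolean remove(String s) { System.out.println("remove " + s); return inner.remove(s); }
    String get(int i)        { return inner.get(i); }
    int size()               { return inner.size(); }
    // ...and so on for every operation you care to expose.
}
```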
You're entirely right. In E, the extends keyword provides composition by default; an extended object is given a wrapper which dispatches to new methods before old methods. Cooperative inheritance is possible, but requires an explicit recursive definition.
“laziness” isn’t really a good way to analyze these sorts of things. When programmers are being “lazy” they’re just doing what their tools guide them to doing. We can and should modify these tools so that the “lazy” solution is correct rather than try to fix programmers.
Such a shame I can vote this up only once. If you hold the hammer by the handle and swing the head, you’re being lazy: you’re using the tool in the way that minimises your effort. If your tool is easier to use incorrectly than correctly, then you should redesign the tool.
For programming languages, by the time that they have an ecosystem that exposes these problems, it’s often too late to fix the language, but newer languages can learn from the experience.
and every time the interface changes you need to adjust all dispatchers…
That’s the fragile base class problem. Any addition to the base class can ruin your day. At least with composition, you can limit the damage to compile time.
Also, it's not like you need to override all 35 methods to conform to the map interface; it's enough to implement the couple that your code actually uses.
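One way to make that concrete in Java (hypothetical names): instead of conforming to the full Map interface, declare a narrow interface with only the operations the calling code needs and delegate just those.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a two-method interface instead of the full 35-method one.
interface StringLookup {
    String get(String key);
    void put(String key, String value);
}

class MapBackedLookup implements StringLookup {
    private final Map<String, String> inner = new HashMap<>();
    @Override public String get(String key)             { return inner.get(key); }
    @Override public void put(String key, String value) { inner.put(key, value); }
}
```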
Rust doesn’t even have interfaces for things like Collection or Map, and it creates zero problems. Most code does not care about abstract interfaces.
From the article: “…and less work to maintain”.