Threads for mpweiher

  1. 3

    I opened this post expecting it to be about web components, but after reading it I’m not sure if it might be discussing some completely separate technology with the same name.

    However this time there’s probably little (if any) advantage to using web components for server-rendered apps. Isomorphism is complex enough; web components adds extra complexity with the need to render declarative shadow DOM and the still-unknown solution for bundling CSS.

    Server-side rendering is the idea that HTML should be generated on the server – the “old-school” approach used by everything from CGI.pm to PHP to Rails. Web components are a natural fit for this because they provide a way for the server to write out a description of the page as HTML elements, without having to generate all the nested <div>s and CSS rules and such.

    This person is talking about rendering shadow DOM on the server, though, which … honestly it seems completely insane. I don’t know why you’d ever do that, it’s such a bizarre idea that I feel I must be misunderstanding.

    The core flaw of component-oriented architecture is that it couples HTML generation and DOM bindings in a way that cannot be undone. What this means in practice is that: […]

    • Your backend must be JavaScript. This decision is made for you.

    Just absolutely befuddling. Why would using web components imply the presence of a backend at all, much less require that it be written in JavaScript?

    My blog’s UI is written with web components. Its “backend” is a bunch of static HTML being served by nginx. If I wanted to add dynamic functionality and wrote a web component for it, then the client side would only need to know which URLs to hit – the choice of backend language wouldn’t be relevant to the client.
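
    For instance, here’s a minimal sketch of what such a component could look like (the /api/latest-posts endpoint and the element name are invented for illustration). It only speaks HTTP and JSON, so whatever serves that URL could be written in any language:

    // Hypothetical example: a custom element that fetches data from a URL
    // when it is attached to the page. Nothing here depends on the backend
    // language; the server just has to answer the request with JSON.
    class LatestPosts extends HTMLElement {
        async connectedCallback() {
            const response = await fetch("/api/latest-posts"); // invented endpoint
            const posts = await response.json();
            const list = document.createElement("ul");
            for (const post of posts) {
                const item = document.createElement("li");
                item.textContent = post.title;
                list.appendChild(item);
            }
            this.replaceChildren(list);
        }
    }
    customElements.define("latest-posts", LatestPosts);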

    I’m building my version of this vision with Corset. I look forward to seeing what other solutions arise.

    Oh, ok, this is some sort of spamvertising. I looked at the Corset website and it looks like it’s a JavaScript library for using CSS to attach event handlers to HTML elements. The value proposition over … well, anything else … isn’t clear to me. Why would I want to write event handlers in a CSS-like custom language? Even the most basic JS framework is better than this.

    1. 5

      I re-read the article to see if the author was confused about “Web Components” vs. “components on the web”, and the answer is no. The author links to a WC library they wrote that mimics React, showing familiarity with both kinds of C/components. If you read closely, the terminology is consistent, and “web component” is used any time the author means using the customElement and Shadow DOM APIs, but “component” is used other times for the general idea of “something like React”. Frankly, it is extremely unfortunate that the customElement and Shadow DOM promoting crowd have hijacked the term “Web Component” for what they are doing, but the author is not a victim of this confusion.

      It’s a straightforward argument:

      However this time there’s probably little (if any) advantage to using web components for server-rendered apps. Isomorphism is complex enough; web components adds extra complexity with the need to render declarative shadow DOM and the still-unknown solution for bundling CSS.

      “Server-rendered” in this case means the server sends you a webpage with the final DOM. Your blog is not server-rendered. If you disable JS, most of the content on your blog goes away. This is standard terminology in JS-land, but it’s a bit odd, since all webpages need to come from some server somewhere where they are “rendered” in some sense; “rendering” in JS-land specifically means inflating to the final DOM.

      Your backend must be JavaScript. This decision is made for you.

      That follows from the idea that you want to have <blog-layout> on the server turn into <div style="height: 100%"><div id="container">… and also have the <blog-layout> tag work in the browser. In theory, you could do something else, like WASM or recompiling templates, but so far no one has figured out an alternative that works as more than a proof of concept.
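
      To make that concrete, here’s a loose sketch (not any particular framework’s API; the names are invented) of why sharing the rendering code pushes the backend toward JavaScript:

      // A render function that expands <blog-layout> into plain markup.
      function renderBlogLayout(contentHtml) {
          return `<div style="height: 100%"><div id="container">${contentHtml}</div></div>`;
      }

      // Server: the same function runs in Node to produce the first-paint HTML,
      // so the page is readable before any client-side JS has loaded.
      //   response.end(renderBlogLayout("<p>Hello</p>"));

      // Client: the same function re-runs in the browser when the component
      // needs to update. Sharing it between both sides is only straightforward
      // if the server can execute JavaScript too.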

      Oh, ok, this is some sort of spamvertising

      Literally everything posted on Lobsters that does not have the “history” tag is spamvertising in this sense. It’s a free JS framework, and yes, the author is promoting it because they think it’s good.

      I find the idea of a CSS-like declarative language interesting, but looking at the code examples, I still prefer Alpine.js which also has a declarative language but sprinkled in as HTML attributes. I’m glad someone is looking at new ideas though, and I hope the “write a language like CSS” idea sparks something worth using.

      1. 2

        I’m still horribly confused even after your elucidations. Granted, I’ve never used Web Components, but I’ve read the full MDN docs recently.

        The bit about “your backend must be JavaScript” confuses me the most. Why? My server can generate HTML with any template engine in any language. The HTML includes my custom component tags like blog-layout, entry-header, etc. At the top of the HTML is a script tag pointing to the JS classes defining those tags. Where’s the problem?

        1. 3

          At the top of the HTML is a script tag pointing to the JS classes defining those tags. Where’s the problem?

          I think this is the problem the author points to: if you turn off JS, you don’t have those tags anymore.

          1. 1

            No, the problem has to do with requiring JS on the server.

            I’m, frankly, uninterested in what happens if someone turns off JS in their browser. I imagine that, unsurprisingly, a lot of stuff stops working; quelle horreur! Then they can go use Gemini or Gopher or a TTY-based BBS or read a book or something.

            1. 2

              It is both problems. The first view is blocked until JS loads, which means it is impossible to load the page in under one second. To remove the requirement that JS has loaded on the client you pre-render it on the server, but pre-rendering requires JS on the server (Node, Bun, or Deno).

              I love you guys, but this is a very old and well-known topic in frontend. It’s okay to be backend specialists, but it’s not a confusing post at all. It’s just written for an audience that is part of an ongoing conversation.

              1. 2

                Fair; frankly, I had a hard time deciphering the blog post.

                I agree though that optimizing for no JS is not interesting to me either.

            2. 2

              It’s important to know what the competition to WC is doing. Popular JS frameworks like Next for React, Nuxt for Vue, SvelteKit for Svelte, etc. let you write code that works both server side and client side. So if you write <NumberShower favorite="1" /> in Vue on the server, the server sends <div class="mycomponent-123abc">My favorite number is <mark class="mycomponent-456def">1</mark>.</div> to the browser, so that the page will load even with JS disabled. Obviously, if JS is turned off, then interactions can’t work, but the real benefit is that it dramatically speeds up time to first render and lets some of the JS load in the background while displaying the first round of HTML. (See Everyone has JavaScript, right?)

              To do this with WC, you might write <number-shower favorite="1"></number-shower>, but there’s no good way to turn it into first render HTML. Basically the only realistic option is to run a headless browser (!) and scrape the output and send that. Even if you do all that, you would still have problems with “rehydration,” where you want the number-shower to also work on the client side, say if you dynamically changed favorite to be 2 on click.

              The Cloak solution is you just write <div class="number-shower">My favorite number is <mark class="number">1</mark>.</div> in your normal server side templating language, and then you describe it with a CSS-like language that says on click, change 1 to 2. Alpine.js and Stimulus work more or less the same way, but use HTML attributes to tell it to change 1 to 2 on click.

              1. 1

                To do this with WC, you might write <number-shower favorite="1"></number-shower>, but there’s no good way to turn it into first render HTML […] The Cloak solution is you just write <div class="number-shower">My favorite number is <mark class="number">1</mark>.</div> in your normal server side templating language

                It’s possible I’m still misunderstanding, but I think you’ve got something weird going on in how you expect web components to be used here. They’re not like React where you define the entire input via attributes. The web component version would be:

                <number-shower>My favorite number is <mark slot=number>1</mark>.</number-shower>
                

                And then when the user’s browser renders it, if they have no JS, then it renders the same as if the custom elements were all replaced by <div>. It’s not super pretty unless there are additional stylesheets, but it’s readable as a plain HTML page. Go to my blog in Firefox/Chrome with JS disabled, or, heck, use a command-line browser like w3m.

                1. 1

                  They’re not like React where you define the entire input via attributes.

                  No, that’s totally a thing in Web Components. It’s a little tricky though because the attributes behave differently if you set them in HTML vs. if you set them on a JS DOM Node. You use the lifecycle callbacks to have attributeChangedCallback called whenever someone does el.favorite = "2".
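
                  For anyone following along, a rough sketch of the usual pattern, reusing the thread’s <number-shower> example (illustrative only, not anyone’s production code): attributeChangedCallback only fires for attributes listed in observedAttributes, so a property setter typically reflects back to the attribute.

                  class NumberShower extends HTMLElement {
                      static get observedAttributes() { return ["favorite"]; }

                      // Reflect the property to the attribute so that el.favorite = "2"
                      // also ends up triggering attributeChangedCallback.
                      get favorite() { return this.getAttribute("favorite"); }
                      set favorite(value) { this.setAttribute("favorite", value); }

                      attributeChangedCallback(name, oldValue, newValue) {
                          if (name === "favorite") {
                              this.textContent = `My favorite number is ${newValue}.`;
                          }
                      }
                  }
                  customElements.define("number-shower", NumberShower);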

                2. 1

                  I just saw Happy DOM, which can prerender Web Components server side without a headless browser. Cool! Still requires server side JS though, and there’s still a big question mark around rehydration.

                3. 1

                  The keyword is “isomorphic”, which in “JS land” means the exact same code is used to render HTML on the server side and on the client side. The only language (ignoring WASM) they can both run is JavaScript.

                  1. 1

                    So their point is “if you want to use the same code on client and server it has to be JS”? Sounds like an oxymoron. But what does that have to do with Web Components?

                    1. 2

                      If you want to be isomorphic, you can’t use Web Components™, because they are HTML elements designed to work inside a browser. However, you can use web components (lowercase), e.g., React components, because they are just a bunch of JavaScript code that generates HTML and can run anywhere.

                4. 2

                  Frankly, it is extremely unfortunate that the customElement and Shadow DOM promoting crowd have hijacked the term “Web Component” for what they are doing

                  Have they? I’m only familiar with the React ecosystem, but I can’t recall ever seeing React components referred to as “web components”.

                  1. 1

                    I’m saying “Web Components” should be called “Custom Element components” because “Web Components” is an intentionally confusing name.

                    1. 2

                      Oh I see, yeah, a less generic name than “Web Components” would have been a good idea.

                      1. 1

                        To be fair, Web Components as a name goes back to 2011, which is after Angular but before React.

                  2. 1

                    If you disable JS, most of the content on your blog goes away.

                    Did you try it, or are you just assuming that’s how it would work? Because that’s not true. Here’s a screenshot of a post with JS turned off: https://i.imgur.com/jLFz6UV.png

                    “Server-rendered” in this case means the server sends you a webpage with the final DOM.

                    What do you mean by “final DOM”?

                    In the case of my blog, the server does send the final DOM. That static DOM contains elements that are defined by JS, not by the browser, but the DOM itself is whatever comes on the wire.

                    Alternatively, if by “final DOM” you mean the in-memory version, I don’t think that’s a useful definition. Browsers can and do define some HTML standard elements in terms of other elements, so what looks like a <button> might turn into its own miniature DOM tree. You’d have to exclude any page with interactive elements such as <video>, regardless of whether it used JavaScript at all.

                    That follows from the idea that you want to have <blog-layout> on the server turn into <div style="height: 100%"><div id="container">

                    If I wanted to do server-side templating then there’s a variety of mature libraries available. I’ve written websites with server-generated HTML in dozens of languages including C, C#, Java, Python, Ruby, Haskell, Go, JavaScript, and Rust – it’s an extremely well-paved path. Some of them are even DOM-oriented, like Genshi.

                    The point of web components is that they act like CSS. When I write a CSS stylesheet, I don’t have to pre-compute it against the HTML on the server and then send markup with <div style="height: 100%"> – I just send the stylesheet and the client handles the style processing. Web components serve the same purpose, but expand beyond what CSS’s declarative syntax can express.

                    1. 2

                      Did you try it

                      Yes, actually. I tried it and got the same result as the screenshot, which is that all the sidebars and whatnot are gone and the page is not very readable. Whether you define that as “most of the content” is sort of a semantic point. Anyhow, it’s fine! There’s no reason you should care about it! But tools like Next and SvelteKit do care and do work hard to solve this problem.

                      What do you mean by “final DOM”?

                      Alternatively, if by “final DOM” you mean the in-memory version, I don’t think that’s a useful definition.

                      The DOM that the browser has after it runs all the JavaScript. “Server Side Rendering” is the buzzword to search for. It’s a well-known thing in frontend circles. There are a million things to read, but maybe try https://www.smashingmagazine.com/2020/07/differences-static-generated-sites-server-side-rendered-apps/ first. The point about the browser having its own secret shadow DOMs for things like videos and iframes is true, but not really relevant. The point is: does the HTML that comes over the wire parse into the same DOM as the DOM that results after running JS? People have gone to a lot of effort to make them match.

                      If I wanted to do server-side templating then there’s a variety of mature libraries available.

                      Sure! But some people want to use the same templates on the server and the client (“isomorphism”) but they don’t want to use JavaScript on the server. That’s a hard problem to solve and things like Corset, Stimulus, and Alpine.js are working on it from one angle. Another angle is to just not use client side templating, and do the Phoenix LiveView thing. It’s a big space with tons of experiments going on.

                  3. 1

                    My blog’s UI is written with web components.

                    <body>
                        <blog-layout>
                            <style>
                       ...
                        <yuru-card>
                            <h2 slot=title>
                       ...
                    

                    Looks cool. Can you tell more?

                    1. 1

                      Anything specific you’re interested in knowing?

                      I use a little shim library named Lit, which provides a React-style wrapper around web component APIs. The programmer only has to define little chunks of functionality and then wire them up with HTML. If you’ve ever used an XML-style UI builder like Glade, the dev experience is very similar.

                      After porting my blog from server-templated HTML to web components I wanted to reuse some of them in other projects, so I threw together a component library (I think the modern term is “design system”?). It’s called Yuru UI because the pun was irresistible.

                      The <yuru-*> components are all pretty simple, so a more interesting example might be <blog-tableofcontents>. This element dynamically extracts section headers from the current page and renders a ToC:

                      import { LitElement, html, css } from "lit";
                      import type { TemplateResult } from "lit";
                      import { repeat } from "lit/directives/repeat.js";
                      
                      class BlogTableOfContents extends LitElement {
                      	private _sections: NodeListOf<HTMLElement> | null;
                      
                      	static properties = {
                      		_sections: { state: true },
                      	};
                      
                      	static styles = css`
                              :host {
                                  display: inline-block;
                                  border: 1px solid black;
                                  margin: 0 1em 1em 0;
                                  padding: 1em 1em 1em 0;
                              }
                              a { text-decoration: none; }
                              ul {
                                  margin: 0;
                                  padding: 0 0 0 1em;
                                  line-height: 150%;
                                  list-style-type: none;
                              }
                          `;
                      
                      	constructor() {
                      		super();
                      		this._sections = null;
                      
                      		(new MutationObserver(() => {
                      			this._sections = document.querySelectorAll("blog-section");
                      		})).observe(document, {
                      			childList: true,
                      			subtree: true,
                      			characterData: true,
                      		});
                      	}
                      
                      	render() {
                      		const sections = this._sections;
                      		if (sections === null || sections.length === 0) {
                      			return "";
                      		}
                      		return html`${sectionList(sections)}`;
                      	}
                      }
                      
                      customElements.define("blog-tableofcontents", BlogTableOfContents);
                      
                      const keyID = (x: HTMLElement) => x.id;
                      
                      function sectionList(sections: NodeListOf<HTMLElement>) {
                      	let tree: any = {};
                      	let tops: HTMLElement[] = [];
                      	sections.forEach((section) => {
                      		tree[section.id] = {
                      			element: section,
                      			children: [],
                      		};
                      		const parent = (section.parentNode! as HTMLElement).closest("blog-section");
                      		if (parent) {
                      			tree[parent.id].children.push(section);
                      		} else {
                      			tops.push(section);
                      		}
                      	});
                      
                      	function sectionTemplate(section: HTMLElement): TemplateResult | null {
                      		const header = section.querySelector("h1,h2,h3,h4,h5,h6");
                      		if (header === null) {
                      			return null;
                      		}
                      
                      		const children = tree[section.id].children;
                      		let childList = null;
                      		if (children.length > 0) {
                      			childList = html`<ul>${repeat(children, keyID, sectionTemplate)}</ul>`;
                      		}
                      		return html`<li><a href="#${section.id}">${header.textContent}</a>${childList}</li>`;
                      	}
                      
                      	return html`<ul>${repeat(tops, keyID, sectionTemplate)}</ul>`
                      }
                      

                      I’m sure a professional web developer would do a better job, but I mostly do backend development, and of my UI experience maybe half is in native GUI applications (Gtk or Qt). Trying to get CSS to render something even close to acceptable can take me days.

                      That’s why I love web components so much: each element has its own little mini-DOM with scoped CSS. I can just sort of treat them like custom GUI widgets without worrying about whether adjusting a margin is going to make something else on the page turn purple.

                  1. 5

                    NeXTStep had filter services, and MacOS X retained them, at least for a while.

                    These would translate between file types automatically. For example, my TextLightning PDF → RTF converter was also available as a filter service, and with it installed, TextEdit could open PDF files, getting the text content as provided by TextLightning.

                    It was pretty awesome.

                    1. 1

                      “Writing glue code and boilerplate is a waste” – True, but it’s most of the code we write now.

                      1. 2

                        So: huge opportunity!

                        1. 2

                          I think that’s also exactly the kind of code that’s least amenable to improvements. “get something from this URL, post something to that URL” is quite specific and not likely to be any shorter than it already is without losing precision. No matter how you look at it, we still need to tell the computer what to do when it hasn’t already been written. And glue code is typically not already written.

                          1. 3

                            s3:mybucket/hello.txt ← 'Hello World!'

                            Native-GUI distributed system in a tweet.

                            text → ref:s3:bucket1/msg.txt.

                            If you have the right abstractions, you can replace a lot of glue code with "→".

                            1. 4

                              That’s like those examples where a UNIX pipeline is so much shorter than a “real” program just because it happens to fit exactly the case it was designed for, and has a lot of the same caveats:

                              • What about error handling and recovery? Looks like this will just error hard.
                              • What about authentication handling in S3? Looks like this can handle only one account.
                              • What if I want to store and retrieve structured data (like JSON)?

                              And on and on and on.

                              There’s a lot of complexity and not all of it can simply be swept under the rug.

                              1. 2

                                That’s like those examples where a UNIX pipeline is so much shorter than a “real” program just because it happens to fit exactly the case it was designed for

                                As I explain in the post: yes, it looks like code golf. But it isn’t.

                                The key to getting compatibility is to constrain the interfaces, but constraining interfaces limits applicability. Unix pipes/filters are at one end of the spectrum: perfectly composable, but also very, very limited. ObjS has Polymorphic Write Streams, which also follow a pipe/filter model but are much more versatile.

                                Another example that is quite composable is http/REST servers, which is why we have generic intermediaries. Again, ObjS has an in-process, polymorphic equivalent that has proven to be extremely applicable and very versatile.

                                These two also interact very synergistically, and things like variables and references also play along. While I did build it, I was a bit blown away when I figured out how well everything plays together.

                                And ObjS is a generalisation, so call/return is still available, for example you can hook up a closure to the output of a textfield.

                                textfield → { :msg | stdout println:msg. }.

                                Of course it is easier to just hook up stdout directly:

                                textfield → stdout

                                What if I want to store and retrieve structured data (like JSON)?

                                You pop on a mapping store and get dictionaries. You can also pop on an additional mapping store to convert those dictionaries to custom objects, or configure the JSON converter store to parse directly to objects.

                                # count docker images
                                scheme:docker := MPWJSONConverterStore -> ref:file:/var/run/docker.sock asUnixSocketStore.
                                docker:/images/json count
                                

                                What about authentication handling in S3?

                                Similarly, you set up your S3 scheme-handler with the required authentication headers and then use that.

                                What about error handling and recovery?

                                Always a tricky one. I’ve found the concept of “standard error” to be very useful in this context, and used it very successfully in Wunderlist.

                        1. 17

                          People have been trying to make Unix-y desktop environments look like Mac OS for just about forever – I recall some truly hideous ones in the 2000s – but I’ve never seen one get close enough to even achieve an uncanny-valley effect. Looking at the screenshots, this is yet another one that’s not even close.

                          And in general the idea of “finesse of macOS” on a *nix desktop environment is contradictory. You can’t have both the “freedom of (insert your favorite OS here)” and the level of enforced conformity of interface design and other niceties that makes macOS actually nice to use. There simply are too many developers out there building too many applications which will refuse to adopt the necessary conventions, and the project almost certainly doesn’t have the resources to fork and maintain all of them.

                          1. 9

                            I am not so sure.

                            The trouble is that people focus on trivial cosmetics rather than structure & function. Themes, for example: those are just not important.

                            Ubuntu’s Unity desktop got a lot of the OS X user-level functionality. A dock/launcher that combined open windows with app launchers, which had indicators to show not only that a window was open (Mac OS X style) but how many windows were open (a gain over OS X), and had a standardised mechanism for opening additional empty windows (another gain over OS X).

                            But people didn’t register that, because it was on the left. Apple’s is at the bottom by default (you can move it, but many people seem not to know that). NeXT’s was at the right, but that’s partly because NeXT’s scrollbars were on the left.

                            Cosmetics obscured the functional similarity.

                            Unity showed that a global menu bar on Linux was doable, and somehow, I don’t know how, Unity did it for Gtk apps and for Qt apps and for apps using other frameworks or toolkits. Install new software from other desktops and it acquired a global menu bar. Still works for Google Chrome today. Works for the Waterfox browser today, but not for Firefox.

                            On top of that, Unity allowed you to use that menu with Windows shortcut keystrokes, and Unity’s dock accepted Windows’ Quick Launch bar keystrokes. But that went largely unnoticed too, because all the point-and-drool merchants don’t know that there are standard Windows keystrokes or how to use them.

                            Other aspects can be done as well: e.g. Mac OS X’s .app bundles are implemented in GNUstep, with the same filenames, the same structure, the same config files, everything. And simpler than that, AppImages provide much the same functionality. So does GoboLinux.

                            This stuff can be implemented on FOSS xNix: the proof is that it’s already been done.

                            But nobody’s brought the pieces together in one place.

                            1. 8

                              There are a few things that are harder:

                              Making the same keyboard shortcuts work everywhere. Even little things like the navigation within a text field can behave differently between toolkits, though the big distros have configured at least GTK and Qt to behave the same way. Beyond that, on macOS, command-, will open preferences in any application that has preferences. Command-shift-v pastes and matches style in every application that has rich text. There are a load of shortcuts that are the same everywhere. Apple achieved this by having XCode (and, before that, Interface Builder) populate the default menu bar with all of these shortcuts preconfigured. This is much harder to do with a diverse tooling ecosystem.

                              Drag and drop works everywhere. It’s somewhat embarrassing that drag-and-drop works better between MS Office apps on macOS than Windows. There are a couple of things that make this work well on macOS:

                              • The drag-and-drop protocol has a good content-negotiation model. The drag source provides a list of types that it can provide. The drop target picks one. The drag source then provides the data. This means that there’s no delay on drag start (you just need to provide a list of types, which is trivial, the Windows and Wayland models both encourage you to provide the drag data up front). It also means that you can do complex transcoding things on drop, where users are more tolerant of delay.
                              • There’s a built-in type for providing a file promise, where I can offer a file but not actually need to write it to the filesystem until the drop operation completes. This means, for example, I can drag a file from a Mail.app attachment and it doesn’t need to do the base64 decode and write the file until the drop happens.

                              There are some things that could be improved, but this model has worked pretty well since 1988. It’s significantly augmented by the fact that there’s a file icon in the title bar (newer macOS has hidden this a bit) that means that any document window has a drag source for the file. I can, for example, open a PDF in Preview and then drag from the title bar onto Mail.app’s icon to create a new message with that PDF file as an attachment.

                              The global menu bar mostly works with Qt and GTK applications but it requires applications to have separate app and document abstractions, which a lot of things that aim for a Windows-like model lack. Closing the last window in a macOS app doesn’t quit the app, it leaves the menu bar visible and so you can close one document and then create a new one without the app quitting out from under you. The Windows-equivalent flow requires you to create the new doc and then close the old one, which I find jarring.

                              The macOS model works particularly well on XNU because of the sudden termination mechanism that originated on iOS. Processes can explicitly park themselves in a state where they’re able to be killed (with the equivalent of kill -9) at any point. They are moved out of this state when input is available on a file descriptor that they’re sleeping on. This means that apps can sit in the background with no windows open and be killed if there’s memory pressure.

                              Sudden termination requires a bunch of cooperation between different bits of the stack. For example, the display server needs to own the buffers containing the current window contents so that you can kill an app without the user seeing the windows go away. When the user selects the window, then you need the window server to be able to relaunch the app and give it a mechanism to reattach to the windows. It also needs the window server to buffer input for things that were killed and provide it on relaunch. It also needs the frameworks to support saving state outside of the process.

                              Apple has done a lot of work to make sure that everything properly supports this kind of state preservation across app restarts. My favourite example is the terminal, which will restore all windows in the same positions and set a UUID in an environment variable. When my Mac reboots, all of my remote SSH sessions are automatically restored by a quick check in my .profile to see if I have a file corresponding to the session UUID and, if so, reading it and reestablishing the remote SSH session.

                              1. 7

                                I prefer not to engage with comments that resort to phrases like “point-and-drool”.

                                But I’ll point out that you’re not really contradicting me here – Unity was a significant amount of work that basically required an entity with the funding level of Ubuntu to pull off even briefly, and in general Ubuntu’s efforts to make the desktop experience more “polished” and easier have been met with outright hostility. Heck, even just GNOME tends to get a lot of hate for this: when they try to unify, simplify, and establish clear conventions, people attack them for “dumbing down”, for being “control freaks”, for their “my way or the highway” approach, for taking away freedom from developers/users/etc. That ought to be the clearest possible evidence for my claim about the general contradiction between wanting the “finesse” of macOS and the “freedom” of a Free/open-source stack.

                                1. 1

                                  I prefer not to engage with comments that resort to phrases like “point-and-drool”.

                                  [Surprised] What’s wrong with it? It is fairly mildly condemnatory, IMHO. FWIW, I am not an American and I do not generally aim to conform to American cultural norms. If this is particularly offensive, it’s news to me.

                                  I completely agree that Unity was a big project and a lot of work, which was under-appreciated.

                                  But for me, a big difference is that Unity attempted to keep as many UI conventions as it could, to accommodate multiple UI methods: mainly keyboard-driven and mainly pointing-device driven; Windows-like conventions (e.g. window and menu manipulation with the standard Windows keystrokes) and Mac-like conventions (e.g. a global menu bar, a dock, etc.)

                                  GNOME, to me, says: “No, we’re not doing that. We don’t care if you like it. We don’t care if you use it. We don’t, and therefore, it’s not needed. You don’t need menu bars, or title bars, or desktop icons, or a choice of sidebar in your filer, or any use for that big panel across the top of the screen. All that stuff is legacy junk. Your phone doesn’t have it, and you use that, therefore, it’s baggage, and we are taking all that away, so get used to it, move on, and stop complaining.”

                              2. 2

                                People have been trying to make Unix-y desktop environments look like Mac OS

                                Starting with Apple. :-). macOS is Unix.

                                And in general the idea of “finesse of macOS” on a *nix desktop environment is contradictory.

                                Considering the above, that seems … unlikely. Or maybe macOS is a contradiction unto itself?

                                NeXTstep arguably had even higher standards.

                                It’s funny, because I use the old “stable but ugly server os vs. pretty+easy-to-use but crashy client OS”-dichotomy as an example of things we used to believe were iron laws of nature, but that turned out to be completely accidental distinctions that were swept away by progress.

                                My phone is running Unix, and so is just about everybody’s. I think my watch is running Unix as well.

                                and the level of enforced conformity of interface design

                                Considering how crappy the built-in apps are these days compared to 3rd-party apps, and how little they conform to any rules or guidelines, I think that’s at best a highly debatable claim.

                                1. 7

                                  A lot of the “finesse” of macOS actually isn’t in superficial appearance, though, it’s in systemwide conventions that are essentially impossible to enforce in a useful way without also bringing in the level of platform control that’s alleged to be evil when Apple does it.

                                  1. 4

                                    Right, there’s a lot of subtle things you’d have to enforce within the ecosystem to make it work even close - just the notion of NSApplication and friends is alien to a world where it’s assumed a process has to have a window to exist in the GUI.

                                  2. 2

                                    People have been trying to make Unix-y desktop environments look like Mac OS

                                    Starting with Apple. :-). macOS is Unix.

                                    I think if you read “unix-y desktop environments” with “unix-y” modifying “desktop environment”, as opposed to “unixes that have a desktop environment” (e.g. X11 DEs/WMs such as GNOME/KDE/Enlightenment), this observation is more compelling. A common theme of most phones, most (all?) watches, NeXTstep and Mac OS is that they are very much not running “a unix-y desktop environment.”

                                  3. 2

                                    If I want a Mac, I’ll buy a Mac. What’d be more interesting is trying to build something better. Despite Gnome 3’s flaws, I do appreciate it at least tried to present a desktop model other than “Mac OS” or “Windows 98”.

                                    1. 1

                                      Yes, I have to give you that.

                                      I personally hate the model it proposes, but you make a good point: at least it’s not the same old same old.

                                      Oddly, Pop OS pleases me more, because it goes further. Pop OS, to me, says:

                                      “OK, so, GNOME took away your ability to manage windows on the desktop and it expects you to just use one virtual desktop per app. Window management is for simps; use it like a phone, all apps full screen all the time. We don’t think that’s enough, so, we’re going to keep GNOME but slap a tiling window manager on top, because we respect that you might not have a vast desktop or want one app per screen, so let us help you by automating that stuff… without sacrificing a desktop environment and switching to some kind of alien keyboard-driving tiling window-manager thing that takes away everything you know.”

                                    2. 1

                                      Isn’t SerenityOS an example of the finesse of macOS with some level of open-source freedom?

                                      It depends how you define freedom, but they seem to have a lot of different people working on entirely different things that still seem to build on a shared vision quite well. It is still early, but I wouldn’t give up on the idea altogether.

                                      I understand what you are saying, but I think it’s an interesting compromise that’s surprisingly effective (from an outside perspective).

                                    1. 1

                                        Objective-S describes itself as “possibly the first general purpose programming language”, then characterizes other languages making this claim as “actually DSLs for the domain of algorithms”.

                                      And then, you see the language…and it’s…a DSL for writing applications using a native UI framework?

                                      I mean, I don’t think it’s a bad-looking language, it just seems to have a really contradictory statement right in the introduction.

                                      1. 1

                                        a DSL for writing applications using a native UI framework?

                                        Not exactly sure where you are getting this from. Yes, a bunch of the examples are framework-specific, but of course for something that’s about better ways of gluing together components you need to start with some components. fib(n) doesn’t cut it, that’s algorithmic. Unless you also want to create all the components from scratch, they’ll come from somewhere, but that doesn’t mean the mechanisms are in any way specific to that.

                                        So I guess the question is why the mechanisms also look like they are specific to one UI framework, even though they are not.

                                        I guess I need to add a lot more examples.

                                        Thanks for the feedback, really useful!

                                        1. 2

                                          The examples were pretty useful in showing me what the language actually was like. I’m glad you included them, because that sentence just totally confused me as to what the actual reasoning for Objective-S was. The introductory paragraph felt super academic, but then looking down I felt like I could see the practicality in the code. I’m not sure you need more/better examples, just a more down-to-earth introduction, showcasing the language’s strengths. After you explained it here I kinda do get what you mean, like those other languages are designed so you can write algorithms, and ergonomically are not truly designed for primarily writing UIs. Yet, that’s what we end up using a lot of these languages for.

                                          I can only think of a couple other widely-used programming languages designed primarily to build UIs with: Elm, HTML, and Swift. And Swift is really a lot more complicated because it dabbles in other purposes over just UI development. It would be nice to see a Smalltalk-inspired programming language designed around UI programming, that could be used on any platform. So I’m excited to see where this goes!

                                          1. 1

                                            Thanks, that’s really useful feedback!

                                      1. 12

                                        I call clickbait. It’s not a “distributed system”, just an S3 uploader client. You could probably write one in 280 bytes of Python or Rust too…

                                        1. 1

                                          I’d love to see that!

                                          Initially I thought it should also be at least possible, if not trivial to do so in for example Python or Ruby (and of course the article never claims that you can’t do it). But looking more closely I was a bit more skeptical.

                                        1. 1

                                          I hadn’t realized that in Smalltalk-72 each object parsed incoming messages. Every object had its own syntax. Ingalls cites performance as an issue then, but perhaps the idea is worth revisiting.

                                          Also, here’s a link to Schorre’s META II paper that he mentions: https://doi.org/10.1145/800257.808896

                                          1. 1

                                              Smalltalk-72 – in contrast to all later Smalltalks – indeed had true message passing. That’s what Kay is usually referring to; it was his language design. The Smalltalk we know today, starting with Smalltalk-74, was invented by Ingalls and had little in common with Smalltalk-72. Smalltalk-74 and later have virtual methods instead of message passing, but performance was still an issue.

                                            1. 1

                                              Each object having its own syntax turned out to be an almost bigger problem than performance. And I am not sure about the “almost”.

                                            1. 22

                                              I’ll never understand why WSL wasn’t named “Linux Subsystem for Windows”.

                                              1. 21

                                                Because Windows has historically had several subsystems, including the OS/2 subsystem, POSIX subsystem, and, most famously, the Win16 subsystem, which were all called e.g. “Windows Subsystem for OS/2”. WSL1 built on roughly the same model, and so ended up with a similar name. WSL2 is entirely different, but we’ve got the name stuck now.

                                                Note, I’m not really disagreeing with you, but just explaining that this naming convention is just how Windows has named its subsystems for a long time.

                                                1. 4

                                                  Would it have made more sense to call it “OS/2 Subsystem for Windows?” Or is there some reason the reverse made more sense?

                                                  1. 6

                                                    Back in the 90s, when this showed up with the first versions of Windows NT, the word “applications” was either explicit or obviously implicit (I sincerely forget which) for all of these. So “Windows Subsystem for OS/2 Applications,” or “…for POSIX Applications,” if you will. At the time, Windows NT was trying to subsume minicomputer and (in some cases) mainframe environments, but not vice-versa, so the ambiguity the elision has in 2021 just didn’t exist in 92 or whenever this was.

                                                    1. 3

                                                      One wonders why the word “Windows” was not implicit too. Of course it is a Windows subsystem. It is a subsystem on your Windows. You don’t have subsystems for other operating systems on your Windows than for Windows. Otherwise it would not be a _sub_system, right?

                                                      1. 1

                                                        The Windows 2000 bootscreen would say “built on NT technology”. I always thought that was slightly amusing (I would have done the same though since not everyone knows that NT stands for “new technology”; most people in fact don’t know).

                                                        1. 1

                                                          “NT” did stand for New Technology, but I think by the time W2000 rolled around it was just its own term – “Windows NT” was the server version of Windows.

                                                          1. 1

                                                            This joke was already running around circa 2002 or so: https://imgur.com/a/UhSmCdf (this one’s a newer incarnation, I think). By that time NT was definitely its own term. I remember people thinking it stood for “networking” as early as 1998 or so.

                                                2. 5

                                                  This is from the company that named the phone they pinned all their iPhone-rivalry hopes on the “Windows Phone 7 Series”, so honestly I don’t think we can ask too much.

                                                  Think of it this way: you’d like it to be

                                                  (Linux Subsystem) for Windows

                                                  But instead it is:

                                                  Windows (Subsystem for Linux)

                                                  There’s just a missing apostrophe to denote possession that would fix it all:

                                                  Windows’ Subsystem for Linux

                                                  1. 2

                                                    but it’s not for linux, it’s for windows.

                                                    1. 2

                                                      Windows Subsystem for (running) Linux.

                                                      1. 1

                                                        but you don’t run linux (the kernel), you just run a GNU userland right?

                                                        (inb4 “I’d like to interject…”)

                                                        1. 2

                                                          In this particular case, WSL2 literally is purely about the Linux kernel. You can use any distro you want, including those with a BSD or Busybox userland.

                                                          1. 1

                                                            what does it mean to be “about the linux kernel”

                                                            1. 2

                                                              It is a Linux kernel running in a virtual machine. WSL1 was a binary compatibility layer, WSL2 is actually a VM running a Linux kernel.

                                                              1. 1

                                                                I see, thanks

                                                  2. 2

                                                    My understanding is that by that point there were a few “Windows Subsystems” already, not necessarily implementing other OS APIs.

                                                    1. 1

                                                      There were originally 5, I think:

                                                      • Win32
                                                      • WOW – Windows on Windows – the Win16 subsystem
                                                      • DOS
                                                      • OS/2 (for text-mode apps only)
                                                      • POSIX (the NT Unix environment)

                                                      OS/2 was deprecated in 3.51 and dropped in 4.

                                                      64-bit NT drops the old WOW16 subsystem, but replaces it with WOW32 for running Win32 apps on Win64.

                                                      The POSIX subsystem has now been upgraded to be Linux-kernel compatible.

                                                      WSL 2 is different; it’s a dedicated Hyper-V VM with a custom Linux kernel inside.

                                                    1. 11

                                                      I usually really enjoy your writing, but this seems totally disingenuous.

                                                      You don’t even define what qualifies as “OOP” to you, so how could anyone NOT think that you’re going to “No true Scotsman” them if they show you what they believe to be good OOP?

                                                      Does OOP have to use class inheritance? Does it have to involve mutable global state? Can you write OOP in the C language?

                                                      Or, if you’re open to someone showing you a piece of code and you’ll just accept at face value that it’s OOP with no argument, then you should say THAT.

                                                        You need to address that. But even then, I wouldn’t actually expect any replies because of the point(s) that @singpolyma raised – nobody is going to volunteer code to someone who is basically promising to (publicly) criticize it.

                                                      1. 5

                                                        so how could anyone NOT think that you’re going to “No true Scotsman” them if they show you what they believe to be good OOP?

                                                          I’m a bit confused. Usually it’s the other way around. I have some concrete criticism of OOP, and I hear “Oh, that’s just because you don’t know OOP. That’s not good OOP.” And that keeps on going forever, at which point it seems that the good OOP is everything that books and articles say, except all the things that people actually are doing in practice, but surely somewhere there must be some pristine OOP code.

                                                          I hadn’t thought that I could pull a “no true Scotsman” the other way, though I guess you’re right. That’s not my intention though.

                                                        Does OOP have to use class inheritance? Does it have to involve mutable global state? Can you write OOP in the C language?

                                                          I guess I’m open-minded about it. The thing with OOP is that it’s not well defined. Look – the code I write in Java is not very far from OOP: it always uses a lot of interfaces and DI, and yet I don’t consider it OOP for various subtle reasons, which usually get lost in abstract discussion.

                                                          Having read and written many articles in the OOP debate, I think we as a community are just talking past each other now, criticizing/defending something that is not well defined and subjective.

                                                          So I think it would be more productive to go through some actual code and talk about it in relation to the OOP debate.

                                                        nobody is going to volunteer code to someone who is basically promising to (publically) criticize it.

                                                        You can volunteer someone else’s code, I don’t mind. :D

                                                          I actually thought that this was going to be the default, because people are usually too shy to claim their own code is the best.

                                                          That’s a thing with public open-source projects. You put them out there, and you have to accept the fact that someone might… actually read the code and judge it.

                                                          I promise not to be a douche about it. There’s plenty of my own code on GitHub, none of it pristine; anyone is free to retaliate. :D

                                                        1. 5

                                                            If you do find examples of good OOP code, I predict they will mostly be written in Smalltalk, Erlang, or Dylan. I haven’t used Smalltalk or Dylan in earnest, and it’s been far too long since I used Erlang, or I’d find some examples myself.

                                                          Edit: thinking more about this, it feels like the only way to find good OOP code is to try to find that mythical domain in which inheritance is actually a benefit rather than the enormous bummer it normally is, and one in which polymorphism is actually justified. The only example which comes to mind is GUI code where different kinds of widgets could inherit off a class hierarchy, which is why I expect you’d have the best luck by looking in Smalltalk for your examples.

                                                          But overall it’s a misguided task IMO; merely by framing it as being about “OOP” in the first place you’ve already gone wrong. OOP is just a mash-up of several unrelated concepts like inheritance, polymorphism, encapsulation, etc, some of which are good and some of which are very rarely helpful. Talk instead about how to do encapsulation well, or about in what limited domains polymorphism is worth the conceptual overhead.

                                                          1. 2

                                                            But overall it’s a misguided task IMO; merely by framing it as being about “OOP” in the first place you’ve already gone wrong. OOP is just a mash-up of several unrelated concepts like inheritance, polymorphism, encapsulation, etc, some of which are good and some of which are very rarely helpful.

                                                            I have a very similar intuition, but the point of the exercise is to find whatever people would consider as good OOP and take a look at it.

                                                            If you do find examples of good OOP code, I predict they will mostly be written in either Smalltalk, Erlang, and Dylan.

                                                            I’m not sure about Dylan, but the rest fits my intuition of one “good” piece of OOP being message passing, which I tend to call actor based programming.

                                                            1. 1

                                                              which I tend to call actor based programming

                                                              Yes. “Actor” is another common word for “OOP”

                                                              1. 4

                                                                This is not quite true. OOP permits asynchronous message passing but actor-model code requires it. Most OO systems use (or, at least, default to) synchronous message passing.
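
To make the distinction concrete, here’s a minimal sketch (Python, hypothetical names, not from this thread): the plain object’s “message” is a synchronous method call that blocks for the result, while the actor’s message just lands in a mailbox and is handled later.

```python
import queue
import threading

class Counter:
    """Plain OO object: a 'message' is a synchronous method call; the caller waits for the result."""
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1
        return self.value

class CounterActor:
    """Actor-style object: messages go into a mailbox and are handled asynchronously."""
    def __init__(self):
        self.value = 0
        self.mailbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            msg, reply_to = self.mailbox.get()
            if msg == "increment":
                self.value += 1
                if reply_to is not None:
                    reply_to.put(self.value)

    def send(self, msg, reply_to=None):
        # The sender does not block; any reply arrives later on a separate channel.
        self.mailbox.put((msg, reply_to))

c = Counter()
print(c.increment())        # 1 -- synchronous: the result is available immediately

a = CounterActor()
reply = queue.Queue()
a.send("increment", reply)  # asynchronous: this returns right away
print(reply.get())          # 1 -- the reply shows up later on its own queue
```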

                                                                1. 2

This conflation is really problematic. I get that this is what it was supposed to mean a long time ago. But then OOP became synonymous with Java-like class-oriented programming. So now any time one wants to talk about contemporary, common “OOP”, a group of people shows up with “oh, but look at Erlang and Smalltalk, yada, yada - real OOP”, which, while technically a fair point, is nowhere close to what I’ve seen real-life OOP look like.

                                                                  1. 4

It’s also completely ahistorical. The actor model has almost nothing to do with the development of object-based programming languages, and the actor model of concurrency is different from the actor model of computation, which is nonsensical (the inventor thinks he proved Turing and Gödel wrong with the actor model). Alan Kay changed his mind on what “true OOP was supposed to be” in the 1990s.

                                                                    1. 1

                                                                      Alan Kay changed his mind on what “true oop was supposed to be” in the 1990s.

Any link to what you mean, exactly? I’m aware of http://userpage.fu-berlin.de/~ram/pub/pub_jf47ht81Ht/doc_kay_oop_en .

                                                                      OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP. There are possibly other systems in which this is possible, but I’m not aware of them.

                                                                      Is that what you have in mind?

                                                                      1. 4

                                                                        “OOP means only messaging” is revisionism, his earlier writing was much more class-centric. I document some of this here: https://www.hillelwayne.com/post/alan-kay/

                                                                        1. 1

                                                                          I just read that article, twice, and I still can’t figure out what the point is that it is trying to make, never mind actually succeeding at making it.

                                                                          It most certainly isn’t evidence, let alone proof of any sort of “revisionism”.

                                                                          1. 4

                                                                            The point it’s trying to make is that if you read Alan Kay’s writing from around the time that he made Smalltalk, so 72-80ish, it’s absolutely not “just messaging”. Alan Kay started saying “OOP means only messaging” (and mistakenly saying he “invented objects”, though he’s stopped doing that) in the 90s, over a decade and a half after his foundational writing on OOP ideas. It’d be one thing if he said “I changed my mind / realized I was wrong”, but a lot of his modern writing strongly implies that “OOP is just messaging” was his idea all along, which is why I call it revisionism.

                                                                            1. 1

I know what the point is that you (now) claim it makes. But that, to me, looks like revisionism, because the contents of the article certainly don’t support it, and also don’t appear to actually make that claim in any coherent fashion.

                                                                              First, the title of the article makes a different claim: “Alan Kay did not invent objects”. Which is at best odd (and at worst somewhat slanderous), because he has never claimed to have invented object oriented programming, being very clear that he was inspired very directly by Simula and a lot of earlier work, for example the Burroughs machine, Ivan Sutherland’s Sketchpad etc.

                                                                              In fact, he describes how one of his first tasks, as a grad student(?), was to look at this hacked Algol compiler, and spreading the listing out in the hallway to try and grok it, because it did weird things to flow control. That was Simula.

                                                                              Re-reading your article, I am guessing you seem to think of the following quote as the smoking gun:

                                                                              I mean, I made up the term “objects.” Since we did objects first, there weren’t any objects to radicalize.

                                                                              At least there was nothing else I could find, and you immediately follow that with “He later stopped claiming…”. The interview you cite is from 2012. Even the Squeak mailing list quote you showed was from 1998, and it refers to OOPSLA ‘97. His HOPL IV paper The Early History of Smalltalk is from 1993. That’s where he tells the Simula story.

                                                                              So he “stopped” making the claim you insinuate him making, two decades before, at least according to you, he started making that claim. That seems a bit…weird. Of course, there were rumours on the Squeak mailing list that Alan Kay was actually a time-traveller from the future, partly fuelled by a mail sent from a machine with misconfigured system clock. But apart from that scenario, I have a hard time seeing how he could start doing what you claim he did two decades after he stopped doing what you claim he did.

                                                                              The simpler explanation is that he simply didn’t make that claim. Not in that 2012 interview, not before 2012 and not after 2012. And that you vastly mis- and over-interpreted an off-the-cuff remark in a random interview.

                                                                              Please read the first sentence carefully: “I made up the term ‘objects’”. (My emphasis). He is clearly, and entirely consistently both with the facts and his later writings, claiming to have coined the term.

                                                                              And he is clearly relating this to the systems that came later, C++ and Java, relative to which the Smalltalk “everything is an object” approach does appear radical. But of course it wasn’t a “radicalisation” relative to the current state of the art, because the current state of the art came later.

                                                                              Yeah, he could have mentioned Simula at that point, but if you’ve ever given a talk or an interview you know that you sometimes omit some detail in order to move the main point along. And maybe he did mention it, but it was edited out.

                                                                              But the question at hand was what OO is, according to Alan Kay, not whether he claimed to have invented it. Your article only addresses the second question, incorrectly it turns out, but doesn’t say anything whatsoever about the former.

                                                                              And there it helps to look at the actual artefacts. Let’s take Smalltalk-72, the first Smalltalk. It was entirely message-based, objects and classes being a second-order phenomenon. In order to make it practical, because Smalltalk was a means to an end for them, not an end in itself, this was made more like existing systems over time, culminating in Smalltalk-80. This is a development Alan has consistently lamented.

                                                                              In fact, in the very 2012 interview you misrepresent, he goes on to say the following:

                                                                              The first Smalltalk was presented at MIT, and Carl Hewitt and his folks, a few months later, wrote the first Actor paper. The difference between the two systems is that the Actor model retained more of what I thought were the good features of the object idea, whereas at PARC, we used Smalltalk to invent personal computing

                                                                              So Actors “retained more of the good features of the object idea”. What “good features” might that be, do you think?

In fact, there’s Alan’s famous quip from the OOPSLA ’97 keynote, The Computer Revolution hasn’t Happened Yet.

                                                                              Actually I made up the term “object-oriented”, and I can tell you I did not have C++ in mind.

                                                                              I am sure you’ve heard it. Alas, what he said next is hardly reported at all:

                                                                              The important thing here is, I have many of the same feelings about Smalltalk.

                                                                              And just reiterating the point above he goes on:

                                                                              My personal reaction to OOP when I started thinking about it in the sixties.

                                                                              So he was reacting to OOP in the sixties. The first Smalltalk was in the seventies. So either he thinks of himself as a time-traveller, or he clearly thinks that OOP was already invented, just like he always said.

Anyway, back to your claim of “revisionism” regarding messaging. I still can’t get a handle on it, because all the sources you cite absolutely harp on the centrality of messaging. For example, in the “Microelectronics and the Personal Computer” article, the central idea is a “message-activity” system. Hmm…sounds pretty message-centric to me.

                                                                              The central idea in writing Small talk programs, then, is to define classes which handle communication among objects in the created environment.

                                                                              So the central idea is to define classes. Aha, smoking gun!! But what do these classes actually do? Handle communication among objects. Messages. And yes, in a manual describing what you do in the system, what you do is define classes. Because that’s the only way to actually send and receive messages.

                                                                              What else should he have written, in your opinion?

                                                                              1. 4

                                                                                Please read the first sentence carefully: “I made up the term ‘objects’”. (My emphasis). He is clearly, and entirely consistently both with the facts and his later writings, claiming to have coined the term.

                                                                                He didn’t coin the term, either. The Simula 67 manual formally defines “objects” on page 6.

                                                                                Anyway, you’re way angrier about this than I expected anybody to be, you’re clearly more interested in defending Kay than studying the history, and you’re honestly kinda scaring me right now. I’m bowing out of this whole story.

                                                                                1. 1

                                                                                  OK, you just can’t let it go, can you?

First you make two silly accusations that don’t hold up to even the slightest scrutiny. I mean, they are in the “wet roads cause rain” league of inane. You get shown to be completely wrong. Instead of coming to terms with just how wrong you were, and maybe with what your own personal motivations were for making such silly accusations, you just pile on more silly accusations.

                                                                                  What axe do you have to grind with Alan Kay? Because you are clearly not rational when it comes to your ahistorical attempts to throw mud at him.

As I have clearly shown, the only one who hasn’t adequately studied history here is you. Taking one off-the-cuff remark from 2012 and defining that as “having studied history” is beyond the pale, when the entire rest of the history around this subject contradicts your misinterpretation of that out-of-context quote.

                                                                          2. 1

                                                                            Thanks! It was a good read.

                                                                      2. 2

                                                                        I’m sympathetic, but I mean, at some point you have to just give up and admit that the word has been given so many meanings that it’s no longer useful for constructive discussion and just switch to more precise terminology, right? Want to talk about polymorphism? Cool; talk about polymorphism. Want to talk about the actor model? Just say “actor”.

                                                                        1. 2

                                                                          no longer useful for constructive discussion and just switch to more precise terminology, right?

                                                                          I guess you’re right.

It’s just that there are still so many books, articles, and talks mentioning and praising OOP that it’s hard to resist. (Do schools still teach OOP?) It’s not useful for constructive discussion, but the ghost of that vague amalgam of OOP ideas is still haunting us, and I can see some of these ideas in the code I have to work with sometimes. People keep adding pointless getters and setters in the name of encapsulation and so on. Because that’s in some OOP book.

Ironically… by talking about OOP critically, in a way I’m only perpetuating it. But talking about these ideas in isolation doesn’t seem to do any damage to the ghost of OOP. Everyone keeps saying inheritance is tricky, and yet I keep seeing inheritance hierarchies where they shouldn’t be.

                                                                        2. 1

                                                                          And this is what I was talking about in my top-level comment. It sounds like you’re requiring class inheritance as part of your definition of OOP. Which I think is bunk. I don’t care what Java does or has for features. Any code base that is architected as black-box, stateful, “modules” (classes, OCaml modules, Python packages, JavaScript modules, etc) should count as OOP. Inheritance is just a poor mechanism for code-reuse.

                                                                          There’s no actual reason to include class inheritance in a definition of OOP anymore than we must include monads in our definition of FP (we shouldn’t).

                                                                          1. 1

I would say modules rendered black-box by polymorphic composition, with state being optional but allowed in the definition, of course.

                                                                            1. 1

                                                                              I feel like mutable state is actually important for something to be an “object”.

And when I say mutable state, I’m also including cases where the object “represents” mutable state in the outside world without having its own local, mutable fields in the code. In other words, an object that communicates with a REST API via HTTP represents mutable state, because the response from the server can be different every time and we can issue POST requests to mutate the state of the server, etc. So, even if the actual class in your code doesn’t have mutable properties, it can still be considered to “have” mutable state.

                                                                              Anything that doesn’t have mutable state, explicitly or implicitly, isn’t really an “object” to me. It’s just an opaque data type. If you disagree, then I must ask the question “What isn’t an object?”
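
As a purely hypothetical sketch of what I mean (names and URL invented for illustration): the class below has no mutable fields of its own, yet it still “has” mutable state, because that state lives on the server it fronts.

```python
import json
import urllib.request

class RemoteCounter:
    """No mutable fields of its own, but it represents mutable state held by a server."""

    def __init__(self, base_url):
        self._base_url = base_url  # hypothetical endpoint, e.g. "https://api.example.com/counter"

    def value(self):
        # Two consecutive calls can return different numbers: the state lives remotely.
        with urllib.request.urlopen(self._base_url) as resp:
            return json.load(resp)["value"]

    def increment(self):
        # A POST mutates the server's state, even though nothing local changes.
        req = urllib.request.Request(self._base_url, data=b"{}", method="POST")
        urllib.request.urlopen(req).close()
```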

                                                                              1. 1

                                                                                It’s just an opaque data type.

A data type rendered opaque by polymorphism, specifically. A C struct with associated functions is not an object, even if the fields are hidden with pointer tricks and even if the functions model mutable state, because I can’t take something else and make it into another “object” that can safely be used where those functions are expected.

                                                                                1. 1

                                                                                  So are you saying that “objects” have some requirement to be polymorphic per your definition?

                                                                                  Polymorphism is orthogonal to my definition. Neither an opaque data type, nor an “object” need to have anything to do with polymorphism in my mind. An opaque data type can simply be a class with a private field and some “getters”.
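
A tiny, hypothetical sketch of what I mean by an opaque data type, with no polymorphism or mutation involved:

```python
class Point:
    """Opaque data type: fields are private and exposed only through getters."""

    def __init__(self, x, y):
        self._x = x
        self._y = y

    def x(self):
        return self._x

    def y(self):
        return self._y

p = Point(1, 2)
print(p.x(), p.y())  # 1 2 -- no mutable state, no polymorphism, just an opaque value
```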

                                                                    2. 1

                                                                      it feels like the only way to find good OOP code is to try to find that mythical domain in which inheritance is actually a benefit rather than the enormous bummer it normally is

Oof. Inheritance is definitely not a feature I’d call out as a “good idea” part. I’ve been known to use it from time to time, but if you use it a lot and still end up with “good OOP”, that would be nothing short of a miracle.

                                                                  2. 1

                                                                    so how could anyone NOT think that you’re going to “No true Scotsman” them if they show you what they believe to be good OOP?

I’m a bit confused. Usually it’s the other way around. I have some concrete criticism against OOP, and I hear “Oh, that’s just because you don’t know OOP. That’s not good OOP.” And that keeps on going forever, at which point it seems that good OOP is everything the books and articles say, except all the things people actually do in practice; but surely somewhere there must be some pristine OOP code.

                                                                    I haven’t thought that I could pull a “no true Scotsman” the other way, though I guess you’re right. That’s not my intention though.

                                                                    Does OOP have to use class inheritance? Does it have to involve mutable global state? Can you write OOP in the C language?

I guess I’m open minded about it. The thing with OOP is - it’s vague and not well defined. The real OO is what I would call actor programming, with real message passing, while the OOP we do now is class-oriented programming, and that’s where the confusion starts.

Look - the code I write in Java is not all that much different from OOP: it always uses a lot of interfaces, DI, and so on, and yet I don’t even consider it OOP, for various subtle reasons.

Having read and written many articles in the OOP debate, I think we as a community are just talking past each other now, criticizing/defending something that is not well defined and largely subjective.

So I think it would be more productive to go through some actual code and talk about it in relation to the OOP debate.

nobody is going to volunteer code to someone who is basically promising to (publicly) criticize it.

                                                                    You can volunteer someone else’s code, I don’t mind. :D

I actually thought that would be the default, because people are usually too shy to claim their own code is the best example.

That’s a thing with public open-source projects. You put them out there, you have to accept the fact that someone might… actually read the code and judge it.

                                                                    I promise not to be a douche about it. There’s plenty of my own code on github, none of it pristine, anyone is free to retaliate. :D

                                                                  1. 26

                                                                    I’m a little bit suspicious of this plan. You specifically call out that you already have an anti-OOP bias to the point of even saying “no true Scotsman” and then say you plan to take anything someone sends you and denigrate it. Since no codebase is perfect and every practitioner’s understanding is always evolving, there will of course be something bad you can say especially if predisposed to do so.

                                                                    If you actually want to learn what OOP is like, why not pick up a known-good text on the subject, such as /99 Bottles of OOP/?

                                                                    1. 10

I, for one, think this project seems very interesting. The author is correct that criticisms of OOP are often dismissed by saying “that’s just a problem if you don’t do OOP correctly”. Naturally, a response is to ask for an example of a project which does OOP “correctly”, and see if the common critiques still apply to it.

                                                                      Maybe the resulting article will be uninteresting. But I think I would love to see an article which dissects a particular well-written code-base, discusses exactly where and how it encounters issues which seem to be inherent in the paradigm or approach it’s using, and how it solves or works around or avoids those issues. I just hope the resulting article is actually fair and not just a rant about OOP bad.

EDIT: And to be clear, just because there seems to be some confusion: The author isn’t saying that examples of good OOP aren’t “real OOP”. The author is saying that critiques of OOP are dismissed by saying “that’s not real OOP”. The author is complaining about other people who use the no true Scotsman fallacy.

                                                                      1. 7

                                                                        Explicitly excluding frameworks seems to be a bit prejudiced, since producing abstractions that encourage reuse is where OOP really shines. OpenStep, for example, is an absolutely beautiful API that is a joy to use and encourages you to write very small amounts of code (some of the abstractions are a bit dated now, but remember it was designed in a world where high-end workstations had 8-16 MiB of RAM). Individual programs written against it don’t show the benefits.

                                                                        1. 1

                                                                          Want to second OpenStep here, essentially Cocoa before they started with the ViewController nonsense.

                                                                          Also, from Smalltalk, at least the Collection classes and the Magnitude hierarchy.

And yes, explicitly excluding frameworks is nonsensical. “I want examples of good OO code, excluding the things OO is good at”.

                                                                        2. 2

                                                                          Maybe the resulting article will be uninteresting. But I think I would love to see an article which dissects a particular well-written code-base, discusses exactly where and how it encounters issues which seem to be inherent in the paradigm or approach it’s using, and how it solves or works around or avoids those issues. I just hope the resulting article is actually fair and not just a rant about OOP bad.

                                                                          That’s exactly my plan. I have my biases and existing ideas, but I’ll try to keep it open minded and maybe through talking about concrete examples I will learn something, refine my arguments, or just have a useful conversation.

                                                                          1. 2

The author is correct that criticisms of OOP are often dismissed by saying “that’s just a problem if you don’t do OOP correctly”.

I know this may look like splitting hairs, but while “that’s only a problem if you don’t do OOP correctly” would be No True Scotsman and invalid, what I see more often is “that’s a problem, and that’s why you should do OOP instead of just labelling random code full of if statements as ‘OOP’”, which is a nomenclature argument, to be sure, but in my view it differs from No True Scotsman in that it’s not a generalization but a call towards a different way of doing things.

                                                                            I agree that a properly unbiased tear-down of a good OOP project by someone familiar with OOP but without stars in their eyes could be interesting, my comment was based on the tone of the OP and a sinking feeling that that is not what we would get here.

                                                                            1. 1

                                                                              OOP simplifies real objects and properties to make abstraction approachable for developers, and the trade-off is accuracy (correctness) for simplicity.

So if you tried to describe the real world adequately in OOP terms, the application would be as complex as the world itself.

                                                                              This makes me think that proper OOP is unattainable in principle, with the only exception – the world itself.

                                                                            2. 3

One could argue in favor of Assembly and still be right, which doesn’t make “every program should be written in Assembly” a good statement. It sounds, to me, like saying “English will never have great literature”. It doesn’t make much sense.

Microsoft has great materials on object-oriented design tangled inside their .NET documentation; Tackle Business Complexity in a Microservice with DDD and CQRS Patterns is a good example of what you want, though it is not a repository, I’m afraid. Design Patterns has great examples of real-world code; they are old (drawing scrollbars) but they are great object-oriented programming examples.

Good code is good, no matter the paradigm or language. In my experience, developers lack understanding of the abstractions they are using (HTTP, IO, serialization, patterns, architecture, etc.), and that shows in their code. Their code doesn’t communicate a solution very well because they don’t understand it themselves.

                                                                              1. 3

                                                                                you plan to take anything someone sends you and denigrate it.

                                                                                I could do that, but then it wouldn’t be really useful and convincing.

                                                                                If you actually want to learn what OOP is like, why not pick up a known-good text on the subject, such as /99 Bottles of OOP/

                                                                                Because no real program looks like this.

                                                                              1. 0

                                                                                It’s 2021. You can give me up already. Please.

                                                                                1. 1

                                                                                  I knew something was fishy when that link rendered as already-visited.

                                                                                  1. 1

And there I thought the “this one neat trick” would give it away…

                                                                                1. 10

                                                                                  I think an important direction for future programming language development is better support for writing single programs that span multiple nodes. It’s been done, e.g. erlang, but it would be nice to see more tight integration of network protocols into programming languages, or languages that can readily accommodate libraries that do this without a lot of fuss.

                                                                                  There’s still some utility in IDLs like protobufs/capnproto in that realistically the whole world isn’t going to converge on one language any time soon, so having a nice way of describing an interface in a way that’s portable across languages is important for some use cases. But today we write a lot of plumbing code that we probably shouldn’t need to.

                                                                                  1. 3

                                                                                    I couldn’t agree more. Some sort of language feature or DSL or something would allow you to have your services architecture without paying quite so many of the costs for it.

                                                                                    Type-checking cross-node calls, service fusion (i.e. co-locating services that communicate with each other on the same node to eliminate network traffic where possible), RPC inlining (at my company we have RPC calls that amount to just CPU work but they’re in different repos and different machines because they’re written by different teams; if the compiler had access to that information it could eliminate that boundary), something like a query planner for complex RPCs that decay to many other backend RPC calls (we pass object IDs between services but often many of them need the data about those same underlying objects so they all go out to the data access layer to look up the same objects). Some of that could be done by ops teams with implementation knowledge but in our case those implementations are changing all of the time so they’d be out of date by the time the ops team has time to figure out what’s going on under the hood. There’s a lot that a Sufficiently Smart Compiler(tm) can do given all of the information

                                                                                    1. 3

There is also a view that it is a function of the underlying OS (not a particular programming language) to seamlessly provide ‘resources’ (e.g. memory, CPU, scheduling) across networked nodes.

This view is sometimes called Single Image OS (I briefly discussed that angle in that thread as well).

Overall, I agree, of course, that creating safe, efficient, and horizontally scalable programs should be much easier.

                                                                                      Hardware is going to continue to drive horizontal scalability capabilities (whether it is multiple cores, or multiple nodes, or multiple video/network cards)

                                                                                      1. 2

I was tempted to add some specifics about projects/ideas I thought were promising, but I’m kinda glad I didn’t, since everybody’s chimed in with stuff they’re excited about and there’s a pretty wide range. Some of these I knew about, others I didn’t, and this turned out to be way more interesting than if it had been about one thing!

                                                                                        1. 2

                                                                                          Yes, but: you need to avoid the mistakes of earlier attempts to do this, like CORBA, Java RMI, DistributedObjects, etc. A remote call is not the same as an in-process call, for all the reasons called out in the famous Fallacies Of Distributed Computing list. Earlier systems tried to shove that inconvenient truth under the rug, with the result that ugly things happened at runtime.

                                                                                          On the other hand, Erlang has of course been doing this well for a while.

I think we’re in better shape to deal with this now, thanks to all the recent work languages have been doing to provide async calls, Erlang-style channels, Actors, and better error handling through effect systems. (Shout out to Rust, Swift and Pony!)

                                                                                          1. 2

                                                                                            Yep! I’m encouraged by signs that we as a field have learned our lesson. See also: https://capnproto.org/rpc.html#distributed-objects

                                                                                            1. 1

                                                                                              Cap’nProto is already on my long list of stuff to get into…

                                                                                          2. 2

                                                                                            Great comment, yes, I completely agree.

This is linked from the article, but just in case you didn’t see it, http://catern.com/list_singledist.html lists a few attempts at exactly that. Including my own http://catern.com/caternetes.html

                                                                                            1. 2

                                                                                              This is what work like Spritely Goblins is hoping to push forward

                                                                                              1. 1

                                                                                                I think an important direction for future programming language development is better support for writing single programs that span multiple nodes.

                                                                                                Yes!

                                                                                                I think the model that has the most potential is something near to tuple spaces. That is, leaning in to the constraints, rather than trying to paper over them, or to prop up anachronistic models of computation.
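
For anyone who hasn’t bumped into tuple spaces: a toy single-process sketch (hypothetical Python, just to show the shape of the model) where components coordinate by putting and taking tuples rather than calling each other directly; the interesting part, of course, is when the space is shared across nodes.

```python
import threading

class TupleSpace:
    """Toy tuple space: processes coordinate by putting and taking tuples,
    rather than by calling each other directly."""

    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def put(self, tup):
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def take(self, pattern):
        # pattern is a tuple where None matches anything; blocks until a
        # matching tuple exists, then removes and returns it.
        def matches(t):
            return len(t) == len(pattern) and all(
                p is None or p == v for p, v in zip(pattern, t)
            )
        with self._cond:
            while True:
                for t in self._tuples:
                    if matches(t):
                        self._tuples.remove(t)
                        return t
                self._cond.wait()

space = TupleSpace()
space.put(("job", 42))
print(space.take(("job", None)))  # ('job', 42)
```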

                                                                                                1. 1

                                                                                                  better support for writing single programs that span multiple nodes.

                                                                                                  That’s one of the goals of Objective-S. Well, not really a specific goal, but more a result of the overall goal of generalising to components and connectors. And components can certainly be whole programs, and connectors can certainly be various forms of IPC.

                                                                                                  Having support for node-spanning programs also illustrates the need to go beyond the current call/return focus in our programming languages. As long as the only linguistically supported way for two components to talk to each other is a procedure call, the only way to do IPC is transparent RPCs. And we all know how well that turned out.

                                                                                                  1. 1

                                                                                                    indeed! Stuff like https://www.unisonweb.org/ looks promising.

                                                                                                  1. 17

These rules make perfect sense in a closed, Google-like ecosystem. Many of them don’t make sense outside of that context, at least not without serious qualifications. The danger with articles like this is that they don’t acknowledge the contextual requirements that motivate each practice, which leaves them liable to be cargo-culted into situations where they end up doing more harm than good.

                                                                                                    Automate common tasks

                                                                                                    Absolutely — unless building and maintaining that automation takes more time than just doing it manually. Which tends to happen, especially when you don’t have a team dedicated to infrastructure, and spending time on automation necessarily means not spending time on product development. Programmers love to overestimate the cost of toil, and the benefit of avoiding it; and to underestimate the cost of building and running new software.

                                                                                                    Stubs and mocks make bad tests

                                                                                                    Stubs and mocks are tools for unit testing, just one part of a complete testing breakfast. Without them, it’s more difficult to achieve encapsulation, build strong abstractions, and keep complex systems coherent. You need integration tests, absolutely! But if you just have integration tests, you’re stacking the deck against yourself architecturally.

                                                                                                    Small frequent releases

                                                                                                    No objection.

                                                                                                    Upgrade dependencies early, fast, and often

                                                                                                    Big and complex dependencies, subject to CVEs, and especially if they interface with out-of-process stuff that may not retain a static API? Absolutely. Smaller dependencies, stuff that just serves a single purpose? It’s make-work, and adds a small amount of continuous risk to your deployments — even small changes can introduce big bugs that skirt past your test processes — which may not be the best choice in all environments.

                                                                                                    Expert makes everyone’s update

(Basically: update your consumers for them.) This one in particular is so pernicious. The relationship between author and consumer is one to many, with no upper bound on the many. Authors always owe some degree of care and responsibility to their consumers, but not, like, total fealty. That’s literally impossible in open ecosystems, and even in closed ones, taking it to this extreme rarely makes sense in the cost/benefit sense. Software is always an explorative process, and needs to change to stay healthy; extending authors’ domain of responsibility literally into the codebases of their consumers makes change just enormously difficult. That’s appropriate in some circumstances, where the cost of change is very high! But the cost of change is not always very high. Sometimes, often, it’s more important to let authors evolve their software relatively unconstrained than to bind them to Hyrum’s Law.

                                                                                                    1. 9

                                                                                                      Stubs and mocks are tools for unit testing, just one part of a complete testing breakfast. Without them, it’s more difficult to achieve encapsulation, build strong abstractions, and keep complex systems coherent.

                                                                                                      extending authors’ domain of responsibility.. into the codebases of their consumers.. is appropriate in some circumstances..

                                                                                                      The second bullet here rebuts the first if you squint a little. When subsystems have few consumers (the predominant case for integration tests), occasionally modifying a large number of tests is better than constantly relying on stubs and mocks.

                                                                                                      You can’t just dream up strong abstractions on a schedule. Sometimes they take time to coalesce. Overly rigid mocking can prematurely freeze interfaces.

                                                                                                      1. 2

                                                                                                        I’m afraid I don’t really understand what you’re getting at here. I want to! Do you maybe have an example?

                                                                                                        You can’t just dream up strong abstractions on a schedule. Sometimes they take time to coalesce. Overly rigid mocking can prematurely freeze interfaces.

                                                                                                        I totally agree! But mocking at component encapsulation boundaries isn’t a priori rigid, I don’t think?

                                                                                                        When subsystems have few consumers (the predominant case for integration tests), occasionally modifying a large number of tests is better than constantly relying on stubs and mocks.

                                                                                                        I understand integration tests as whole-system, not subsystem. Not for you?

                                                                                                        1. 2

                                                                                                          I need to test one subsystem. I could either do that in isolation using mocks to simulate its environment, or in a real environment. That’s the trade-off we’re talking about, right? When you say “integration tests make it difficult to achieve encapsulation” I’m not sure what you mean. My best guess is that you’re saying mocks force you to think about cross-subsystem interfaces. Does this help?

                                                                                                          1. 2

                                                                                                            What is a subsystem? Is it a single structure with state and methods? A collection of them? An entire process?

                                                                                                            edit:

                                                                                                            I need to test one subsystem. I could either do that in isolation using mocks to simulate its environment, or in a real environment. That’s the trade-off we’re talking about, right? When you say “integration tests make it difficult to achieve encapsulation” I’m not sure what you mean.

                                                                                                            Programs are a collection of components that provide capabilities and depend on other components. So in the boxes-and-lines architecture diagram sense, the boxes. They encapsulate the stuff they need to do their jobs, and provide their capabilities as methods (or whatever) to their consumers. This is what I’m saying should be testable in isolation, with mocks (fakes, whatever) provided as dependencies. Treating them independently in this way encourages you to think about their APIs, avoid exposing internal details, etc. etc. — all necessary stuff. I’m not saying integration tests make that difficult, I’m saying if all you have is integration tests, then there’s no incentive to think about componentwise APIs, or to avoid breaking encapsulation, or whatever else. You’re treating the whole collection of components as a single thing. That’s bad.

                                                                                                            If you mean subsystem as a subset of inter-related components within a single application, well, I wouldn’t test anything like that explicitly.

                                                                                                            1. 2

                                                                                                              All I mean by it is something a certain kind of architecture astronaut would use as a signal to start mocking :) I’ll happily switch to “component” if you prefer that. In general conversations like this I find all these nouns to be fairly fungible.

                                                                                                              More broadly, I question your implicit premise that encapsulation and whatnot is something to pursue as an end in itself. When I program I try to gradually make the program match its domain. My programs tend to start out as featureless blobs and gradually acquire structure as I understand a domain and refactor. I don’t need artificial pressures to progress on this trajectory. Even in a team context, I don’t find teams that use them to be better than teams that don’t.

                                                                                                              I wholeheartedly believe that tests help inexperienced programmers learn to progress on this trajectory. But unit vs integration is in the noise next to tests vs no tests.

                                                                                                              1. 2

                                                                                                                But unit vs integration is in the noise next to tests vs no tests.

                                                                                                                My current company is a strong counterpoint against this.

                                                                                                                Lots of integration tests, which have become sprawling, slow, and flaky.

                                                                                                                Very few unit tests – not coincidentally, the component boundaries are not crisp, how things relate is hard to follow, and dependencies are not explicitly passed in (so you can’t use fakes). Hence unit tests are difficult to write. It’s a case study in the phenomenon @peterbourgon is describing.

                                                                                                                1. 2

                                                                                                                  I’ve experienced it as well. I’ve also experienced the opposite, codebases with egregious mocking that were improved by switching to integration tests. So I consider these categories to be red herrings. What matters is that someone owns the whole, and takes ownership of the whole by constantly adjusting boundaries when that’s needed.

                                                                                                                  1. 2

                                                                                                                    codebases with egregious mocking

                                                                                                                    Agreed, I’ve seen this too.

                                                                                                                    So I consider these categories to be red herrings.

                                                                                                                    I don’t think this follows though. Ime, the egregious mocking always results from improper application code design or improper test design. That is, any time I’ve seen a component like that, the design (of the component, of the test themselves, or of higher parts of the system in which the component is embedded) has always been faulty, and the hard-to-understand mocks would melt away naturally when that was fixed.

                                                                                                                    What matters is that someone owns the whole, and takes ownership of the whole by constantly adjusting boundaries when that’s needed.

                                                                                                                    Per the previous point, ownership alone won’t help if the owner’s design skills aren’t good enough. I see no way around this, though I wish there were.

                                                                                                                2. 2

                                                                                                                  More broadly, I question your implicit premise that encapsulation and whatnot is something to pursue as an end in itself. When I program I try to gradually make the program match its domain. My programs tend to start out as featureless blobs and gradually acquire structure as I understand a domain and refactor. I don’t need artificial pressures to progress on this trajectory. Even in a team context, I don’t find teams that use them to be better than teams that don’t.

                                                                                                                  This is a fine process! Follow it. But when you put your PR up for review or whatever, this process needs to be finished, and I need to be looking at well-thought-out, coherent, isolated, and, yes, encapsulated components. So I think it is actually a goal in itself. Technically it’s meant to motivate coherence and maintainability, but I think it’s an essential part of those things, not just a proxy for them.

                                                                                                        2. 5

                                                                                                          Stubs and mocks are tools for unit testing, just one part of a complete testing breakfast. Without them, it’s more difficult to achieve encapsulation, build strong abstractions, and keep complex systems coherent. You need integration tests, absolutely! But if you just have integration tests, you’re stacking the deck against yourself architecturally.

Traditional OO methodology encourages you to think of your program as loosely coupled boxes calling into each other, and your unit test should focus on exactly one box and stub out all the other boxes. But that’s not a suitable model for everything.

                                                                                                          Consider a simple function for calculating factorial of n: when you write a unit test for it, you wouldn’t stub out the * operation, you take it for granted. But in a pure OO sense, the * operation is a distinct “box” that the factorial function is calling into, so a unit test that doesn’t stub out * is technically an integration test, and a “real” unit test should stub it out too. But we know that the latter is just meaningless (you’ll essentially be re-implementing *, but for a small set of operands in the stubs) and we still happily call the former a unit test.
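To make that concrete, here is a minimal Swift sketch of the factorial example (function and test names are mine, not from the comment): the test exercises factorial together with the built-in * and nobody would call that cheating.

    import XCTest

    // The * operation is used directly; it is taken for granted, not stubbed.
    func factorial(_ n: Int) -> Int {
        precondition(n >= 0, "undefined for negative n")
        return n <= 1 ? 1 : (2...n).reduce(1, *)
    }

    final class FactorialTests: XCTestCase {
        // What everyone happily calls a "unit test", even though it also
        // exercises the built-in * "box".
        func testKnownValues() {
            XCTAssertEqual(factorial(0), 1)
            XCTAssertEqual(factorial(5), 120)
        }
    }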

A more suitable model for this scenario is to think of some dependencies as implementation details and, instead of stubbing them out, use either the real thing or something that replicates its behavior (called "fakes" at Google). These boxes might still be dependencies in a technical sense (e.g. subject to dependency injection), but they should be considered "hidden" in an architectural sense. The * operation in the former example is one such dependency. If you are unit testing some web backend, databases often fall into this category too.
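A rough Swift sketch of that distinction (all names are illustrative, not from any real codebase): the store is still injected, but in tests it is replaced by a fake that actually behaves like the real thing, rather than by a mock with scripted expectations.

    // The dependency, architecturally "hidden" behind a small protocol.
    protocol UserStore {
        func save(name: String, id: Int)
        func name(for id: Int) -> String?
    }

    // The production version might wrap a real database. The fake below is a
    // genuine, working in-memory implementation, not a scripted mock.
    final class InMemoryUserStore: UserStore {
        private var rows: [Int: String] = [:]
        func save(name: String, id: Int) { rows[id] = name }
        func name(for id: Int) -> String? { rows[id] }
    }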

                                                                                                          Still, the real world is quite complex, and there are often cases that straddle the line between a loosely-coupled-box dependency and a mostly-implementation-detail dependency. Choosing between them is a constant tradeoff and requires evaluation of usage patterns. Even the * operation could cross over from the latter category to the former, if you are implementing a generic function that supports both real number multiplications and matrix multiplications, for example.

                                                                                                          1. 6

                                                                                                            Consider a simple function for calculating factorial of n: when you write a unit test for it, you wouldn’t stub out the * operation, you take it for granted. But in a pure OO sense, the * operation is a distinct “box” that the factorial function is calling into, so a unit test that doesn’t stub out * is technically an integration test, and a “real” unit test should stub it out too.

                                                                                                            Imo this is a misunderstanding (or maybe that’s what you’re arguing?). You should only stub out (and use DI for) dependencies with side effects (DB calls, network calls, File I/O, etc). Potentially if you had some really slow, computationally expensive pure function, you could stub that too. I have never actually run into this use-case but can imagine reasons for it.

                                                                                                            1. 2

                                                                                                              I think we’re broadly in agreement.

                                                                                                              But in a pure OO sense, the * operation is a distinct “box” that the factorial function is calling into, so a unit test that doesn’t stub out * is technically an integration test

                                                                                                              Well, these terms aren’t well defined, and I don’t think this is a particularly useful definition. The distinct boxes are the things that exist in the domain of the program (i.e. probably not language constructs) and act as dependencies to other boxes (i.e. parameters to constructors). So if factorial took multiply as a dependency, sure.

                                                                                                              instead of stubbing them out, use either the real thing or something that replicates its behavior

                                                                                                              Names, details, it’s all fine. The only thing I’m claiming is important is that you’re able to exercise your code, at some reasonably granular level of encapsulation, in isolation.

                                                                                                              If you have a component that’s tightly coupled to the database with bespoke SQL, then consider it part of the database, and use “the real thing” in tests. Sure. Makes sense. But otherwise, mocks (fakes, whatever) are a great tool to get to this form of testability, which is in my experience the best proxy for “code quality” that we got.

                                                                                                            2. 4

                                                                                                              Absolutely — unless building and maintaining that automation takes more time than just doing it manually. Which tends to happen, especially when you don’t have a team dedicated to infrastructure, and spending time on automation necessarily means not spending time on product development.

                                                                                                              Obligatory relevant XKCDs:

                                                                                                              1. 2

                                                                                                                Stubs and mocks are tools for unit testing,

                                                                                                                Nope.

                                                                                                                Why I don’t mock

                                                                                                                1. 4

                                                                                                                  Nope

                                                                                                                  That mocks are tools for unit testing is a statement of fact?

                                                                                                                  Why I don’t mock

                                                                                                                  I don’t think we’re talking about the same thing.

                                                                                                                  1. 1

                                                                                                                    Mocks are tools for unit testing the same way hammers are tools for putting in screws.

                                                                                                                    1. 2

                                                                                                                      A great way to make pilot holes so you don’t split your board while putting the screw in?

                                                                                                                      1. 1

A great way to split hairs without actually putting a screw in? ¯\_(ツ)_/¯

                                                                                                                        1. 1

You seem way more interested in dropping zingers than actually talking about your position.

                                                                                                                          1. 1

                                                                                                                            I already spelled out my position in detail in my linked article, which echoes the experience that the Google book from TFA talks about.

                                                                                                                            Should I copy-paste it here?

Mocks are largely a unit-testing anti-pattern: they can easily make your tests worse than useless, because you believe you have real tests when you actually do not. This is worse than not having tests and at least knowing you don't have tests. (It is also more work.) Stubs have the same structural problems, but are not quite as bad as mocks, because they are more transparent/straightforward.

                                                                                                                            Fakes are OK.

                                                                                                                            1. 1

                                                                                                                              Mocks, stubs, fakes — substitutes for the real thing. Whatever. They play the same role.

                                                                                                                              1. 1

                                                                                                                                They are not the same thing and do not play the same role.

                                                                                                                                I recommend that you learn why and how they are different.

                                                                                                                                1. 2

                                                                                                                                  I understand the difference, it’s just that it’s too subtle to be useful.

                                                                                                                                  1. 0

                                                                                                                                    I humbly submit that if you think the difference is too subtle to be useful, then you might not actually understand it.

Because the difference is huge. And it seems that Google Engineering agrees with me. Now, the fact that Google Engineering believes something doesn't automatically make it right; they can mess up like anyone else. On the other hand, they have a lot of really, really smart engineers, and a lot of experience building a huge variety of complex systems. So it seems at least conceivable that all of us ("Google and Me", LOL) might have, in the tens or hundreds of thousands of engineer-years, figured out that a distinction that may seem very subtle on the surface is, in fact, profound.

                                                                                                                                    Make of that what you will.

                                                                                                                      2. 1

                                                                                                                        I’m sure we’re not talking about the same thing.

                                                                                                                1. 12

Not comparing like with like.

                                                                                                                  SQLite is 35% faster reading and writing within a large file than the filesystem is at reading and writing small files. Most filesystems I know are very, very slow at reading and writing small files and much, much faster at reading and writing within large files.

For example, for my iOS/macOS performance book, I measured the difference writing 1GB of data in files of different sizes, ranging from 100 files of 10MB each to 100K files of 10KB each.

Overall, the times span about an order of magnitude, and even the final step, from individual file sizes of 100KB to 10KB, differed by a factor of 3-4.
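For readers who want to reproduce something like this, here is a rough sketch (not the book's actual benchmark code) of timing the many-small-files case in Swift:

    import Foundation

    // Write `count` files of `size` bytes each into `dir` and time it.
    // Compare e.g. (count: 100, size: 10_000_000) with (count: 100_000, size: 10_000):
    // same total data, wildly different per-file metadata overhead.
    func timeWrites(count: Int, size: Int, in dir: URL) throws -> TimeInterval {
        let blob = Data(repeating: 0xAB, count: size)
        let start = Date()
        for i in 0..<count {
            try blob.write(to: dir.appendingPathComponent("file-\(i).bin"))
        }
        return Date().timeIntervalSince(start)
    }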

                                                                                                                  1. 12

                                                                                                                    You are technically correct, but you’re looking at this the wrong way. The point of the article is that storing small blobs within a single file is faster than storing them in individual files. And SQLite is a good way to store them in a single file.

                                                                                                                    1. 1

It’s a bit more than that (though note that the small vs big thing can be very different between hard disks and SSDs, and between CoW and conventional filesystems). A write to a complete file is not just a data operation; it is an update to both the file contents and the filesystem metadata, including any directories that now include a reference to that file. A write within a file is not comparable because it is just an update to the contents (if it also resizes the file, it updates a smaller set of metadata; on a CoW filesystem, or one that provides cryptographic integrity guarantees, it may also update more metadata). SQLite also provides updates to metadata (in most cases, richer metadata than a filesystem). The filesystem typically provides concurrent updates (though with some quite exciting semantics), which SQLite doesn’t provide.

                                                                                                                      1. 1

                                                                                                                        The individual-files approach is kind of like the common newbie SQLite mistake of making multiple inserts without an enclosing transaction, wherein you get a disk flush after each individual insert. I’m sure a file update is less expensive, but that’s because the filesystem is only making its own metadata fully durable, not your data. 🤬
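For anyone who hasn't hit that mistake yet, here is the shape of the fix, sketched against the SQLite C API from Swift (it assumes a table t(x) already exists; error handling is omitted, and real code would use prepared statements and check return codes):

    import SQLite3

    // Without BEGIN/COMMIT every INSERT is its own transaction and can
    // force its own flush to disk; wrapping them makes it one durable write.
    func insertMany(_ db: OpaquePointer?, values: [Int]) {
        sqlite3_exec(db, "BEGIN", nil, nil, nil)
        for v in values {
            sqlite3_exec(db, "INSERT INTO t(x) VALUES (\(v))", nil, nil, nil)
        }
        sqlite3_exec(db, "COMMIT", nil, nil, nil)
    }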

                                                                                                                        As I learn more about filesystems and databases (I’m writing a small b-tree manager as a research project) I’m considering what it is that makes one different from the other. A filesystem is really just a specialized database on raw block storage. (Apple’s former HFS+ is literally built on b-trees.) It would be easy to build a toy filesystem atop a key-value database.

The feature set of a filesystem has been pretty much unchanged for decades; even small advances like Apple’s resource forks and Be’s indexing were abandoned, although we do have file metadata attributes on most filesystems now. We really need to rethink it — I love the ideas behind NewtonOS’s “soup”.

                                                                                                                      2. 1

I don’t think I am the one looking at it the wrong way ¯\_(ツ)_/¯

                                                                                                                        You might notice that there are two parts of your interpretation of the article:

                                                                                                                        storing small blobs within a single file is faster than storing them in individual files

                                                                                                                        This is true, and fairly trivially so. Note that it has nothing to do with SQLite at all. Pretty much any mechanism will do this.

                                                                                                                        And SQLite is a good way to store them in a single file.

                                                                                                                        That may also be true, but it has nothing to do with the performance claim.

                                                                                                                        Neither of these statements supports the claim of the article that “SQLite is 35% faster than the Filesystem”, not even in combination.

In addition, I am pretty dubious about the explanation given (open and close system calls). To me, a much more likely cause is that filesystems will prefetch and coalesce reads and writes within a file, whereas they will not do so across files. So it is actually the filesystem that is giving the speed boost.

                                                                                                                    1. 3

                                                                                                                      Oh, my pet peeve!

                                                                                                                      Rant on

                                                                                                                      Compilers should not silently optimise a loop I’ve written to sum integers to the closed formula version.

                                                                                                                      Never ever. No.

                                                                                                                      If they are smart enough to figure out that there’s a closed formula, then issue a warning telling me about that formula, or maybe tell me about a library function that does it.

                                                                                                                      So I can change the code.

                                                                                                                      Rant off

                                                                                                                      Thank you for your attention.

                                                                                                                      1. 3

                                                                                                                        Compilers should not silently optimise a loop I’ve written to sum integers to the closed formula version.

                                                                                                                        Serious question: why not? If it produces the same result under all possible inputs, what’s your argument against it?

                                                                                                                        1. 1

                                                                                                                          Well, I already wrote what the compiler should do instead.

                                                                                                                          There are essentially 2 possibilities why the loop solution is in there:

                                                                                                                          1. I don’t know the closed form solution
                                                                                                                          2. I do know the closed form solution, but decided not to use it

                                                                                                                          In neither case is silently replacing what I wrote the right answer.

If I do know the closed form solution and I put the other solution in there, that was probably for a reason, and the compiler should not override me. For example, I wanted to time the CPU doing arithmetic. And of course that is another “output” of this computation, so they do not produce exactly the same result. So in this case, please don’t replace it, or if you do replace it, at least tell me about it.

                                                                                                                          If I don’t know the closed form, then I should probably learn about it, because the code would be better if it used the closed form solution instead of the loop. That would make sure that it is always used, rather than at the whims of the optimiser, which we can’t really control. It also makes compile times faster and makes the code more intention-revealing.
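To spell out the two versions being argued about, here is a small Swift sketch: the loop is the form an optimiser may recognise, the closed form (Gauss) is what it would substitute. Whether a given compiler actually performs this substitution depends on the language and optimisation level; this is just to make the discussion concrete.

    // The loop version: what you might actually write.
    func sumLoop(_ n: Int) -> Int {
        var total = 0
        var i = 1
        while i <= n {
            total += i
            i += 1
        }
        return total
    }

    // The closed-form version an optimiser could substitute.
    func sumClosedForm(_ n: Int) -> Int {
        n * (n + 1) / 2
    }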

                                                                                                                          1. 3

                                                                                                                            There’s a third reason that is far more common:

                                                                                                                            The closed form would lead to incomprehensible, difficult-to-modify, source code.

                                                                                                                            The entire point of an optimising compiler is to allow the programmer to write clean, readable, maintainable source code and generate assembly that they might be able to write by hand but would probably never want to modify.

                                                                                                                            The transform pass that’s replacing the loop with the closed form doesn’t know if that’s what you wrote in a single function or if this is the result of a load of earlier passes. Typically it will see this kind of thing after inlining and constant propagation. Source code isn’t trying to sum all integers in a range, it’s trying to do something more generic and it’s only after multiple rounds of transformation and analysis that the compiler can determine that it can aggressively specialise the loop for this particular use.

                                                                                                                            In particular, you typically see summing integers in a loop as a single accumulator in a loop that does a load of other things so that code at the end can see it. If you want to use the accumulated value inside the loop, that’s the correct form, otherwise the right thing is to pull it out and compute it at the end with the closed form. Imagine you do that by hand. Now you want to debug the loop and know the cumulative total, so in debug builds you end up keeping an accumulator around as well and then discard it and use the closed form. If your closed form computation had bugs, you wouldn’t see these in the debug build. You could stick in an assert that checks that the two are the same. Now your code is a mess. Or you could just use the accumulator for both, have the same code paths for debug and release builds, and have the compiler generate good code for the release builds.

                                                                                                                            If you want your compiler to warn you explicitly, then you need to keep a lot more state around. Compilers are written as sequences of transform passes (with analyses off to the side). They don’t know anything about prior passes, they just take some input and produce some output. If you want them to warn then they need to be able to explain to the programmer why they are warning. Are you happy with compilers requiring an increase in memory by a polynomial factor?

                                                                                                                            1. -1

                                                                                                                              The closed form would lead to incomprehensible, difficult-to-modify, source code.

                                                                                                                              Hard disagree. Trying to infer what an iterative loop will actually compute is what’s hard to figure out, because you have to play computer and keep iterative state in your head, which leads to subtle bugs when you go back and modify that code later.

                                                                                                                              See also: Go To Statement Considered Harmful:

                                                                                                                              My second remark is that our intellectual powers are rather geared to master static relations and that our powers to visualize processes evolving in time are relatively poorly developed. For that reason we should do (as wise programmers aware of our limitations) our utmost to shorten the conceptual gap between the static program and the dynamic process, to make the correspondence between the program (spread out in text space) and the process (spread out in time) as trivial as possible.

Minimising the need to trace the dynamic execution of code should always be a goal.

                                                                                                                              allow the programmer to write clean, readable, maintainable source code

                                                                                                                              Exactly. But here it is doing the opposite: allowing the programmer to write low-level, mucky iterative code and infer a clean higher-level semantic description from that lower level code. That I don’t want.

                                                                                                                              Typically it will see…

                                                                                                                              This seems highly contrived and speculative.

                                                                                                                              In particular [computing a sum alongside doing other computation]

                                                                                                                              Again, that example seems extremely contrived. Furthermore, if the loop is doing anything else of substance, keeping the sum around will be entirely negligible.

                                                                                                                              If you want your compiler to warn you explicitly…[the world will implode]

                                                                                                                              Again, lots of speculation.

                                                                                                                              I am also fine with them simply not doing this “optimisation”.

                                                                                                                              1. 3

                                                                                                                                This reply reads like a troll, I have flagged it as such.

                                                                                                                                A trivial(?) counterexample -

                                                                                                                                Fresh from being hired, you’re tasked with the following business-critical code: add all numbers from 1 to 1,000 except those that match FizzBuzz.

                                                                                                                                What I’d do, and any normal person would do, is to write a loop, and check numbers against the rule, not adding them to the total if they match. Presumably the compiler will derive a closed form (which I cannot provide, Wolfram Alpha failed me, but the OEIS sequence gives a hint that it’s complicated).
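A sketch of that loop in Swift (my reading of “matches FizzBuzz” is “would produce any output”, i.e. multiples of 3 or 5; adjust the rule if you read it differently):

    // Sum 1...1000, skipping numbers the FizzBuzz rule would match.
    // The later enhancement (exclude mod 5 or mod 7 instead) is a
    // one-line change to the `where` clause.
    func sumExcludingFizzBuzz(upTo limit: Int = 1_000) -> Int {
        var total = 0
        for n in 1...limit where n % 3 != 0 && n % 5 != 0 {
            total += n
        }
        return total
    }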

                                                                                                                                What you would do is write the above, get an answer from the compiler which is the closed form, delete the loop, insert the closed form, add a comment saying “closed issue #14495982” and go on with your day.

2 years later, your successor has to implement an urgent enhancement: only exclude those numbers which are mod 5 == 0 or mod 7 == 0. Your closed form is inscrutable and hardly trivial to amend, as opposed to ours, which requires changing 1 variable. (Of course our code might implement Enterprise FizzBuzz where the modands are parametrized, but that way madness lies.)

                                                                                                                                In summary and conclusion - you should write code for other people to understand, not to make the compiler’s life easier.

                                                                                                                                1. 1

                                                                                                                                  I think there can be a middle ground. Better tooling to show the developer what “no optimization” vs “optimization” will do to your code.

                                                                                                                                  Like… annotated godbolt inline in your IDE, for those of us who have used it to look at C++ code.

                                                                                                                        2. 2

                                                                                                                          Fully agree. I would say this about UB:

                                                                                                                          If correctness is critical, you want the compiler to warn about UB, not use it to infer things. Such as inferring that a branch is unreachable and deleting it (insert Linus rant about deleting null pointer checks), or as in this case, “optimising” away something that is a bug in the source code (infinite loop without side effects on overflow) into something else not prescribed by the source code or exhibited by the unoptimised program.

                                                                                                                          In my own programs, I mostly use natural numbers (unsigned integer is such a misnomer for natural number) – I don’t expect to have much UB, so it should be no problem to turn on such a warning.

                                                                                                                          1. 2

                                                                                                                            Here are some of the reasons why most compilers don’t do this sort of diagnostic:

                                                                                                                            • Adding diagnostics breaks the build if users are compiling with warnings as errors.
                                                                                                                              This is why gcc -Wall hasn’t changed for years.
                                                                                                                            • The user may not understand, care, or be willing to change their code.
You can’t, for example, modify the SPEC benchmarks to improve performance.
                                                                                                                              Detecting the loop and optimizing it improves performance for everyone not just the few who fix their code.
                                                                                                                            • Diagnostics are done by an earlier pass; there isn’t enough information to issue a diagnostic.
                                                                                                                              Many optimizations may have modified the intermediate representation, e.g. the loop might have been restructured, contained in an inlined or cloned function, etc.
                                                                                                                            1. 1

                                                                                                                              What a great summary of the sorry state of compilers today … for users.

                                                                                                                              All of these “reasons” are barely more than excuses as to why doing this would be inconvenient for the creators of compilers.

                                                                                                                              None of them invalidate my reason for why doing this is the right thing for users of compilers.

                                                                                                                          1. 5

                                                                                                                            Smells like a reinvention/rediscovery of the access side of lenses. I don’t know if Rust’s type system admits lenses to the level you can have them in a language like Haskell, but I’d suggest reading up on them.

                                                                                                                            1. 3

                                                                                                                              The post author works on the druid GUI toolkit, which makes use of Lenses so I imagine they’re aware. I’m not sure whether the implementation of them in druid is or can be as sophisticated as Haskell though.

                                                                                                                              1. 1

                                                                                                                                Hmm…

                                                                                                                                Keypaths were released in 1994 as part of NeXT’s EOF framework. When was the Haskell lens library released?
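For readers who only know the lens terminology: EOF’s keypaths were string-based (think valueForKeyPath: with a path like "department.manager.name"), if I remember the history correctly, and the same idea survives as first-class, typed key paths in today’s Swift. A small sketch of the modern form (names are illustrative):

    struct Address { var city: String }
    struct Person  { var name: String; var address: Address }

    // A writable key path is a first-class reference to a nested property:
    // roughly the "access side of lenses" from the parent comment.
    let cityPath = \Person.address.city

    var p = Person(name: "Ada", address: Address(city: "London"))
    print(p[keyPath: cityPath])          // "London"  (read through the path)
    p[keyPath: cityPath] = "Cambridge"   // write through the same path
    print(p.address.city)                // "Cambridge"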

                                                                                                                                1. 2

https://julesh.com/2018/08/16/lenses-for-philosophers/ identifies lens-like structures in a 1958 work by Kurt Gödel.

                                                                                                                                  1. 2

Well, the Nimrud Lens goes back to the 7th century BC. ¯\_(ツ)_/¯

                                                                                                                                    Seriously, you gotta stop with the Functional Appropriation: just because something is similar to something in FP doesn’t mean that it’s derivative of or a reinvention of the thing in FP. Particularly if the non-FP thing predates the FP thing.

                                                                                                                              1. 3

                                                                                                                                Hmm…isn’t this really just a bug in the test program?

It just steps by a constant amount regardless of the number of iterations, so a greater number of iterations pushes the range of inputs way beyond what is reasonable for sin()/cos(), triggering the range reduction.

Easy fix: divide whatever your step size is by the number of iterations.
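Sketched in Swift, the fix amounts to deriving the step from the iteration count so the arguments stay in a sensible range (the constants here are illustrative, not from the test program in question):

    import Foundation

    let iterations = 1_000_000
    let totalRange = Double.pi * 2                 // keep inputs within one period
    let step = totalRange / Double(iterations)     // instead of a fixed step per iteration

    var sum = 0.0
    var x = 0.0
    for _ in 0..<iterations {
        sum += sin(x) + cos(x)                     // arguments stay small: no range reduction
        x += step
    }
    print(sum)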

                                                                                                                                1. 2
                                                                                                                                  1. How about detaching a thread that then simply blocks?
// Assumed context: this runs on a detached background thread, and
// userLocation(), weatherConditions(for:) and a blocking data(from:) are
// hypothetical synchronous stand-ins for the corresponding async APIs.
let location = try userLocation()
let conditions = try weatherConditions(for: location)
let (imageData, response) = try URLSession.shared.data(from: conditions.imageURL)
if (response as? HTTPURLResponse)?.statusCode == 200 {
    let image = UIImage(data: imageData)
    // ... use image ...
} else {
    // failed
}
                                                                                                                                  

                                                                                                                                  The OS already has all these facilities, not sure why we have to recreate them from scratch in user space.

2. Both Combine (Rx,…) and async/await map dataflow onto familiar and seemingly “easy” call/return structures. However, the more you make the code look like it is call/return, the further away you get from what the code actually does, making understanding such code more and more difficult.
                                                                                                                                  1. 3

                                                                                                                                    Apple points out in a Swift concurrency WWDC talk that wasting threads can have a much bigger impact on devices like iPhones. Having 100k worker threads on a modern Linux server isn’t a big deal at all. But on a RAM-constrained device trying to use as little energy as possible that’s not a good idea.

                                                                                                                                    Consider an app that needs to download 200 small files in the background (the example from the video linked above). Blocking in threads, that’s 100 MB of thread stacks alone, not to mention the OS-level data structures and other overhead. On a server that’s negligible. On a brand new 2021 iPhone with 4 GB of RAM that’s 1/40 of physical memory. 1/40 sounds small, but users run dozens of apps at a time. 1/40 of RAM can be 1/4 to 1/2 your entire memory budget for your app. Not a good use of resources.

                                                                                                                                    Update: both replies mention thread stacks are virtual memory, and likely won’t use the full 512 KB allocated for them. Which is a good point. Nevertheless, the async model has proven repeatedly to use less RAM and have lower overhead than a threaded model in multiple applications, most famously nginx vs Apache. Personally I think async/await has more utility on an iPhone than in 99% of web app servers.

                                                                                                                                    1. 2

                                                                                                                                      Thread stacks are demand paged.

But even if they weren’t, userspace threads are a cleaner abstraction. Async/await is manual thread scheduling.

                                                                                                                                      1. 2

                                                                                                                                        That Apple statement, like a lot of what Apple says on performance and concurrency, is at best misleading.

                                                                                                                                        First, iPhones are veritable supercomputers with amazing CPUs and tons of memory. The C10K problem was coined in 1999, computers then had something like 32-128MB of RAM, Intel would sell you CPUs between 750MHz and 1GHz. And the C10K problem was considered something for the most extreme highly loaded servers. How are you going to get 10K connections on an iPhone, let alone 100K? No client-side workloads reasonably produce 100K worker threads. Wunderlist, for example, used a maximum of 3-4 threads, except for the times when colleagues put in some GCD, at which point that number would balloon to 20-40. Second, Apple technologies such as GCD actually produce way more threads than just having a few worker threads that block. Third, those technologies also perform significantly worse, with more overhead, than just having a few worker threads that block. We will see how async/await does in this context.

For your specific example, downloading 200 (small) files simultaneously is a bad idea, as your connection will max out way before that, closer to 10 simultaneous requests. So you’re not downloading any faster, you are just making each download take 20 times longer. Meaning you increase the load on the client and on the server and add a very high risk of timeouts. Really, really bad idea. If async/await makes such a scenario easier to accomplish and thus more likely to actually happen, that would be a good case for not having it.
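To be fair to the example, capping the number of in-flight requests is not hard. One possible sketch (the limit of 10 and the completion handling are illustrative; URLSessionConfiguration’s httpMaximumConnectionsPerHost is another knob for the same thing):

    import Foundation

    // Start at most `limit` downloads at a time instead of all 200 at once.
    func download(_ urls: [URL], limit: Int = 10) {
        let gate = DispatchSemaphore(value: limit)
        let done = DispatchGroup()
        for url in urls {
            gate.wait()                            // blocks once `limit` are in flight
            done.enter()
            URLSession.shared.dataTask(with: url) { data, _, _ in
                // ... handle `data` ...
                gate.signal()
                done.leave()
            }.resume()
        }
        done.wait()                                // wait for the stragglers
    }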

                                                                                                                                        Not sure where you are getting your 100MB of thread stacks from. While threads get 512K of stack space, that is virtual memory, so just address space. Real memory is not allocated until actually needed, and it would be a very weird program that would come even close to maxing out the stack space (with deep recursion/nested call stacks + lots of local data on those deeply nested stacks) on all these I/O threads.

                                                                                                                                        And of course iOS has special APIs for doing downloads even while your app is suspended, so you should probably be using those if you’re worried about background performance.

                                                                                                                                        1. 2

                                                                                                                                          This may be true, but it doesn’t have to leak into our code. Golang has been managing just fine with m:n green threads.

                                                                                                                                          1. 1

                                                                                                                                            I do wonder why Apple didn’t design Swift that way. Maybe there are some Obj-C interop issues? I’d love a primary source on the topic.

                                                                                                                                            1. 2

                                                                                                                                              Why it was designed this way is a good question, but Objective-C interop is not the reason.

NeXTstep used cthreads, a user-level threads package.

                                                                                                                                      1. 23

Hmm…doesn’t surprise me. I only had a very brief encounter with a part of the Racket core team, but it was…memorable.

I attended a Racketfest out of curiosity when it was held in my city, with one of the core team in attendance. His presentation was OK, but his behavior during another presentation was truly outlandish. He kept interrupting the presenter, telling him he was wrong and that everything he was saying was BS. Admittedly the thesis was a bit questionable, but it was still interesting. And if you really, really want to make such a comment, do it in the Q&A. Once. Definitely not by interrupting the presentation. And most definitely not multiple times.

                                                                                                                                        OK, so maybe a one-off. Nope.

The same person was a visitor at my institute a little later. People presented their stuff. One person kept trying to tell him that if he were only allowed to continue with his presentation, it would explain the things he wasn’t getting. Nope. He kept stopping him, saying the terms used were wrong (they weren’t), and refused to let the presenter continue.

                                                                                                                                        At some point, he bluntly said: you are just PhD students, and I will be on the program committees of the conferences where you need to get your papers published, so you better adapt to how I see the world. Pure power play.

                                                                                                                                        Never seen anything like it.

                                                                                                                                        And I personally find what I’ve seen/heard of Linus, for example, absolutely OK. And RMS to me seems a bit weird, but that’s it.