1. 11

      I usually really enjoy your writing, but this seems totally disingenuous.

      You don’t even define what qualifies as “OOP” to you, so how could anyone NOT think that you’re going to “No true Scotsman” them if they show you what they believe to be good OOP?

      Does OOP have to use class inheritance? Does it have to involve mutable global state? Can you write OOP in the C language?

      Or, if you’re open to someone showing you a piece of code and you’ll just accept at face value that it’s OOP with no argument, then you should say THAT.

      You need to address that. But even then, I wouldn’t actually expect any replies because of the point(s) that @singpolyma raised- nobody is going to volunteer code to someone who is basically promising to (publically) criticize it.

      1. 5

        so how could anyone NOT think that you’re going to “No true Scotsman” them if they show you what they believe to be good OOP?

        I’m a bit confused. Usually it’s the other way around. I have some concrete criticism against OOP, and I hear “Oh, that’s just because you don’t know OOP. That’s not good OOP.” And that keeps on going forever, at which point it seems that the good OOP is everything that books and articles say, except all the things that people actually are doing in practice, but surely somewhere there must be some pristine OOP code.

        I haven’t thought that I could pull a “no true Scotsman” the other way, though I guess you’re right. That’s not my intention though.

        Does OOP have to use class inheritance? Does it have to involve mutable global state? Can you write OOP in the C language?

        I guess I’m open minded about it. The thing is, OOP is not well defined. Look - the code I write in Java is not very far from OOP: it always uses a lot of interfaces and DI, and yet I don’t consider it OOP, for various subtle reasons which usually get lost in abstract discussion.

        Having read and written many articles in the OOP debate, I think we as a community are just talking past each other now, criticizing/defending something that is not well defined and subjective.

        So I think it would be more productive to go through some actual code and talk about it, in relation to the OOP debate.

        nobody is going to volunteer code to someone who is basically promising to (publically) criticize it.

        You can volunteer someone else’s code, I don’t mind. :D

        I actually thought that this was going to be the default, because people are usually too shy to put forward their own code as the best example.

        That’s a thing with public OS projects. You put them out there, and you have to accept the fact that someone might… actually read the code and judge it.

        I promise not to be a douche about it. There’s plenty of my own code on github, none of it pristine, anyone is free to retaliate. :D

        1. 5

          If you do find examples of good OOP code, I predict they will mostly be written in Smalltalk, Erlang, or Dylan. I haven’t used Smalltalk or Dylan in earnest, and it’s been far too long since I used Erlang, or I’d find some examples myself.

          Edit: thinking more about this, it feels like the only way to find good OOP code is to try to find that mythical domain in which inheritance is actually a benefit rather than the enormous bummer it normally is, and one in which polymorphism is actually justified. The only example which comes to mind is GUI code where different kinds of widgets could inherit from a class hierarchy, which is why I expect you’d have the best luck by looking in Smalltalk for your examples.

          But overall it’s a misguided task IMO; merely by framing it as being about “OOP” in the first place you’ve already gone wrong. OOP is just a mash-up of several unrelated concepts like inheritance, polymorphism, encapsulation, etc, some of which are good and some of which are very rarely helpful. Talk instead about how to do encapsulation well, or about in what limited domains polymorphism is worth the conceptual overhead.

          1. 2

            But overall it’s a misguided task IMO; merely by framing it as being about “OOP” in the first place you’ve already gone wrong. OOP is just a mash-up of several unrelated concepts like inheritance, polymorphism, encapsulation, etc, some of which are good and some of which are very rarely helpful.

            I have a very similar intuition, but the point of the exercise is to find whatever people would consider as good OOP and take a look at it.

            If you do find examples of good OOP code, I predict they will mostly be written in either Smalltalk, Erlang, and Dylan.

            I’m not sure about Dylan, but the rest fits my intuition of one “good” piece of OOP being message passing, which I tend to call actor based programming.

            1. 1

              which I tend to call actor based programming

              Yes. “Actor” is another common word for “OOP”

              1. 4

                This is not quite true. OOP permits asynchronous message passing but actor-model code requires it. Most OO systems use (or, at least, default to) synchronous message passing.

                1. 2

                  This conflation is really problematic. I get that this is what it was supposed to mean a long time ago. But then OOP became synonymous with Java-like class-oriented programming. So now, any time one wants to talk about contemporary, common “OOP”, there’s a group of people coming in with “oh, but look at Erlang and Smalltalk, yada, yada - real OOP”, who technically do have a point, but are describing something nowhere close to what real-life OOP has looked like in my experience.

                  1. 4

                    It’s also completely ahistorical. The actor model has almost nothing to do with the development of object-based programming languages, and the actor model of concurrency is different from the actor model of computation, which is nonsensical (the inventor thinks he proved Turing and Gödel wrong with the actor model). Alan Kay changed his mind on what “true OOP was supposed to be” in the 1990s.

                    1. 1

                      Alan Kay changed his mind on what “true oop was supposed to be” in the 1990s.

                      Any link to what you mean exactly? I’m aware of http://userpage.fu-berlin.de/~ram/pub/pub_jf47ht81Ht/doc_kay_oop_en .

                      OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP. There are possibly other systems in which this is possible, but I’m not aware of them.

                      Is that what you have in mind?

                      1. 4

                        “OOP means only messaging” is revisionism, his earlier writing was much more class-centric. I document some of this here: https://www.hillelwayne.com/post/alan-kay/

                        1. 1

                          I just read that article, twice, and I still can’t figure out what the point is that it is trying to make, never mind actually succeeding at making it.

                          It most certainly isn’t evidence, let alone proof of any sort of “revisionism”.

                          1. 4

                            The point it’s trying to make is that if you read Alan Kay’s writing from around the time that he made Smalltalk, so 72-80ish, it’s absolutely not “just messaging”. Alan Kay started saying “OOP means only messaging” (and mistakenly saying he “invented objects”, though he’s stopped doing that) in the 90s, over a decade and a half after his foundational writing on OOP ideas. It’d be one thing if he said “I changed my mind / realized I was wrong”, but a lot of his modern writing strongly implies that “OOP is just messaging” was his idea all along, which is why I call it revisionism.

                            1. 1

                              I know what the point is that you (now) claim it makes. But that, to me, looks like revisionism, because the contents of the article certainly don’t support it, and also don’t appear to actually make that claim in any coherent fashion.

                              First, the title of the article makes a different claim: “Alan Kay did not invent objects”. Which is at best odd (and at worst somewhat slanderous), because he has never claimed to have invented object oriented programming, being very clear that he was inspired very directly by Simula and a lot of earlier work, for example the Burroughs machine, Ivan Sutherland’s Sketchpad etc.

                              In fact, he describes how one of his first tasks, as a grad student(?), was to look at this hacked Algol compiler, spreading the listing out in the hallway to try and grok it, because it did weird things to flow control. That was Simula.

                              Re-reading your article, I am guessing you seem to think of the following quote as the smoking gun:

                              I mean, I made up the term “objects.” Since we did objects first, there weren’t any objects to radicalize.

                              At least there was nothing else I could find, and you immediately follow that with “He later stopped claiming…”. The interview you cite is from 2012. Even the Squeak mailing list quote you showed was from 1998, and it refers to OOPSLA ’97. His HOPL II paper The Early History of Smalltalk is from 1993. That’s where he tells the Simula story.

                              So he “stopped” making the claim you insinuate he was making two decades before he - according to you - started making it. That seems a bit… weird. Of course, there were rumours on the Squeak mailing list that Alan Kay was actually a time-traveller from the future, partly fuelled by a mail sent from a machine with a misconfigured system clock. But apart from that scenario, I have a hard time seeing how he could start doing what you claim he did two decades after he stopped doing what you claim he did.

                              The simpler explanation is that he simply didn’t make that claim. Not in that 2012 interview, not before 2012 and not after 2012. And that you vastly mis- and over-interpreted an off-the-cuff remark in a random interview.

                              Please read the first sentence carefully: “I made up the term ‘objects’”. (My emphasis). He is clearly claiming to have coined the term, entirely consistently with both the facts and his later writings.

                              And he is clearly relating this to the systems that came later, C++ and Java, relative to which the Smalltalk “everything is an object” approach does appear radical. But of course it wasn’t a “radicalisation” relative to the current state of the art, because the current state of the art came later.

                              Yeah, he could have mentioned Simula at that point, but if you’ve ever given a talk or an interview you know that you sometimes omit some detail in order to move the main point along. And maybe he did mention it, but it was edited out.

                              But the question at hand was what OO is, according to Alan Kay, not whether he claimed to have invented it. Your article only addresses the latter question, incorrectly as it turns out, and doesn’t say anything whatsoever about the former.

                              And there it helps to look at the actual artefacts. Let’s take Smalltalk-72, the first Smalltalk. It was entirely message-based, objects and classes being a second-order phenomenon. In order to make it practical, because Smalltalk was a means to an end for them, not an end in itself, this was made more like existing systems over time, culminating in Smalltalk-80. This is a development Alan has consistently lamented.

                              In fact, in the very 2012 interview you misrepresent, he goes on to say the following:

                              The first Smalltalk was presented at MIT, and Carl Hewitt and his folks, a few months later, wrote the first Actor paper. The difference between the two systems is that the Actor model retained more of what I thought were the good features of the object idea, whereas at PARC, we used Smalltalk to invent personal computing

                              So Actors “retained more of the good features of the object idea”. What “good features” might that be, do you think?

                              In fact, there’s Alan’s famous quip from the OOPSLA ’97 keynote, The Computer Revolution hasn’t Happened Yet.

                              Actually I made up the term “object-oriented”, and I can tell you I did not have C++ in mind.

                              I am sure you’ve heard it. Alas, what he said next is hardly reported at all:

                              The important thing here is, I have many of the same feelings about Smalltalk.

                              And just reiterating the point above he goes on:

                              My personal reaction to OOP when I started thinking about it in the sixties.

                              So he was reacting to OOP in the sixties. The first Smalltalk was in the seventies. So either he thinks of himself as a time-traveller, or he clearly thinks that OOP was already invented, just like he always said.

                              Anyway, back to your claim of “revisionism” regarding messaging. I still can’t get a handle on it, because all the sources you cite absolutely harp on the centrality of messaging. For example, in the “Microelectronics and the Personal Computer” article, the central idea is a “message-activity” system. Hmm… sounds pretty message-centric to me.

                              The central idea in writing Smalltalk programs, then, is to define classes which handle communication among objects in the created environment.

                              So the central idea is to define classes. Aha, smoking gun!! But what do these classes actually do? Handle communication among objects. Messages. And yes, in a manual describing what you do in the system, what you do is define classes. Because that’s the only way to actually send and receive messages.

                              What else should he have written, in your opinion?

                              1. 4

                                Please read the first sentence carefully: “I made up the term ‘objects’”. (My emphasis). He is clearly, and entirely consistently both with the facts and his later writings, claiming to have coined the term.

                                He didn’t coin the term, either. The Simula 67 manual formally defines “objects” on page 6.

                                Anyway, you’re way angrier about this than I expected anybody to be, you’re clearly more interested in defending Kay than studying the history, and you’re honestly kinda scaring me right now. I’m bowing out of this whole story.

                                1. 1

                                  OK, you just can’t let it go, can you?

                                  First you make two silly accusations that don’t hold up to even the slightest scrutiny. I mean, they are in the “wet roads cause rain” league of inane. You get shown to be completely wrong. Instead of coming to terms with just how wrong you were, and maybe what your own personal motivations were for making such silly accusations, you just pile on more silly accusations.

                                  What axe do you have to grind with Alan Kay? Because you are clearly not rational when it comes to your ahistorical attempts to throw mud at him.

                                  As I have clearly shown, the only one who hasn’t adequately studied history here is you. Taking one off-the-cuff remark from 2012 and defining that as “having studied history” is beyond the pale, when the entire rest of the history around this subject contradicts your misinterpretation of that out-of-context quote.

                          2. 1

                            Thanks! It was a good read.

                      2. 2

                        I’m sympathetic, but I mean, at some point you have to just give up and admit that the word has been given so many meanings that it’s no longer useful for constructive discussion and just switch to more precise terminology, right? Want to talk about polymorphism? Cool; talk about polymorphism. Want to talk about the actor model? Just say “actor”.

                        1. 2

                          no longer useful for constructive discussion and just switch to more precise terminology, right?

                          I guess you’re right.

                          It’s just that there are still so many books, articles, and talks mentioning and praising OOP that it’s hard to resist. (Do schools still teach OOP?) It’s not useful for constructive discussion, but the ghost of that vague amalgam of OOP ideas is still haunting us, and I can see some of these ideas in the code that I have to work with sometimes. People keep adding pointless getters and setters in the name of encapsulation and so on. Because that’s in some OOP book.
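
                          To make that concrete, here’s a made-up sketch of the kind of thing I mean (hypothetical class, not from any real codebase): every field gets a getter and a setter that enforce nothing, so the “encapsulation” is purely ceremonial.

                          ```java
                          // Getters/setters that add nothing over public fields: no invariants,
                          // no hiding of representation, just ceremony "because encapsulation".
                          public class Customer {
                              private String name;
                              private String email;

                              public String getName() { return name; }
                              public void setName(String name) { this.name = name; }

                              public String getEmail() { return email; }
                              public void setEmail(String email) { this.email = email; }
                          }
                          ```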

                          Ironically… by talking about OOP critically, in a way I’m only perpetuating its existence. But talking about these ideas in isolation doesn’t seem to do any damage to the ghost of OOP. Everyone keeps talking about inheritance being tricky, and yet I keep seeing inheritance hierarchies where they shouldn’t be.

                        2. 1

                          And this is what I was talking about in my top-level comment. It sounds like you’re requiring class inheritance as part of your definition of OOP. Which I think is bunk. I don’t care what Java does or has for features. Any code base that is architected as black-box, stateful, “modules” (classes, OCaml modules, Python packages, JavaScript modules, etc) should count as OOP. Inheritance is just a poor mechanism for code-reuse.

                          There’s no actual reason to include class inheritance in a definition of OOP anymore than we must include monads in our definition of FP (we shouldn’t).

                          1. 1

                            I would say modules rendered black-box by polymorphic composition, with state being optional but allowed in the definition, of course.

                            1. 1

                              I feel like mutable state is actually important for something to be an “object”.

                              And when I say mutable state, I’m also including if the object “represents” mutable state in the outside world without having its own, local, mutable fields in the code. In other words, an object that communicates with a REST API via HTTP represents mutable state because the response from the server can be different every time and we can issue POST requests to mutate the state of the server, etc. So, even if the actual class in your code doesn’t have mutable properties, it can still be considered to “have” mutable state.
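
                              As a rough, made-up sketch of what I mean (hypothetical OrderService, not any real API): the class below has no mutable fields, yet it still “has” mutable state, because the server it talks to does.

                              ```java
                              import java.net.URI;
                              import java.net.http.HttpClient;
                              import java.net.http.HttpRequest;
                              import java.net.http.HttpResponse;

                              // No mutable fields, but this object "represents" mutable state:
                              // the GET can return something different every time, and the POST
                              // mutates the state of the server it wraps.
                              public final class OrderService {
                                  private final HttpClient client = HttpClient.newHttpClient();
                                  private final URI baseUri;

                                  public OrderService(URI baseUri) {
                                      this.baseUri = baseUri;
                                  }

                                  public String fetchOrder(String id) throws Exception {
                                      HttpRequest req = HttpRequest.newBuilder(baseUri.resolve("/orders/" + id)).GET().build();
                                      return client.send(req, HttpResponse.BodyHandlers.ofString()).body();
                                  }

                                  public void createOrder(String json) throws Exception {
                                      HttpRequest req = HttpRequest.newBuilder(baseUri.resolve("/orders"))
                                              .POST(HttpRequest.BodyPublishers.ofString(json))
                                              .build();
                                      client.send(req, HttpResponse.BodyHandlers.ofString());
                                  }
                              }
                              ```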

                              Anything that doesn’t have mutable state, explicitly or implicitly, isn’t really an “object” to me. It’s just an opaque data type. If you disagree, then I must ask the question “What isn’t an object?”

                              1. 1

                                It’s just an opaque data type.

                                A data type rendered opaque by polymorphism, specifically. A C struct with associated functions is not an object, even if the fields are hidden with pointer tricks and even if the functions model mutable state, because I can’t take something else, turn it into another object, and use it safely wherever those functions are used.
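
                                A minimal sketch of the distinction, in Java rather than C for brevity (names are hypothetical): the interface is what makes the thing substitutable, which is the property I’m calling “object” here.

                                ```java
                                // Opaque *and* polymorphic: anything implementing Counter can stand in
                                // wherever a Counter is expected, so callers only ever see the interface.
                                interface Counter {
                                    void increment();
                                    long value();
                                }

                                final class InMemoryCounter implements Counter {
                                    private long count;
                                    public void increment() { count++; }
                                    public long value() { return count; }
                                }

                                // By contrast, a concrete type with hidden fields and mutator methods is
                                // just "opaque data plus functions" (the C-struct-with-functions case):
                                // callers are bound to this exact type, and nothing else can take its place.
                                final class FixedCounter {
                                    private long count;
                                    void increment() { count++; }
                                    long value() { return count; }
                                }
                                ```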

                                1. 1

                                  So are you saying that “objects” have some requirement to be polymorphic per your definition?

                                  Polymorphism is orthogonal to my definition. Neither an opaque data type, nor an “object” need to have anything to do with polymorphism in my mind. An opaque data type can simply be a class with a private field and some “getters”.

                    2. 1

                      it feels like the only way to find good OOP code is to try to find that mythical domain in which inheritance is actually a benefit rather than the enormous bummer it normally is

                      Oof. Inheritance is definitely not a feature I’d call out as a “good idea” part. I’ve been known to use it from time to time, but if you use it a lot and the result still ends up being “good OOP”, that would be nothing short of a miracle.

                  2. 1

                    so how could anyone NOT think that you’re going to “No true Scotsman” them if they show you what they believe to be good OOP?

                    I’m a bit confused. Usually it’s the other way around. I have some concrete criticism against OOP, and I hear “Oh, that’s just because you don’t know OOP. That’s not good OOP.” And that keeps on going forever, at which point it seems that the good OOP is everything that books and articles say, except all the things that people actually are doing in practice, but surely somewhere there must be some pristine OOP code.

                    I haven’t thought that I could pull a “no true Scotsman” the other way, though I guess you’re right. That’s not my intention though.

                    Does OOP have to use class inheritance? Does it have to involve mutable global state? Can you write OOP in the C language?

                    I guess I’m open minded about it. The thing with OOP is that it’s vague and not well defined. The real OO is what I would call actor programming, with real message passing, while the OOP we do now is class-oriented programming, and that is where the confusion starts.

                    Look - the code I write in Java is not all that different from OOP: it always uses a lot of interfaces and DI, and yet I don’t even consider it OOP, for various subtle reasons.

                    Having read and written many articles in OOP debate, I think we as a community are just talking past each other now, criticizing/defending something that is not well defined and subjective.

                    So I think it would be more productive to go through some actual code and talk about, in relation to OOP debate.

                    nobody is going to volunteer code to someone who is basically promising to (publically) criticize it.

                    You can volunteer someone else’s code, I don’t mind. :D

                    I actually thought that this was going to be the default, because people are usually too shy to put forward their own code as the best example.

                    That’s a thing with public OS projects. You put them out there, and you have to accept the fact that someone might… actually read the code and judge it.

                    I promise not to be a douche about it. There’s plenty of my own code on github, none of it pristine, anyone is free to retaliate. :D

                  1. 26

                    I’m a little bit suspicious of this plan. You specifically call out that you already have an anti-OOP bias to the point of even saying “no true Scotsman” and then say you plan to take anything someone sends you and denigrate it. Since no codebase is perfect and every practitioner’s understanding is always evolving, there will of course be something bad you can say especially if predisposed to do so.

                    If you actually want to learn what OOP is like, why not pick up a known-good text on the subject, such as /99 Bottles of OOP/?

                    1. 10

                      I, for one, think this project seems very interesting. The author is correct that criticisms of OOP are often dismissed by saying “that’s just a problem if you don’t do OOP correctly”. Naturally, a response is to ask for an example of a project which does OOP “correctly”, and see if the common critiques still apply to that.

                      Maybe the resulting article will be uninteresting. But I think I would love to see an article which dissects a particular well-written code-base, discusses exactly where and how it encounters issues which seem to be inherent in the paradigm or approach it’s using, and how it solves or works around or avoids those issues. I just hope the resulting article is actually fair and not just a rant about OOP bad.

                      EDIT: And to be clear, just because there seems to be some confusion: The author isn’t saying that examples of good OOP aren’t “real OOP”. The author is saying that critiques of OOP are dismissed by saying “that’s not real OOP”. The author is complaining about other people who use the no true Scotsman fallacy.

                      1. 7

                        Explicitly excluding frameworks seems to be a bit prejudiced, since producing abstractions that encourage reuse is where OOP really shines. OpenStep, for example, is an absolutely beautiful API that is a joy to use and encourages you to write very small amounts of code (some of the abstractions are a bit dated now, but remember it was designed in a world where high-end workstations had 8-16 MiB of RAM). Individual programs written against it don’t show the benefits.

                        1. 1

                          Want to second OpenStep here, essentially Cocoa before they started with the ViewController nonsense.

                          Also, from Smalltalk, at least the Collection classes and the Magnitude hierarchy.

                          And yes, explicitly excluding frameworks is nonsensical. “I want examples of good OO code, excluding the things OO is good at”.

                        2. 2

                          Maybe the resulting article will be uninteresting. But I think I would love to see an article which dissects a particular well-written code-base, discusses exactly where and how it encounters issues which seem to be inherent in the paradigm or approach it’s using, and how it solves or works around or avoids those issues. I just hope the resulting article is actually fair and not just a rant about OOP bad.

                          That’s exactly my plan. I have my biases and existing ideas, but I’ll try to keep it open minded and maybe through talking about concrete examples I will learn something, refine my arguments, or just have a useful conversation.

                          1. 2

                            The author is correct that criticisms of OOP is often dismissed by saying “that’s just a problem if you don’t do OOP correctly”.

                            I know this may look like splitting hairs, but while “that’s only a problem if you don’t do OOP correctly” would be No True Scotsman and invalid, what I see more often is “that’s a problem, and that’s why you should do OOP instead of just labelling random code full of if statements as ‘OOP’” - which is a nomenclature argument, to be sure, but in my view differs from No True Scotsman in that it’s not a generalization but an attempt to make a call towards a different way.

                            I agree that a properly unbiased tear-down of a good OOP project by someone familiar with OOP but without stars in their eyes could be interesting, my comment was based on the tone of the OP and a sinking feeling that that is not what we would get here.

                            1. 1

                              OOP simplifies real objects and properties to make abstraction approachable for developers, and the trade-off is accuracy (correctness) for simplicity.

                              So if you tried to describe the real world adequately in OOP terms, the application would be as complex as the world itself.

                              This makes me think that proper OOP is unattainable in principle, with the only exception – the world itself.

                            2. 3

                              One could argue in favor of Assembly and still be right, which doesn’t make “every program should be written in Assembly” a good statement. It sounds, to me, like saying “English will never have great literature”. It doesn’t make much sense.

                              Microsoft has great materials on Object-Oriented Design tangled inside their .NET documentation; Tackle Business Complexity in a Microservice with DDD and CQRS Patterns is a good example of what you want, but it is not a repository, I am afraid. Design Patterns has great examples of real-world code; they are old (drawing scrollbars) but they are great Object-Oriented Programming examples.

                              Good code is good, no matter the paradigm or language. In my experience, developers lack understanding of the abstractions they are using (HTTP, IO, serialization, patterns, architecture, etc.), and that shows in their code. Their code doesn’t communicate a solution very well because they don’t understand it themselves.

                              1. 3

                                you plan to take anything someone sends you and denigrate it.

                                I could do that, but then it wouldn’t be really useful and convincing.

                                If you actually want to learn what OOP is like, why not pick up a known-good text on the subject, such as /99 Bottles of OOP/

                                Because no real program looks like this.

                              1. 0

                                It’s 2021. You can give me up already. Please.

                                1. 1

                                  I knew something was fishy when that link rendered as already-visited.

                                  1. 1

                                    And there I thought the “this one neat trick” would give it away…

                                1. 10

                                  I think an important direction for future programming language development is better support for writing single programs that span multiple nodes. It’s been done, e.g. in Erlang, but it would be nice to see more tight integration of network protocols into programming languages, or languages that can readily accommodate libraries that do this without a lot of fuss.

                                  There’s still some utility in IDLs like protobufs/capnproto in that realistically the whole world isn’t going to converge on one language any time soon, so having a nice way of describing an interface in a way that’s portable across languages is important for some use cases. But today we write a lot of plumbing code that we probably shouldn’t need to.

                                  1. 3

                                    I couldn’t agree more. Some sort of language feature or DSL or something would allow you to have your services architecture without paying quite so many of the costs for it.

                                    Type-checking cross-node calls; service fusion (i.e. co-locating services that communicate with each other on the same node to eliminate network traffic where possible); RPC inlining (at my company we have RPC calls that amount to just CPU work, but they live in different repos and run on different machines because they’re written by different teams - if the compiler had access to that information it could eliminate that boundary); something like a query planner for complex RPCs that fan out into many other backend RPCs (we pass object IDs between services, but many of them need the data behind those same underlying objects, so they all go out to the data access layer to look up the same objects). Some of that could be done by ops teams with implementation knowledge, but in our case those implementations are changing all the time, so they’d be out of date by the time the ops team figured out what’s going on under the hood. There’s a lot that a Sufficiently Smart Compiler(tm) could do given all of that information.

                                    1. 3

                                      There is also a view that it is a function of the underlying OS (not a particular programming language) to seamlessly provide ‘resources’ (e.g. memory, CPU, scheduling) across networked nodes.

                                      This view is sometimes called Single Image OS (I briefly discussed that angle in that thread as well).

                                      Overall, I agree, of course, that creating safe, efficient and horizontally scalable programs should be much easier.

                                      Hardware is going to continue to drive horizontal scalability capabilities (whether it is multiple cores, or multiple nodes, or multiple video/network cards)

                                      1. 2

                                        I was tempted to add some specifics about projects/ideas I thought were promising, but I’m kinda glad I didn’t, since everybody’s chimed in with stuff they’re excited about and there’s a pretty wide range. Some of these I knew about, others I didn’t, and this turned out to be way more interesting than if it had been about one thing!

                                        1. 2

                                          Yes, but: you need to avoid the mistakes of earlier attempts to do this, like CORBA, Java RMI, DistributedObjects, etc. A remote call is not the same as an in-process call, for all the reasons called out in the famous Fallacies Of Distributed Computing list. Earlier systems tried to shove that inconvenient truth under the rug, with the result that ugly things happened at runtime.

                                          On the other hand, Erlang has of course been doing this well for a while.

                                          I think we’re in better shape to deal with this now thanks all the recent work languages have been doing to provide async calls, Erlang-style channels, Actors, and better error handling through effect systems. (Shout out to Rust, Swift and Pony!)

                                          1. 2

                                            Yep! I’m encouraged by signs that we as a field have learned our lesson. See also: https://capnproto.org/rpc.html#distributed-objects

                                            1. 1

                                              Cap’nProto is already on my long list of stuff to get into…

                                          2. 2

                                            Great comment, yes, I completely agree.

                                            This is linked from the article, but just in case you didn’t see it, http://catern.com/list_singledist.html lists a few attempts at exactly that. Including my own http://catern.com/caternetes.html

                                            1. 2

                                              This is what work like Spritely Goblins is hoping to push forward

                                              1. 1

                                                I think an important direction for future programming language development is better support for writing single programs that span multiple nodes.

                                                Yes!

                                                I think the model that has the most potential is something near to tuple spaces. That is, leaning in to the constraints, rather than trying to paper over them, or to prop up anachronistic models of computation.

                                                1. 1

                                                  better support for writing single programs that span multiple nodes.

                                                  That’s one of the goals of Objective-S. Well, not really a specific goal, but more a result of the overall goal of generalising to components and connectors. And components can certainly be whole programs, and connectors can certainly be various forms of IPC.

                                                  Having support for node-spanning programs also illustrates the need to go beyond the current call/return focus in our programming languages. As long as the only linguistically supported way for two components to talk to each other is a procedure call, the only way to do IPC is transparent RPCs. And we all know how well that turned out.

                                                  1. 1

                                                    indeed! Stuff like https://www.unisonweb.org/ looks promising.

                                                  1. 17

                                                    These rules make perfect sense in a closed, Google-like ecosystem. Many of them don’t make sense outside of that context, at least not without serious qualifications. The danger with articles like this is that they don’t acknowledge the contextual requirements that motivate each practice, making them liable to be cargo-culted into situations where they end up doing more harm than good.

                                                    Automate common tasks

                                                    Absolutely — unless building and maintaining that automation takes more time than just doing it manually. Which tends to happen, especially when you don’t have a team dedicated to infrastructure, and spending time on automation necessarily means not spending time on product development. Programmers love to overestimate the cost of toil, and the benefit of avoiding it; and to underestimate the cost of building and running new software.

                                                    Stubs and mocks make bad tests

                                                    Stubs and mocks are tools for unit testing, just one part of a complete testing breakfast. Without them, it’s more difficult to achieve encapsulation, build strong abstractions, and keep complex systems coherent. You need integration tests, absolutely! But if you just have integration tests, you’re stacking the deck against yourself architecturally.

                                                    Small frequent releases

                                                    No objection.

                                                    Upgrade dependencies early, fast, and often

                                                    Big and complex dependencies, subject to CVEs, and especially if they interface with out-of-process stuff that may not retain a static API? Absolutely. Smaller dependencies, stuff that just serves a single purpose? It’s make-work, and adds a small amount of continuous risk to your deployments — even small changes can introduce big bugs that skirt past your test processes — which may not be the best choice in all environments.

                                                    Expert makes everyone’s update

                                                    (Basically: update your consumers for them.) This one in particular is so pernicious. The relationship between author and consumer is one to many, with no upper bound on the many. Authors always owe some degree of care and responsibility to their consumers, but not, like, total fealty. That’s literally impossible in open ecosystems, and even in closed ones, taking it to this extreme rarely makes sense in the cost/benefit sense. Software is always an explorative process, and needs to change to stay healthy; extending authors’ domain of responsibility literally into the codebases of their consumers makes change just enormously difficult. That’s appropriate in some circumstances, where the cost of change is very high! But the cost of change is not always very high. Sometimes, often, it’s more important to let authors evolve their software relatively unconstrained, then to bind them to Hyrum’s Law.

                                                    1. 9

                                                      Stubs and mocks are tools for unit testing, just one part of a complete testing breakfast. Without them, it’s more difficult to achieve encapsulation, build strong abstractions, and keep complex systems coherent.

                                                      extending authors’ domain of responsibility.. into the codebases of their consumers.. is appropriate in some circumstances..

                                                      The second bullet here rebuts the first if you squint a little. When subsystems have few consumers (the predominant case for integration tests), occasionally modifying a large number of tests is better than constantly relying on stubs and mocks.

                                                      You can’t just dream up strong abstractions on a schedule. Sometimes they take time to coalesce. Overly rigid mocking can prematurely freeze interfaces.

                                                      1. 2

                                                        I’m afraid I don’t really understand what you’re getting at here. I want to! Do you maybe have an example?

                                                        You can’t just dream up strong abstractions on a schedule. Sometimes they take time to coalesce. Overly rigid mocking can prematurely freeze interfaces.

                                                        I totally agree! But mocking at component encapsulation boundaries isn’t a priori rigid, I don’t think?

                                                        When subsystems have few consumers (the predominant case for integration tests), occasionally modifying a large number of tests is better than constantly relying on stubs and mocks.

                                                        I understand integration tests as whole-system, not subsystem. Not for you?

                                                        1. 2

                                                          I need to test one subsystem. I could either do that in isolation using mocks to simulate its environment, or in a real environment. That’s the trade-off we’re talking about, right? When you say “integration tests make it difficult to achieve encapsulation” I’m not sure what you mean. My best guess is that you’re saying mocks force you to think about cross-subsystem interfaces. Does this help?

                                                          1. 2

                                                            What is a subsystem? Is it a single structure with state and methods? A collection of them? An entire process?

                                                            edit:

                                                            I need to test one subsystem. I could either do that in isolation using mocks to simulate its environment, or in a real environment. That’s the trade-off we’re talking about, right? When you say “integration tests make it difficult to achieve encapsulation” I’m not sure what you mean.

                                                            Programs are a collection of components that provide capabilities and depend on other components. So in the boxes-and-lines architecture diagram sense, the boxes. They encapsulate the stuff they need to do their jobs, and provide their capabilities as methods (or whatever) to their consumers. This is what I’m saying should be testable in isolation, with mocks (fakes, whatever) provided as dependencies. Treating them independently in this way encourages you to think about their APIs, avoid exposing internal details, etc. etc. — all necessary stuff. I’m not saying integration tests make that difficult, I’m saying if all you have is integration tests, then there’s no incentive to think about componentwise APIs, or to avoid breaking encapsulation, or whatever else. You’re treating the whole collection of components as a single thing. That’s bad.
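
                                                            As a hedged sketch of that (all names hypothetical): the component only sees an interface, so a test can hand it a hand-written fake instead of the real thing and exercise it in isolation.

                                                            ```java
                                                            // The dependency boundary: the component only knows this interface.
                                                            interface PaymentGateway {
                                                                boolean charge(String accountId, long amountCents);
                                                            }

                                                            // The component encapsulates its logic and exposes one capability.
                                                            final class CheckoutService {
                                                                private final PaymentGateway gateway;

                                                                CheckoutService(PaymentGateway gateway) {
                                                                    this.gateway = gateway;
                                                                }

                                                                boolean checkout(String accountId, long amountCents) {
                                                                    if (amountCents <= 0) {
                                                                        return false; // never even talk to the gateway
                                                                    }
                                                                    return gateway.charge(accountId, amountCents);
                                                                }
                                                            }

                                                            // In a test, a fake stands in for the real gateway, so CheckoutService
                                                            // can be exercised without any network or external service.
                                                            final class AlwaysApprovesGateway implements PaymentGateway {
                                                                public boolean charge(String accountId, long amountCents) {
                                                                    return true;
                                                                }
                                                            }
                                                            ```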

                                                            If you mean subsystem as a subset of inter-related components within a single application, well, I wouldn’t test anything like that explicitly.

                                                            1. 2

                                                              All I mean by it is something a certain kind of architecture astronaut would use as a signal to start mocking :) I’ll happily switch to “component” if you prefer that. In general conversations like this I find all these nouns to be fairly fungible.

                                                              More broadly, I question your implicit premise that encapsulation and whatnot is something to pursue as an end in itself. When I program I try to gradually make the program match its domain. My programs tend to start out as featureless blobs and gradually acquire structure as I understand a domain and refactor. I don’t need artificial pressures to progress on this trajectory. Even in a team context, I don’t find teams that use them to be better than teams that don’t.

                                                              I wholeheartedly believe that tests help inexperienced programmers learn to progress on this trajectory. But unit vs integration is in the noise next to tests vs no tests.

                                                              1. 2

                                                                But unit vs integration is in the noise next to tests vs no tests.

                                                                My current company is a strong counterpoint against this.

                                                                Lots of integration tests, which have become sprawling, slow, and flaky.

                                                                Very few unit tests – not coincidentally, the component boundaries are not crisp, how things relate is hard to follow, and dependencies are not explicitly passed in (so you can’t use fakes). Hence unit tests are difficult to write. It’s a case study in the phenomenon @peterbourgon is describing.

                                                                1. 2

                                                                  I’ve experienced it as well. I’ve also experienced the opposite, codebases with egregious mocking that were improved by switching to integration tests. So I consider these categories to be red herrings. What matters is that someone owns the whole, and takes ownership of the whole by constantly adjusting boundaries when that’s needed.

                                                                  1. 2

                                                                    codebases with egregious mocking

                                                                    Agreed, I’ve seen this too.

                                                                    So I consider these categories to be red herrings.

                                                                    I don’t think this follows though. Ime, the egregious mocking always results from improper application code design or improper test design. That is, any time I’ve seen a component like that, the design (of the component, of the test themselves, or of higher parts of the system in which the component is embedded) has always been faulty, and the hard-to-understand mocks would melt away naturally when that was fixed.

                                                                    What matters is that someone owns the whole, and takes ownership of the whole by constantly adjusting boundaries when that’s needed.

                                                                    Per the previous point, ownership alone won’t help if the owner’s design skills aren’t good enough. I see no way around this, though I wish there were.

                                                                2. 2

                                                                  More broadly, I question your implicit premise that encapsulation and whatnot is something to pursue as an end in itself. When I program I try to gradually make the program match its domain. My programs tend to start out as featureless blobs and gradually acquire structure as I understand a domain and refactor. I don’t need artificial pressures to progress on this trajectory. Even in a team context, I don’t find teams that use them to be better than teams that don’t.

                                                                  This is a fine process! Follow it. But when you put your PR up for review or whatever, this process needs to be finished, and I need to be looking at well-thought-out, coherent, isolated, and, yes, encapsulated components. So I think it is actually a goal in itself. Technically it’s meant to motivate coherence and maintainability, but I think it’s an essential part of those things, not just a proxy for them.

                                                        2. 5

                                                          Stubs and mocks are tools for unit testing, just one part of a complete testing breakfast. Without them, it’s more difficult to achieve encapsulation, build strong abstractions, and keep complex systems coherent. You need integration tests, absolutely! But if you just have integration tests, you’re stacking the deck against yourself architecturally.

                                                          Traditional OO methodology encourages you to think of your program as loosely coupled boxes calling into each other, and your unit test should focus on exact one box, and stub out all the other boxes. But it’s not a suitable model for everything.

                                                          Consider a simple function for calculating factorial of n: when you write a unit test for it, you wouldn’t stub out the * operation, you take it for granted. But in a pure OO sense, the * operation is a distinct “box” that the factorial function is calling into, so a unit test that doesn’t stub out * is technically an integration test, and a “real” unit test should stub it out too. But we know that the latter is just meaningless (you’ll essentially be re-implementing *, but for a small set of operands in the stubs) and we still happily call the former a unit test.
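
                                                          A tiny, hypothetical JUnit-style sketch of that point: the test exercises factorial directly and simply takes * for granted, and nobody would call it an integration test for doing so.

                                                          ```java
                                                          import static org.junit.jupiter.api.Assertions.assertEquals;
                                                          import org.junit.jupiter.api.Test;

                                                          class FactorialTest {
                                                              static long factorial(long n) {
                                                                  return n <= 1 ? 1 : n * factorial(n - 1);
                                                              }

                                                              @Test
                                                              void smallValues() {
                                                                  // The * operation is a dependency in the strictest sense, but
                                                                  // stubbing it out would just mean re-implementing it in the test.
                                                                  assertEquals(1, factorial(0));
                                                                  assertEquals(120, factorial(5));
                                                                  assertEquals(3_628_800, factorial(10));
                                                              }
                                                          }
                                                          ```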

                                                          A more suitable model for this scenario is to think of some of the dependencies as an implementation detail, and instead of stubbing them out, use either the real thing or something that replicates its behavior (called “fakes” at Google). These boxes might still be dependencies in a technical sense (e.g. subject to dependency injection), but they should be considered “hidden” in an architectural sense. The * operation in the former example is one such dependency. If you are unit testing some web backend, databases often fall into this category too.

                                                          Still, the real world is quite complex, and there are often cases that straddle the line between a loosely-coupled-box dependency and a mostly-implementation-detail dependency. Choosing between them is a constant tradeoff and requires evaluation of usage patterns. Even the * operation could cross over from the latter category to the former, if you are implementing a generic function that supports both real number multiplications and matrix multiplications, for example.

                                                          1. 6

                                                            Consider a simple function for calculating factorial of n: when you write a unit test for it, you wouldn’t stub out the * operation, you take it for granted. But in a pure OO sense, the * operation is a distinct “box” that the factorial function is calling into, so a unit test that doesn’t stub out * is technically an integration test, and a “real” unit test should stub it out too.

                                                            Imo this is a misunderstanding (or maybe that’s what you’re arguing?). You should only stub out (and use DI for) dependencies with side effects (DB calls, network calls, File I/O, etc). Potentially if you had some really slow, computationally expensive pure function, you could stub that too. I have never actually run into this use-case but can imagine reasons for it.

                                                            1. 2

                                                              I think we’re broadly in agreement.

                                                              But in a pure OO sense, the * operation is a distinct “box” that the factorial function is calling into, so a unit test that doesn’t stub out * is technically an integration test

                                                              Well, these terms aren’t well defined, and I don’t think this is a particularly useful definition. The distinct boxes are the things that exist in the domain of the program (i.e. probably not language constructs) and act as dependencies to other boxes (i.e. parameters to constructors). So if factorial took multiply as a dependency, sure.

                                                              instead of stubbing them out, use either the real thing or something that replicates its behavior

                                                              Names, details, it’s all fine. The only thing I’m claiming is important is that you’re able to exercise your code, at some reasonably granular level of encapsulation, in isolation.

                                                              If you have a component that’s tightly coupled to the database with bespoke SQL, then consider it part of the database, and use “the real thing” in tests. Sure. Makes sense. But otherwise, mocks (fakes, whatever) are a great tool to get to this form of testability, which is in my experience the best proxy for “code quality” that we got.

                                                            2. 4

                                                              Absolutely — unless building and maintaining that automation takes more time than just doing it manually. Which tends to happen, especially when you don’t have a team dedicated to infrastructure, and spending time on automation necessarily means not spending time on product development.

                                                              Obligatory relevant XKCDs:

                                                              1. 2

                                                                Stubs and mocks are tools for unit testing,

                                                                Nope.

                                                                Why I don’t mock

                                                                1. 4

                                                                  Nope

                                                                  That mocks are tools for unit testing is a statement of fact?

                                                                  Why I don’t mock

                                                                  I don’t think we’re talking about the same thing.

                                                                  1. 1

                                                                    Mocks are tools for unit testing the same way hammers are tools for putting in screws.

                                                                    1. 2

                                                                      A great way to make pilot holes so you don’t split your board while putting the screw in?

                                                                      1. 1

                                                                        A great way to split hairs without actually putting a screw in? ¯\_(ツ)_/¯

                                                                        1. 1

                                                                          You seem way more interested in dropping zingers than actually talking about your position.

                                                                          1. 1

                                                                            I already spelled out my position in detail in my linked article, which echoes the experience that the Google book from TFA talks about.

                                                                            Should I copy-paste it here?

                                                                            Mocks are largely a unit-testing anti-pattern, they can easily make your tests worse than useless, because you believe you have real tests, but you actually do not. This is worse than not having tests and at least knowing you don’t have tests. (It is also more work). Stubs have the same structural problems, but are not quite as bad as mocks, because they are more transparent/straightforward.

                                                                            Fakes are OK.
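
                                                                            One common way to draw the line, as a hypothetical Swift sketch (names invented for illustration): a mock records interactions and the test asserts on how the code was called, while a fake is a lightweight but genuinely working implementation, so the test asserts on the observable outcome instead.

                                                                                protocol Mailer {
                                                                                    func send(_ message: String, to address: String)
                                                                                }

                                                                                // Mock: records interactions; the test asserts on *how* it was called.
                                                                                final class MockMailer: Mailer {
                                                                                    private(set) var sent: [(message: String, address: String)] = []
                                                                                    func send(_ message: String, to address: String) {
                                                                                        sent.append((message: message, address: address))
                                                                                    }
                                                                                }

                                                                                // Fake: a small but genuinely working implementation (an in-memory "outbox").
                                                                                final class InMemoryMailer: Mailer {
                                                                                    private(set) var outbox: [String: [String]] = [:]
                                                                                    func send(_ message: String, to address: String) {
                                                                                        outbox[address, default: []].append(message)
                                                                                    }
                                                                                }

                                                                                func remind(_ mailer: Mailer) { mailer.send("Meeting at 10", to: "team@example.com") }

                                                                                // Mock-style test: verifies the call happened (couples the test to the implementation).
                                                                                let mock = MockMailer()
                                                                                remind(mock)
                                                                                assert(mock.sent.count == 1)

                                                                                // Fake-style test: verifies the outcome.
                                                                                let fake = InMemoryMailer()
                                                                                remind(fake)
                                                                                assert(fake.outbox["team@example.com"]?.first == "Meeting at 10")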

                                                                            1. 1

                                                                              Mocks, stubs, fakes — substitutes for the real thing. Whatever. They play the same role.

                                                                              1. 1

                                                                                They are not the same thing and do not play the same role.

                                                                                I recommend that you learn why and how they are different.

                                                                                1. 2

                                                                                  I understand the difference, it’s just that it’s too subtle to be useful.

                                                                                  1. 0

                                                                                    I humbly submit that if you think the difference is too subtle to be useful, then you might not actually understand it.

                                                                                    Because the difference is huge. And it seems that Google Engineering agrees with me. Now, the fact that Google Engineering believes something doesn’t automatically make it right; they can mess up like anyone else. On the other hand, they have a lot of really, really smart engineers, and a lot of experience building a huge variety of complex systems. So it seems at least conceivable that all of us (“Google and Me”, LOL) might have, in the tens or hundreds of thousands of engineer-years, figured out that a distinction that may seem very subtle on the surface is, in fact, profound.

                                                                                    Make of that what you will.

                                                                      2. 1

                                                                        I’m sure we’re not talking about the same thing.

                                                                1. 12

                                                                  Not comparing like with like.

                                                                  SQLite is 35% faster reading and writing within a large file than the filesystem is at reading and writing small files. Most filesystems I know are very, very slow at reading and writing small files and much, much faster at reading and writing within large files.

                                                                  For example, for my iOS/macOS performance book, I measured the difference writing 1GB of data in files of different sizes, ranging from 100 files of 10MB each to 100K files of 10KB each.

                                                                  Overall, the times span about an order of magnitude, and even the final step, from individual file sizes of 100KB each to 10KB each, was a factor of 3-4 difference.
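
                                                                  For a sense of what such a measurement looks like, here is a rough Swift sketch of the same experiment (illustrative only, not the code from the book; sizes and paths are made up):

                                                                      import Foundation

                                                                      // Write ~1 GB in files of decreasing size, timing each round.
                                                                      let total = 1_000_000_000
                                                                      let chunkSizes = [10_000_000, 1_000_000, 100_000, 10_000]  // 10 MB ... 10 KB per file
                                                                      let dir = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("smallfiles")
                                                                      try? FileManager.default.createDirectory(at: dir, withIntermediateDirectories: true)

                                                                      for chunk in chunkSizes {
                                                                          let data = Data(repeating: 0xAB, count: chunk)
                                                                          let count = total / chunk
                                                                          let start = Date()
                                                                          for i in 0..<count {
                                                                              try? data.write(to: dir.appendingPathComponent("f\(i).bin"))
                                                                          }
                                                                          print("\(count) files of \(chunk) bytes: \(Date().timeIntervalSince(start))s")
                                                                      }

                                                                      // Compare with writing the same total amount *within* one large file.
                                                                      let bigFile = dir.appendingPathComponent("big.bin")
                                                                      FileManager.default.createFile(atPath: bigFile.path, contents: nil)
                                                                      let handle = try! FileHandle(forWritingTo: bigFile)
                                                                      let data = Data(repeating: 0xAB, count: 10_000)
                                                                      let start = Date()
                                                                      for _ in 0..<(total / data.count) {
                                                                          handle.write(data)
                                                                      }
                                                                      print("one file, 10 KB writes: \(Date().timeIntervalSince(start))s")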

                                                                  1. 12

                                                                    You are technically correct, but you’re looking at this the wrong way. The point of the article is that storing small blobs within a single file is faster than storing them in individual files. And SQLite is a good way to store them in a single file.

                                                                    1. 1

                                                                      It’s a bit more than that (though note that the small vs big thing can be very different between hard disks and SSDs, and between CoW and conventional filesystems). A write to a complete file is not just a data operation; it is an update to both the file contents and the filesystem metadata, including any directories that now include a reference to that file. A write within a file is not comparable because it is just an update to the contents (if it also resizes the file, it updates a smaller set of metadata; on a CoW filesystem, or one that provides cryptographic integrity guarantees, it may also update more metadata). SQLite also provides updates to metadata (in most cases, richer metadata than a filesystem). The filesystem typically provides concurrent updates (though with some quite exciting semantics), which SQLite doesn’t.

                                                                      1. 1

                                                                        The individual-files approach is kind of like the common newbie SQLite mistake of making multiple inserts without an enclosing transaction, wherein you get a disk flush after each individual insert. I’m sure a file update is less expensive, but that’s because the filesystem is only making its own metadata fully durable, not your data. 🤬
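
                                                                        For anyone who hasn’t hit this, a Swift sketch of the mistake and the fix, using the SQLite3 C API (illustrative only; error handling elided):

                                                                            import SQLite3
                                                                            import Foundation

                                                                            var db: OpaquePointer?
                                                                            sqlite3_open(NSTemporaryDirectory() + "demo.db", &db)
                                                                            sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t (v INTEGER)", nil, nil, nil)

                                                                            // Newbie version: every INSERT is its own implicit transaction,
                                                                            // so each one must become durable before the next can start.
                                                                            for i in 0..<1000 {
                                                                                sqlite3_exec(db, "INSERT INTO t VALUES (\(i))", nil, nil, nil)
                                                                            }

                                                                            // Fixed version: one enclosing transaction, one flush at COMMIT.
                                                                            sqlite3_exec(db, "BEGIN", nil, nil, nil)
                                                                            for i in 0..<1000 {
                                                                                sqlite3_exec(db, "INSERT INTO t VALUES (\(i))", nil, nil, nil)
                                                                            }
                                                                            sqlite3_exec(db, "COMMIT", nil, nil, nil)
                                                                            sqlite3_close(db)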

                                                                        As I learn more about filesystems and databases (I’m writing a small b-tree manager as a research project) I’m considering what it is that makes one different from the other. A filesystem is really just a specialized database on raw block storage. (Apple’s former HFS+ is literally built on b-trees.) It would be easy to build a toy filesystem atop a key-value database.

                                                                        The feature set of a filesystem has been pretty much unchanged for decades; even small advances like Apple’s resource forks and Be’s indexing were abandoned, although we do have file metadata attributes on most filesystems now. We really need to rethink it — I love the ideas behind NewtonOS’s “soup”.

                                                                      2. 1

                                                                        I don’t think I am the one looking at it the wrong way ¯\_(ツ)_/¯

                                                                        You might notice that there are two parts of your interpretation of the article:

                                                                        storing small blobs within a single file is faster than storing them in individual files

                                                                        This is true, and fairly trivially so. Note that it has nothing to do with SQLite at all. Pretty much any mechanism will do this.

                                                                        And SQLite is a good way to store them in a single file.

                                                                        That may also be true, but it has nothing to do with the performance claim.

                                                                        Neither of these statements supports the claim of the article that “SQLite is 35% faster than the Filesystem”, not even in combination.

                                                                        In addition, I am pretty dubious about the explanation given (open and close system calls). To me, a much more likely cause is that filesystems will prefetch and coalesce reads and writes within a file, whereas they will not do so across files. So it is actually the filesystem that is giving the speed boost.

                                                                    1. 3

                                                                      Oh, my pet peeve!

                                                                      Rant on

                                                                      Compilers should not silently optimise a loop I’ve written to sum integers to the closed formula version.

                                                                      Never ever. No.

                                                                      If they are smart enough to figure out that there’s a closed formula, then issue a warning telling me about that formula, or maybe tell me about a library function that does it.

                                                                      So I can change the code.

                                                                      Rant off

                                                                      Thank you for your attention.

                                                                      1. 3

                                                                        Compilers should not silently optimise a loop I’ve written to sum integers to the closed formula version.

                                                                        Serious question: why not? If it produces the same result under all possible inputs, what’s your argument against it?

                                                                        1. 1

                                                                          Well, I already wrote what the compiler should do instead.

                                                                          There are essentially 2 possibilities why the loop solution is in there:

                                                                          1. I don’t know the closed form solution
                                                                          2. I do know the closed form solution, but decided not to use it

                                                                          In neither case is silently replacing what I wrote the right answer.

                                                                          If I do know the closed form solution and I put the other solution in there, that was probably for a reason, and the compiler should not override me. For example, I wanted to time the CPU doing arithmetic. And of course that timing is another “output” from this computation, so they do not produce exactly the same result. So in this case, please don’t replace it, or if you do replace it, at least tell me about it.

                                                                          If I don’t know the closed form, then I should probably learn about it, because the code would be better if it used the closed form solution instead of the loop. That would make sure that it is always used, rather than at the whims of the optimiser, which we can’t really control. It also makes compile times faster and makes the code more intention-revealing.
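
                                                                          Concretely, the transformation in question looks like this (Swift sketch, for n ≥ 1):

                                                                              // The loop the compiler might recognise...
                                                                              func sumLoop(_ n: Int) -> Int {
                                                                                  var total = 0
                                                                                  for i in 1...n {
                                                                                      total += i
                                                                                  }
                                                                                  return total
                                                                              }

                                                                              // ...and the closed form it would silently substitute.
                                                                              // Writing this explicitly in the source is the point of the comment above.
                                                                              func sumClosedForm(_ n: Int) -> Int {
                                                                                  n * (n + 1) / 2
                                                                              }

                                                                              assert(sumLoop(1000) == sumClosedForm(1000))  // 500500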

                                                                          1. 3

                                                                            There’s a third reason that is far more common:

                                                                            The closed form would lead to incomprehensible, difficult-to-modify, source code.

                                                                            The entire point of an optimising compiler is to allow the programmer to write clean, readable, maintainable source code and generate assembly that they might be able to write by hand but would probably never want to modify.

                                                                            The transform pass that’s replacing the loop with the closed form doesn’t know if that’s what you wrote in a single function or if this is the result of a load of earlier passes. Typically it will see this kind of thing after inlining and constant propagation. The source code isn’t trying to sum all integers in a range; it’s trying to do something more generic, and it’s only after multiple rounds of transformation and analysis that the compiler can determine that it can aggressively specialise the loop for this particular use.

                                                                            In particular, you typically see the integer sum as a single accumulator in a loop that does a load of other things, so that code at the end can see the total. If you want to use the accumulated value inside the loop, that’s the correct form; otherwise the right thing is to pull it out and compute it at the end with the closed form. Imagine you do that by hand. Now you want to debug the loop and know the cumulative total, so in debug builds you end up keeping an accumulator around as well, and then discard it and use the closed form. If your closed form computation had bugs, you wouldn’t see them in the debug build. You could stick in an assert that checks that the two are the same. Now your code is a mess. Or you could just use the accumulator for both, have the same code paths for debug and release builds, and have the compiler generate good code for the release builds.
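
                                                                            To make that concrete, the hand-hoisted version ends up looking something like this sketch (illustrative only, not from any real codebase), which is exactly the mess being described:

                                                                                func process(_ values: [Int]) -> Int {
                                                                                    var other = 0
                                                                                    #if DEBUG
                                                                                    var runningTotal = 0        // kept only so it can be inspected while debugging the loop
                                                                                    #endif
                                                                                    for (i, v) in values.enumerated() {
                                                                                        other &+= v &* i        // the "load of other things" the loop is really for
                                                                                        #if DEBUG
                                                                                        runningTotal += i
                                                                                        #endif
                                                                                    }
                                                                                    // Hand-written closed form for the sum of indices, hoisted out of the loop.
                                                                                    let total = values.count * (values.count - 1) / 2
                                                                                    #if DEBUG
                                                                                    assert(total == runningTotal, "closed form drifted from the accumulator")
                                                                                    #endif
                                                                                    return other + total
                                                                                }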

                                                                            If you want your compiler to warn you explicitly, then you need to keep a lot more state around. Compilers are written as sequences of transform passes (with analyses off to the side). They don’t know anything about prior passes, they just take some input and produce some output. If you want them to warn then they need to be able to explain to the programmer why they are warning. Are you happy with compilers requiring an increase in memory by a polynomial factor?

                                                                            1. -1

                                                                              The closed form would lead to incomprehensible, difficult-to-modify, source code.

                                                                              Hard disagree. Trying to infer what an iterative loop will actually compute is what’s hard to figure out, because you have to play computer and keep iterative state in your head, which leads to subtle bugs when you go back and modify that code later.

                                                                              See also: Go To Statement Considered Harmful:

                                                                              My second remark is that our intellectual powers are rather geared to master static relations and that our powers to visualize processes evolving in time are relatively poorly developed. For that reason we should do (as wise programmers aware of our limitations) our utmost to shorten the conceptual gap between the static program and the dynamic process, to make the correspondence between the program (spread out in text space) and the process (spread out in time) as trivial as possible.

                                                                              Minimising the need to trace the dynamic execution of code should always be a goal.

                                                                              allow the programmer to write clean, readable, maintainable source code

                                                                              Exactly. But here it is doing the opposite: allowing the programmer to write low-level, mucky iterative code and infer a clean higher-level semantic description from that lower level code. That I don’t want.

                                                                              Typically it will see…

                                                                              This seems highly contrived and speculative.

                                                                              In particular [computing a sum alongside doing other computation]

                                                                              Again, that example seems extremely contrived. Furthermore, if the loop is doing anything else of substance, keeping the sum around will be entirely negligible.

                                                                              If you want your compiler to warn you explicitly…[the world will implode]

                                                                              Again, lots of speculation.

                                                                              I am also fine with them simply not doing this “optimisation”.

                                                                              1. 3

                                                                                This reply reads like a troll, I have flagged it as such.

                                                                                A trivial(?) counterexample -

                                                                                Fresh from being hired, you’re tasked with the following business-critical code: add all numbers from 1 to 1,000 except those that match FizzBuzz.

                                                                                What I’d do, and any normal person would do, is to write a loop, and check numbers against the rule, not adding them to the total if they match. Presumably the compiler will derive a closed form (which I cannot provide, Wolfram Alpha failed me, but the OEIS sequence gives a hint that it’s complicated).

                                                                                What you would do is write the above, get an answer from the compiler which is the closed form, delete the loop, insert the closed form, add a comment saying “closed issue #14495982” and go on with your day.

                                                                                2 years later, your successor has to implement an urgent enhancement: only exclude those numbers which are mod 5 == 0 or mod 7 == 0. Your closed form is inscrutable and hardly trivial to amend, as opposed to ours, which requires changing 1 variable. (Of course our code might implement Enterprise FizzBuzz where the modands are parametrized, but that way madness lies).
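
                                                                                To make it concrete, the loop version we’re defending looks something like this (Swift sketch):

                                                                                    // Sum 1...1000, skipping anything that matches the rule.
                                                                                    // Changing the rule later means editing these two constants.
                                                                                    let (a, b) = (3, 5)            // FizzBuzz rule today; the enhancement changes this to (5, 7)
                                                                                    var total = 0
                                                                                    for n in 1...1_000 {
                                                                                        if n % a == 0 || n % b == 0 { continue }   // matches the rule: don't add it
                                                                                        total += n
                                                                                    }
                                                                                    print(total)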

                                                                                In summary and conclusion - you should write code for other people to understand, not to make the compiler’s life easier.

                                                                                1. 1

                                                                                  I think there can be a middle ground. Better tooling to show the developer what “no optimization” vs “optimization” will do to your code.

                                                                                  Like… annotated godbolt inline in your IDE, for those of us who have used it to look at C++ code.

                                                                        2. 2

                                                                          Fully agree. I would say this about UB:

                                                                          If correctness is critical, you want the compiler to warn about UB, not use it to infer things. Such as inferring that a branch is unreachable and deleting it (insert Linus rant about deleting null pointer checks), or as in this case, “optimising” away something that is a bug in the source code (infinite loop without side effects on overflow) into something else not prescribed by the source code or exhibited by the unoptimised program.

                                                                          In my own programs, I mostly use natural numbers (unsigned integer is such a misnomer for natural number) – I don’t expect to have much UB, so it should be no problem to turn on such a warning.

                                                                          1. 2

                                                                            Here are some of the reasons why most compilers don’t do this sort of diagnostic:

                                                                            • Adding diagnostics breaks the build if users are compiling with warnings as errors.
                                                                              This is why gcc -Wall hasn’t changed for years.
                                                                            • The user may not understand, care, or be willing to change their code.
                                                                              You can’t, for example, modify the SPEC benchmarks to improve performance.
                                                                              Detecting the loop and optimizing it improves performance for everyone not just the few who fix their code.
                                                                            • Diagnostics are done by an earlier pass; there isn’t enough information to issue a diagnostic.
                                                                              Many optimizations may have modified the intermediate representation, e.g. the loop might have been restructured, contained in an inlined or cloned function, etc.
                                                                            1. 1

                                                                              What a great summary of the sorry state of compilers today … for users.

                                                                              All of these “reasons” are barely more than excuses as to why doing this would be inconvenient for the creators of compilers.

                                                                              None of them invalidate my reason for why doing this is the right thing for users of compilers.

                                                                          1. 5

                                                                            Smells like a reinvention/rediscovery of the access side of lenses. I don’t know if Rust’s type system admits lenses to the level you can have them in a language like Haskell, but I’d suggest reading up on them.

                                                                            1. 3

                                                                              The post author works on the druid GUI toolkit, which makes use of Lenses, so I imagine they’re aware. I’m not sure whether the implementation of them in druid is or can be as sophisticated as Haskell’s, though.

                                                                              1. 1

                                                                                Hmm…

                                                                                Keypaths were released in 1994 as part of NeXT’s EOF framework. When was the Haskell lens library released?
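
                                                                                For readers who haven’t used them, the Swift descendant of those keypaths gives you first-class, composable get/set access, which is exactly the “access side” being described. A minimal sketch:

                                                                                    struct Address { var city: String }
                                                                                    struct Person  { var name: String; var address: Address }

                                                                                    // A writable key path is a first-class, composable getter/setter pair:
                                                                                    // essentially the access side of a lens.
                                                                                    let cityPath: WritableKeyPath<Person, String> = \Person.address.city

                                                                                    var p = Person(name: "Ada", address: Address(city: "London"))
                                                                                    print(p[keyPath: cityPath])          // get: "London"
                                                                                    p[keyPath: cityPath] = "Cambridge"   // set, through the composed path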

                                                                                1. 2

                                                                                  https://julesh.com/2018/08/16/lenses-for-philosophers/ identifies lens-like structures in a 1958 work by Kurt Gödel.

                                                                                  1. 2

                                                                                    Well, the Nimrud Lens goes back to 7th century BC. ¯\_(ツ)_/¯

                                                                                    Seriously, you gotta stop with the Functional Appropriation: just because something is similar to something in FP doesn’t mean that it’s derivative of or a reinvention of the thing in FP. Particularly if the non-FP thing predates the FP thing.

                                                                              1. 3

                                                                                Hmm…isn’t this really just a bug in the test program?

                                                                                It just steps by a constant amount regardless of the number of iterations, so a greater number of iterations pushes the range of inputs way beyond what is reasonable for sin()/cos(), triggering the range reduction.

                                                                                Easy fix: divide whatever your step size is by the number of iterations.
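
                                                                                Roughly what that fix looks like (Swift sketch of a hypothetical benchmark with that shape, not the original test program):

                                                                                    import Foundation

                                                                                    func benchmark(iterations: Int) -> Double {
                                                                                        let range = 2.0 * Double.pi            // total input range we actually care about
                                                                                        let step = range / Double(iterations)  // the fix: scale the step by the iteration count,
                                                                                                                               // instead of stepping a constant amount and wandering
                                                                                                                               // far outside [0, 2π] into the range-reduction path
                                                                                        var x = 0.0
                                                                                        var sum = 0.0
                                                                                        for _ in 0..<iterations {
                                                                                            sum += sin(x) + cos(x)
                                                                                            x += step
                                                                                        }
                                                                                        return sum
                                                                                    }

                                                                                    print(benchmark(iterations: 1_000_000))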

                                                                                1. 2
                                                                                  1. How about detaching a thread that then simply blocks?
                                                                                  // Sketch: assumes hypothetical synchronous helpers, run on a detached background thread
                                                                                  let location = try userLocation()
                                                                                  let conditions = try weatherConditions(for: location)
                                                                                  let (imageData, response) = try URLSession.shared.data(from: conditions.imageURL)
                                                                                  if (response as? HTTPURLResponse)?.statusCode == 200 {
                                                                                      let image = UIImage(data: imageData)
                                                                                  } else {
                                                                                      // failed
                                                                                  }
                                                                                  

                                                                                  The OS already has all these facilities, not sure why we have to recreate them from scratch in user space.

                                                                                  2. Both Combine (Rx,…) and async/await map dataflow onto familiar and seemingly “easy” call/return structures. However, the more you make the code look like it is call/return, the further away you get from what the code actually does, making understanding such code more and more difficult.
                                                                                  1. 3

                                                                                    Apple points out in a Swift concurrency WWDC talk that wasting threads can have a much bigger impact on devices like iPhones. Having 100k worker threads on a modern Linux server isn’t a big deal at all. But on a RAM-constrained device trying to use as little energy as possible that’s not a good idea.

                                                                                    Consider an app that needs to download 200 small files in the background (the example from the video linked above). Blocking in threads, that’s 100 MB of thread stacks alone, not to mention the OS-level data structures and other overhead. On a server that’s negligible. On a brand new 2021 iPhone with 4 GB of RAM that’s 1/40 of physical memory. 1/40 sounds small, but users run dozens of apps at a time. 1/40 of RAM can be 1/4 to 1/2 your entire memory budget for your app. Not a good use of resources.

                                                                                    Update: both replies mention thread stacks are virtual memory, and likely won’t use the full 512 KB allocated for them. Which is a good point. Nevertheless, the async model has proven repeatedly to use less RAM and have lower overhead than a threaded model in multiple applications, most famously nginx vs Apache. Personally I think async/await has more utility on an iPhone than in 99% of web app servers.
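
                                                                                      For reference, a sketch of what that example looks like with a task group (illustrative only; URLs are placeholders and error handling is elided). Nothing blocks a thread per download, and the number of requests in flight stays modest:

                                                                                          import Foundation

                                                                                          func downloadAll(_ urls: [URL], maxInFlight: Int = 10) async throws -> [Data] {
                                                                                              try await withThrowingTaskGroup(of: Data.self) { group in
                                                                                                  var results: [Data] = []
                                                                                                  var iterator = urls.makeIterator()

                                                                                                  // Seed the group with the first batch.
                                                                                                  for _ in 0..<maxInFlight {
                                                                                                      guard let url = iterator.next() else { break }
                                                                                                      group.addTask { try await URLSession.shared.data(from: url).0 }
                                                                                                  }
                                                                                                  // Each time one finishes, start the next, so at most maxInFlight run at once.
                                                                                                  while let data = try await group.next() {
                                                                                                      results.append(data)
                                                                                                      if let url = iterator.next() {
                                                                                                          group.addTask { try await URLSession.shared.data(from: url).0 }
                                                                                                      }
                                                                                                  }
                                                                                                  return results
                                                                                              }
                                                                                          }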

                                                                                    1. 2

                                                                                      Thread stacks are demand paged.

                                                                                        But even if they weren’t, userspace threads are a cleaner abstraction. Async/await is manual thread scheduling.

                                                                                      1. 2

                                                                                        That Apple statement, like a lot of what Apple says on performance and concurrency, is at best misleading.

                                                                                        First, iPhones are veritable supercomputers with amazing CPUs and tons of memory. The C10K problem was coined in 1999, computers then had something like 32-128MB of RAM, Intel would sell you CPUs between 750MHz and 1GHz. And the C10K problem was considered something for the most extreme highly loaded servers. How are you going to get 10K connections on an iPhone, let alone 100K? No client-side workloads reasonably produce 100K worker threads. Wunderlist, for example, used a maximum of 3-4 threads, except for the times when colleagues put in some GCD, at which point that number would balloon to 20-40. Second, Apple technologies such as GCD actually produce way more threads than just having a few worker threads that block. Third, those technologies also perform significantly worse, with more overhead, than just having a few worker threads that block. We will see how async/await does in this context.

                                                                                          For your specific example, downloading 200 (small) files simultaneously is a bad idea, as your connection will max out way before that, at more like 10 simultaneous requests. So you’re not downloading any faster; you are just making each download take 20 times longer. Meaning you increase the load on the client and on the server and add a very high risk of timeouts. Really, really bad idea. If async/await makes such a scenario easier to accomplish and thus more likely to actually happen, that would be a good case for not having it.

                                                                                        Not sure where you are getting your 100MB of thread stacks from. While threads get 512K of stack space, that is virtual memory, so just address space. Real memory is not allocated until actually needed, and it would be a very weird program that would come even close to maxing out the stack space (with deep recursion/nested call stacks + lots of local data on those deeply nested stacks) on all these I/O threads.

                                                                                        And of course iOS has special APIs for doing downloads even while your app is suspended, so you should probably be using those if you’re worried about background performance.

                                                                                        1. 2

                                                                                          This may be true, but it doesn’t have to leak into our code. Golang has been managing just fine with m:n green threads.

                                                                                          1. 1

                                                                                            I do wonder why Apple didn’t design Swift that way. Maybe there are some Obj-C interop issues? I’d love a primary source on the topic.

                                                                                            1. 2

                                                                                              Why it was designed this way is a good question, but Objective-C interop is not the reason.

                                                                                                NeXTstep used cthreads, a user-level threads package.

                                                                                      1. 22

                                                                                          Hmm…doesn’t surprise me. I only had a very brief encounter with a part of the Racket core team, but it was…memorable.

                                                                                          I attended a Racketfest out of curiosity when it was held in my city, with one of the core team in attendance. His presentation was OK, but his behavior during another presentation was truly outlandish. He kept interrupting the presenter, telling him how he was wrong and that everything he was saying was BS. Admittedly the thesis was a bit questionable, but it was still interesting. And if you really, really want to make such a comment, do it in the Q&A. Once. Definitely not by interrupting the presentation. And most definitely not multiple times.

                                                                                        OK, so maybe a one-off. Nope.

                                                                                          The same person was a visitor at my institute a little later. People presented their stuff. One presenter kept trying to tell him that if he would only let him continue with his presentation, it would explain the things he wasn’t getting. Nope. He kept stopping him, saying the terms used were wrong (they weren’t), and refused to let the presenter continue.

                                                                                        At some point, he bluntly said: you are just PhD students, and I will be on the program committees of the conferences where you need to get your papers published, so you better adapt to how I see the world. Pure power play.

                                                                                        Never seen anything like it.

                                                                                        And I personally find what I’ve seen/heard of Linus, for example, absolutely OK. And RMS to me seems a bit weird, but that’s it.

                                                                                        1. 15

                                                                                            Didn’t Amazon do this with The Decree:

                                                                                          1. All teams will henceforth expose their data and functionality through service interfaces.
                                                                                          2. Teams must communicate with each other through these interfaces.
                                                                                          3. There will be no other form of interprocess communication allowed: no direct linking, no direct reads of another team’s data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.
                                                                                          4. It doesn’t matter what technology they use. HTTP, Corba, Pubsub, custom protocols — doesn’t matter.
                                                                                          5. All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.
                                                                                          6. Anyone who doesn’t do this will be fired.
                                                                                          7. Thank you; have a nice day!
                                                                                          1. 6

                                                                                            Rereading this now, it’s not clear to me why a read of a data store is not a service interface. How do you draw the line?

                                                                                            1. 3

                                                                                              It’s a lot easier to version your API than it is to be stuck with a schema that’s now incorrect due to changing business logic.

                                                                                              1. 4

                                                                                                Only if the business logic uses the schema of the data store directly. But I’ve seen use cases where the business logic separates the internal schema from an external “materialized view” schema.

                                                                                                And the quote above says, “it doesn’t matter what technology you use.” If you can use HTTP, well reading from an S3 bucket is HTTP. So this is one of those slippery cases where somebody can provide a “versioned API” that is utter crap and still not solve any of the problems that the decree seems to have been trying to solve.

                                                                                                I guess what I’m saying is: Never under-estimate the ability of humans to follow the letter of the law while violating its spirit. The key here is some way to be deliberate about externally visible changes. Everything else is window dressing.

                                                                                              2. 1

                                                                                                The same way reading a field isn’t a method call.

                                                                                                1. 2

                                                                                                  a) I think you mean “reading a field isn’t part of the interface”? Many languages support public fields, which are considered part of the interface.

                                                                                                    b) Creating a getter method and calling the public-interface requirement done is exactly the scenario I was thinking about when I wrote my comment. What does insisting on methods accomplish?

                                                                                                  c) Reading a value in an S3 bucket requires an HTTP call. Is it more like reading a field or a method call?

                                                                                                  1. 2

                                                                                                    Maybe it is assumed that AWS/Amazon employees at the time were capable of understanding the nuances when provided with the vision. It is not too much of a stretch to rely on a mostly homogenous engineering culture when making such a plan.

                                                                                              3. 4

                                                                                                How so?

                                                                                                  They just decreed a certain architectural style; there was nothing about actually supporting this style (and others!) with first-class tooling.

                                                                                                1. 4

                                                                                                  Thank you; have a nice day!

                                                                                                  OR ELSE

                                                                                                1. 2

                                                                                                    This is a great summary of things to be aware of with C++ ABIs. It’s unfortunate that some libraries go for the other extreme of being header-only. For some prior art here, you might also be interested in COM, which achieved API stability with vtables very well. I really wish there were some more expressive calling convention so we didn’t have to shove C++2x/Rust binaries through the very thin C straw.

                                                                                                  1. 2

                                                                                                    I’m honestly really surprised they’re not just doing COM, straight-up. Nano-COM/XPCOM are lightweight, well-understood, have solid existing tooling, run on all relevant OSes, and map cleanly to JS as-is. Hell, the interface model used heavily in TypeScript even maps very well to COM interface usage (by convergent evolution, not intentionally). That’d be a lot easier than going through the whole stabilize-the-ABI hell C++ always dies on.

                                                                                                    1. 3

                                                                                                      Hey, Mozilla got pretty good mileage out of COM and JS!

                                                                                                      1. 2

                                                                                                        I wonder if the background of the Facebook developers that have been working on the C++ React Native core is a factor. I imagine there are many C++ developers these days that just aren’t familiar with COM. My guess is that many Googlers and ex-Googlers fall into that category, since Google tends to use monolithic statically linked binaries (even Chromium is mostly one giant binary), with no COM or other ABI boundary internally.

                                                                                                        Is Nano-COM an actual thing, e.g. an open-source project, or just an informal name for a minimal subset of COM? And is it practical to use XPCOM outside of the huge Gecko codebase and build environment? If Nano-COM isn’t an actual package and XPCOM is too tied to Gecko, then maybe another factor is that people don’t have a straightforward way of introducing a COM-ish thing into a cross-platform project. I, for one, would be afraid to just copy from the Windows SDK headers, for legal reasons. But then, with everything that Microsoft has open-sourced, I suppose there’s a COM variant under the MIT license somewhere now.

                                                                                                      2. 1

                                                                                                          That’s basically Objective-C, which is essentially “COM with language support”. And incidentally, the COM/Objective-C interop in d’OLE was stellar.

                                                                                                      1. 16

                                                                                                          Great post. I think it’s really important to separate out the additive effect (how much does a person accomplish on their own) and the multiplicative effect (how much do they change the contributions from others). In general, junior developers should have a reasonable additive effect (they get work done) but may have a slightly negative multiplicative effect (everyone they work with needs to mentor them a bit and so they reduce everyone else’s productivity by a few percent). Over time their additive effect goes up and the multiplicative effect at least reaches 1 (they are doing code reviews and other things that make everyone else more productive and offset any time spent by others doing the same for them).

                                                                                                          A more senior developer may have a large additive effect (they’re really productive at writing code) but their real value is from their large multiplicative effect. Someone who makes everyone else on the team 10-20% more productive is vastly more valuable than someone who gets twice as much work done on their own as the average. A really good dev lead can achieve far more than a 20% multiplicative improvement by teaching, preventing people from wasting time implementing things that won’t work or won’t be maintainable, by having the ten-minute bug-hunting conversations that make people look in the right place, and so on.

                                                                                                        If you’re really lucky, you’ll find junior devs with a high multiplicative effect who will do a load of glue activities (writing docs, talking to everyone and making sure everyone has the same mental model of the design, asking useful clarifying questions in meetings, cleaning up the build system, improving test coverage, keeping CI happy). If you find one of these folks, do whatever you can to keep them. They’re going to end up being the real rockstars: the dev leads who make everyone on their team twice as productive as they would otherwise be.

                                                                                                          The folks that make everyone else less productive? They’d need a really big additive effect to make up for the impact of their multiplicative effect. It may be that someone who makes everyone on the team 50% less effective is still a net positive contributor, but it’s spectacularly unlikely. On a team with ten other people, their additive effect needs to be five times the average just to reach the break-even point. This is only really likely if the rest of your team is completely useless. In this case it’s unlikely that there’s a good forward path that involves keeping your existing team.

                                                                                                        1. 2

                                                                                                          Interesting analysis, but I think it’s missing a couple of points and gets some of the numbers sufficiently wrong as to invalidate the results.

                                                                                                          You wrote:

                                                                                                          five times the average just to reach break-even point. This is only really likely if the rest of your team is completely useless

                                                                                                          Let’s see what the research says:

                                                                                                          Study after study shows that the very best designers produce structures that are faster, smaller, simpler, cleaner, and produced with less effort. [12] The differences between the great and the average approach an order of magnitude.

                                                                                                          No Silver Bullet – Fred Brooks

                                                                                                            Let that sink in: the difference between the best designers and the average approaches an order of magnitude. Not the difference between the best and the worst (“completely useless”). That means the outcome you described as spectacularly unlikely is actually spectacularly likely; in fact it is the norm (if we are actually dealing with someone who is brilliant).

                                                                                                            In fact, it means a single very good designer can completely replace that team of 10 you mentioned, and that’s when doing the simplistic thing of just summing individual output, which we know isn’t accurate: team productivity significantly lags the sum of individual productivities (see The Mythical Man-Month). Making the brilliant designer just 10% more effective would be worth a complete engineering salary, which is why Brooks advocated for “Chief Programmer Teams”, a concept that is entirely ordinary in other fields such as law or medicine.

                                                                                                          I probably still wouldn’t want to have “Bob” around, because I don’t think the Bob described fits the 10x designer described by Brooks.

                                                                                                        1. 24

                                                                                                          Dunno…given the description of actual behavior rather than pejorative value judgements, I would much rather have Alice on my team than people who criticise her, and would find it more likely that they are the jerks (or bullies, to be more precise).

                                                                                                          In my experience, workplaces that frown on Alices are nice, but not kind, and actually have way more underhandedness and backstabbing.

                                                                                                          1. 26

                                                                                                              I think the problem here is isolating people and their behaviour. I enjoy blunt and direct people around me; I even encourage them, because it gives me clarity. I even prefer them: I don’t need to decode them. Sometimes, people see me with a person that I enjoy working with and are like “they treat you hard”, and I have to tell them that this is how I want it. Fast. Clear. Cut me off if you got my point and agree or disagree. I’m okay with people being grumpy on a day or - given that I run projects of global scope - people just being tired and therefore sometimes not taking on the mental load of choosing every word carefully. I would still expect those people to treat other people differently and not approach me this way right away.

                                                                                                              The blog post does not talk about the sources of Alice’s behaviour. For example, Alice may take this approach because her feedback is never addressed, which is a quite different category of jerk from someone who just makes that their persona and feels good in it.

                                                                                                              My problem with niceness is: it’s highly cultural. It’s a category that is immediately exclusive, e.g. to foreigners. A lot of people find other cultures’ ways of behaving “nice” actually odd and weird and are insecure around them. For someone whose third language is English, it may be a struggle to fulfill those requirements. I’ve definitely seen the writing of people using English as a foreign language being framed as jerky just because it was direct and to the point and omitted some amount of fluff. When I approached those people, the reason was simple: they had already spent more time than others on formulating their point right; the “niceness” would incur even more cost for them.

                                                                                                            There are jerks out there. But we need to be careful that perceived jerks also exist.

                                                                                                            I also agree with your underhandedness and backstabbing reading. “nice” cultures tend to be conflict-avoidant and conflict always finds its way.

                                                                                                            It’s okay to formulate sharp boundaries around the edges (e.g. using peoples identity to attack their points), but inside of those boundaries, communication is a group exercise and curiosity and willingness to understand other people needs to apply, especially in a more and more interconnected world.

                                                                                                            1. 7

                                                                                                              I think the crux is that there’s an entire “taxonomy of jerks” one could make; Alice doesn’t sound too bad, and Bob sounds like a right cunt. But in between those two there’s a whole lot of other jerks.

                                                                                                                Take the “this is the one true and only correct way to do it, anyone who disagrees is an idiot, and I will keep forcing my views through ad nauseam even when the entire team has already decided against them” kind of jerk. These are “selfless jerks” in a way, but they can be super toxic and demotivating if not dealt with properly by management. These people don’t even need to be abrasive as such (although they can be) and can be very polite; they just keep starting the same tired old discussions all the time.

                                                                                                              They’re not like Bob’s “selfish jerk”, and genuinely think they’re doing it in the good for the company.

                                                                                                                I once called out such a jerk as an “asshole” in a GitHub comment when he reverted my commit without discussion; that commit fixed some parts of our documentation toolchain. The change was a trivial one: it changed a lower-case letter to a capital one in a Go import path, which is what we used everywhere else. Perhaps unfortunate that the capital was used, but fixing that would be a lot of work and inconvenience on people’s local dev machines (not all of whom are dedicated Go devs), so after discussing this at length we decided to just stick with the capital.

                                                                                                                Yet he sneaked it in anyway, in spite of the consensus of just a few weeks prior, simply calling the toolchain “broken”, and reverted my fix (a toolchain I spent quite some time on outside of my regular work duties, I might add, and it wasn’t broken; Go import paths are just case-sensitive and it didn’t account for two import paths differing only in case. Dealing with that is actually an entire can of worms I had already looked into and discussed with him when we talked about changing this company-wide).

                                                                                                                From the outside I looked like the “perceived jerk”, as you put it, as I swore at him in frustration while he was perfectly polite, but that ignores all the context: that we had already discussed this before, came to an agreement, that he decided to just ignore this and kept forcing his opinion through, and that this was the umpteenth subject on which this had happened. HR didn’t see it that way though, “because it’s inappropriate to call people assholes”. True I suppose, but … yeah. And trust me, “asshole” was the filtered polite response and was subtle compared to what I had typed (but didn’t send) originally.

                                                                                                                It’s almost devious and underhanded in a way, especially the taking offence at the “asshole” part and harping on about that while refusing to discuss your own toxic “polite” behaviour. Having to deal with this kind of bullshit for years was a major reason I got burned out at that job, and I’m from the Netherlands, which is probably one of the most direct no-nonsense cultures there is.

                                                                                                              My point is: politeness is not unimportant, but often overrated, especially in a context where you all know each other and know perfectly well that they’re just the abrasive sort but are basically just decent folk (it’s different on a public forum like Lobsters and such).


                                                                                                              Adding to that, I’ve definitely been a “perceived jerk”. Communicating over text is hard; my culture is very direct and also kind of sweary, and even as a skilled non-native speaker at times it can be difficult to fully grasp the subtleties of how something is perceived by native speakers. In all the COVID craze of “remote work is the dog’s bollocks and the future” I think people are forgetting about how hard all of this is, and why I’m skeptical that remote work really is the future (but that’s a different discussion…) It’s probably also a major reason for a lot of Open Source drama.

                                                                                                              I’ve had quite a few people tell me “I thought you were a jerk until I met you in person”, a major reason I always went out of my way to meet new people face-to-face for a pint at the pub (not always easy, since we had people world-wide). I tried very hard to change this, and I think I’ve mostly succeeded at it, but it was a lot of active effort that took years.

                                                                                                              1. 2

                                                                                                                I also fully agree with you there. I have worked as a moderator for years, and before I engage, e.g. in the whole thing I wrote above, I do an assessment of whether it’s worth figuring out. Someone jumping on a board, registering an account and blowing off steam with their first comment? => kick. But someone going on a long and unfair rant after 12 months of participation? Let’s check out what happened, rather than taking everything they said at face value.

                                                                                                                While I’d love to give every person the benefit of the doubt and a long conversation, the day only has 24 hours, so this assessment must be made. But usually, when it does come to that, we’re talking about 1–2 hours of chatting, which is manageable.

                                                                                                                Adding to that, I’ve definitely been a “perceived jerk”. Communicating over text is hard; my culture is very direct and also kind of sweary, and even as a skilled non-native speaker at times it can be difficult to fully grasp the subtleties of how something is perceived by native speakers. In all the COVID craze of “remote work is the dog’s bollocks and the future” I think people are forgetting about how hard all of this is, and why I’m skeptical that remote work really is the future (but that’s a different discussion…) It’s probably also a major reason for a lot of Open Source drama.

                                                                                                                I have a bit of a handle on that, interestingly because I got trained in international relations and marketing. The trick is as easy to state as it is hard to consistently implement: always voice your feelings clearly. “I am frustrated that…”. “I am happy that…”. Cut tons of slack and assume good faith.

                                                                                                                “Lack of training” is also a hard problem in open source.

                                                                                                                I’ve had quite a few people tell me “I thought you were a jerk until I met you in person”, a major reason I always went out of my way to meet new people face-to-face for a pint at the pub (not always easy, since we had people world-wide). I tried very hard to change this, and I think I’ve mostly succeeded at it, but it was a lot of active effort that took years.

                                                                                                                Same :).

                                                                                                                1. 2

                                                                                                                  I have a bit of a handle on that, interestingly because I got trained in international relations and marketing. The trick is as easy to state as it is hard to consistently implement: always voice your feelings clearly. “I am frustrated that…”. “I am happy that…”. Cut tons of slack and assume good faith.

                                                                                                                  I’ve independently discovered the same trick as well. I also wish people would do this more in political/social justice discussions and the like, e.g. “I feel [X] by [Y] because [Z]” rather than “[X] is [Y]!”, but okay, let’s not side-track this too much 🙃

                                                                                                                  I wish people would give more feedback on this type of stuff in general. I once heard through a co-worker “hey, Bob has a real problem with you, I don’t know why”, “oh wow, I had no idea”. I followed up on this later by just asking him, and it turned out it was some really simple stuff like PR review comments (“just do [X]?”) being taken as passive-aggressive. I can (now) see how it was taken like that, but that wasn’t my intention at all. So I explained that, apologized, slightly adjusted the phrasing of my comments, and we actually became quite good friends after this (still are). But if I hadn’t been told this while drunk at 3am in some pub by another coworker then … yeah.

                                                                                                                  Granted, not everyone responds well to this kind of feedback; but if you don’t try you’ll never know, and the potential benefits can be huge, especially in the workplace where you’re “stuck” with the same people for hours every day. I can close a website if I don’t like it; bit harder to do that with your job.

                                                                                                                  I’ve found communicating with native English speakers in general to be (much) harder than communicating with other proficient non-native speakers. Maybe we should just go back to Latin so everyone’s a non-native speaker.

                                                                                                            2. 8

                                                                                                              In my experience, workplaces that frown on Alices are nice, but not kind, and actually have way more underhandedness and backstabbing.

                                                                                                              Extreme examples of this from the real world:

                                                                                                              The leadership style in Elm is extremely aggressive and authoritarian.

                                                                                                              By that I do not mean impolite or rude. It is almost always very civil. But still ultimately aggressive and controlling. — Luke Plant

                                                                                                              … once enough people are piling on to complain or tell you what’s wrong with what you did, you’re going to feel attacked - even if every single comment is worded politely and respectfully. — StackExchange

                                                                                                              1. 3

                                                                                                                Yeah, if someone (or the news) tells me what to think about someone, without telling (or far better SHOWING) me exactly what they did, I tend to just reserve judgement. Even then, it’s easy to make someone look like the jerk by omission of important information.

                                                                                                              1. 3

                                                                                                                I actually chuckled. This is seriously a self-aware wolf moment. This guy is so very, very close to realizing how to fix the problem but is skipping probably the most important step.

                                                                                                                He mentioned single-core performance at least 5 times in the article but completely left out multi-core performance. Even the Moto E, the low-end phone of 2020, has 8 cores to play with. Granted, some of them are going to be efficiency/low-performance cores, but 8 cores nonetheless. Utilize them. Web Workers exist. Please use them. Here’s a library that makes it really easy to use them as well.

                                                                                                                ComLink

                                                                                                                Here’s a video that probably not enough people have watched.

                                                                                                                The main thread is overworked and underpaid

                                                                                                                1. 7

                                                                                                                  The article claims the main performance cost is in DOM manipulation and Workers do not have access to the DOM.

                                                                                                                  1. 1

                                                                                                                    if you’re referring to this:

                                                                                                                    Browsers and JavaScript engines continue to offload more work from the main thread but the main thread is still where the majority of the execution time is spent. React-based apps are tied to the DOM and only the main thread can touch the DOM therefore React-based apps are tied to single-core performance.

                                                                                                                    That’s pretty weak. Any JavaScript application that modifies the DOM is tied to the DOM. It doesn’t mean the logic is tied to the DOM. If it is, then at least in React’s case it means that developers thought rendering, then re-rendering, then rendering again was a good use of users’ computing resources.

                                                                                                                    I haven’t seen their code and I don’t know what kinds of constraints they’re being forced to program under, but React isn’t their bottleneck. Wasteful logic is.

                                                                                                                    1. 2

                                                                                                                      The author’s point is that a top of the line iPhone can mask this “wasteful logic”. Unless developers test their websites on other, less expensive, devices they may not realize that they need to implement some of your suggested fixes to achieve acceptable performance.

                                                                                                                      1. 1

                                                                                                                        You’re right. I missed the point when I read into how he was framing the problem. Excuse me.

                                                                                                                  2. 3
                                                                                                                    1. iPhones also have many cores, so that’s not going to bridge the gap.

                                                                                                                    2. From TFA: “Browsers and JavaScript engines continue to offload more work from the main thread but the main thread is still where the majority of the execution time is spent.”

                                                                                                                    3. See also: Amdahl’s Law

                                                                                                                    1. 1

                                                                                                                      Gonna fight you on all of these points because they’re a bunch of malarkey.

                                                                                                                      iPhones also have many cores, so that’s not going to bridge the gap.

                                                                                                                      If you shift the entire performance window up then everyone benefits.

                                                                                                                      From TFA: “Browsers and JavaScript engines continue to offload more work from the main thread but the main thread is still where the majority of the execution time is spent.”

                                                                                                                      This shouldn’t be the case. If it is, then people are screwing around and running computations in render() when everything should be handled before that. Async components should alleviate this and React Suspense should help a bit with this, but right now I use Redux Saga to move any significant computation to a web worker. React should only be hit when you’re hydrating and diffing. React is not your bottleneck. If anything it should have a near-constant overhead for each operation. You should also note that the exact quote you chose does not mention React but all of JavaScript. Come on.

                                                                                                                      See also: Amdahl’s Law

                                                                                                                      I did. Did you see how much performance you gain by going to 8 identical cores? It’s 6x. Would you consider that to be better than only having 1x performance? I would.

                                                                                                                      1. 1

                                                                                                                        Hmm… if you’re going to call what I write “malarkey”, it would help if you actually had a point. You do not.

                                                                                                                        If you shift the entire performance window up then everyone benefits.

                                                                                                                        Yep, that’s what I said. If everyone benefits, it doesn’t close the gap. You seem to be arguing against something that nobody said.

                                                                                                                        Amdahl’s law … 8 identical cores? 6x speedup

                                                                                                                        Er, you seem not to understand Amdahl’s Law, because it is parameterised and does not yield a number without that parameter, which is the portion of the work that is parallelizable. So saying Amdahl’s Law says you get a speedup of 6x from 8 cores is not just wrong, it is nonsensical.

                                                                                                                        Second, you now write “8 identical cores”. I think we already covered that phones do not have 8 high performance cores, but at most something like 4/4 high/efficiency cores.

                                                                                                                        Finally, even for the exceedingly rare task that is near-perfectly parallelisable, that kind of speedup compared to a non-parallel implementation is unlikely in practice, because parallelising has overhead, and on a phone other resources such as memory bandwidth typically can’t handle many cores going full tilt.
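
                                                                                                                        To put a number on it (a quick illustration of my own, not from the article): Amdahl’s formula is speedup = 1 / ((1 − p) + p/n), where p is the parallelisable fraction, and 8 cores only approach a 6x speedup when p is roughly 0.95:

                                                                                                                        #include <stdio.h>
                                                                                                                        
                                                                                                                        /* Amdahl's Law: speedup on n cores when fraction p of the work is parallelisable. */
                                                                                                                        static double amdahl(double p, int n) {
                                                                                                                            return 1.0 / ((1.0 - p) + p / n);
                                                                                                                        }
                                                                                                                        
                                                                                                                        int main(void) {
                                                                                                                            double fractions[] = { 0.50, 0.75, 0.90, 0.95 };
                                                                                                                            for (int i = 0; i < 4; i++)
                                                                                                                                /* e.g. p = 0.95 on 8 cores gives about 5.9x; p = 0.50 gives only about 1.8x */
                                                                                                                                printf("p = %.2f -> speedup on 8 cores: %.1fx\n",
                                                                                                                                       fractions[i], amdahl(fractions[i], 8));
                                                                                                                            return 0;
                                                                                                                        }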

                                                                                                                        but the main thread is still where the majority of the execution time is spent

                                                                                                                        This shouldn’t be the case … React …

                                                                                                                        The article doesn’t talk about what you think should be the case, but about what is the case, and it’s not exclusively about React.

                                                                                                                  1. 1

                                                                                                                    I was just reading K&R C the other day. It seems that in the first version of C, declarations were optional. Object model in C is surprisingly elegant. If data and methods are separate then they can be evolved separately - only C allows this. On a trivial note, copy paste is better than Inheritance because the copied code can evolve separately instead of changing every time the base class changes.
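
                                                                                                                    To illustrate what I mean by data and methods being kept separate (a minimal sketch of my own, not from K&R): the data lives in a struct, the “methods” are plain functions, and either side can grow without touching the other:

                                                                                                                    #include <stdio.h>
                                                                                                                    
                                                                                                                    /* Data and behaviour kept separate: the struct carries only state... */
                                                                                                                    struct point {
                                                                                                                        double x, y;
                                                                                                                    };
                                                                                                                    
                                                                                                                    /* ...and "methods" are free functions taking the struct as the first argument.
                                                                                                                       Anyone can add new operations later without touching the struct. */
                                                                                                                    static void point_translate(struct point *p, double dx, double dy) {
                                                                                                                        p->x += dx;
                                                                                                                        p->y += dy;
                                                                                                                    }
                                                                                                                    
                                                                                                                    static void point_print(const struct point *p) {
                                                                                                                        printf("(%g, %g)\n", p->x, p->y);
                                                                                                                    }
                                                                                                                    
                                                                                                                    int main(void) {
                                                                                                                        struct point p = { 1.0, 2.0 };
                                                                                                                        point_translate(&p, 3.0, -1.0);
                                                                                                                        point_print(&p);   /* prints (4, 1) */
                                                                                                                        return 0;
                                                                                                                    }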

                                                                                                                    In terms of generality,

                                                                                                                    Pointers > Lexical Scope

                                                                                                                    Function Pointers > Closures, Virtual Methods

                                                                                                                    Gotos > Exceptions

                                                                                                                    Arrays, Structs > Objects

                                                                                                                    Co-routines > Monads

                                                                                                                    C with namespaces, pattern matching, garbage collection, generics, nested functions and defer is the C++ that I wish had happened. Go is good but I miss the syntax of C. I recently came across the Pike scripting language, which looks surprisingly clean.

                                                                                                                    1. 6

                                                                                                                      It seems that in the first version of C, declarations were optional.

                                                                                                                      Yup, which sucked. It combined the lack of compiler checks of a dynamic language with the data-corruption bugs of native code. For instance, what happens when you pass a long as the third argument, to a function whose implementation takes an int for that parameter? 😱

                                                                                                                      Object model in C is surprisingly elegant. If data and methods are separate then they can be evolved separately - only C allows this.

                                                                                                                      Maybe I’m unsure what you’re getting at, but many languages including Objective-C, Swift and Rust allow methods to be declared separately from the data, including adding more methods afterwards, even in separate binaries.

                                                                                                                      copy paste is better than Inheritance because the copied code can evolve separately instead of changing every time the base class changes.

                                                                                                                      But it’s worse than inheritance because, when you fix a bug in the copied code, you have to remember to also fix it every place it was pasted. I had a terrible time of this in an earlier job where I maintained a codebase written by an unrepentant copy/paster. This is the kind of nightmare that led to the DRY principle.

                                                                                                                      1. 2

                                                                                                                        For instance, what happens when you pass a long as the third argument, to a function whose implementation takes an int for that parameter? 😱

                                                                                                                        Usually nothing, or rather, exactly what you would want 😀. Last I checked, K&R C requires function parameters to be converted to the largest matching integral type, so long and int get passed the same way. All floating-point parameters get passed as double. In fact, I remember that when ANSI C came out, one of the consequences was that you could now have actual float parameters. Pointers are all the same size anyway, and there were no struct-by-value parameters.

                                                                                                                        It still wasn’t all roses: messing up argument order or forgetting a parameter. Oops. So function prototypes: 👍😎

                                                                                                                        #include <stdio.h>
                                                                                                                        
                                                                                                                        int a( a, b )
                                                                                                                        int a;
                                                                                                                        int b;
                                                                                                                        {
                                                                                                                             return a+b;
                                                                                                                        }
                                                                                                                        
                                                                                                                        
                                                                                                                        int main()
                                                                                                                        {
                                                                                                                            long c=12;
                                                                                                                            int b=3;
                                                                                                                            printf("%d\n",a(c,b));
                                                                                                                        }
                                                                                                                        
                                                                                                                        [/tmp]cc -Wall hi.c
                                                                                                                        [/tmp]./a.out 
                                                                                                                        15
                                                                                                                        
                                                                                                                        1. 2

                                                                                                                          Usually nothing, or rather, exactly what you would want 😀.

                                                                                                                          Except, of course, when the sizes differed.

                                                                                                                          1. 1

                                                                                                                            No. The sizes do differ in the example. Once again: arguments are passed (and received) as the largest matching integral type.

                                                                                                                            I changed the printf() of the example to show this:

                                                                                                                            	printf("sizeof int: %ld sizeof long: %ld result: %d\n",sizeof b, sizeof c,a(c,b));
                                                                                                                            

                                                                                                                            Result:

                                                                                                                             sizeof int: 4 sizeof long: 8 result: 15
                                                                                                                            
                                                                                                                          2. 1

                                                                                                                            A lot of this is assuming arguments passed in registers. Passing on the stack can result in complete nonsense as you could have misaligned the stack, or simply not made a large enough frame.

                                                                                                                          3. 1

                                                                                                                            I don’t mean copy-paste everything - use functions for DRY, of course … it’s just that, to get the effect of inheritance, copy-paste is better. Inheritance, far from the notions of biology or taxonomy, is similar to a lawyer contract stating that all changes of A will be available to B, just like land inheritance. Every time some maintainer changes a class in React, Angular, Ruby, Java, C++, Rust, Python frameworks and libraries everyone has to change their code. If for every release of a framework you have to rewrite your entire code, calling that code reuse is wrong and fraudulent. If we add any method, rename any method, or change any implementation of any method in a way that is not a trivial fix, we should create a new class instead of asking millions of developers to change their code.

                                                                                                                            when you fix a bug in the copied code, you have to remember to also fix it every place it was pasted.

                                                                                                                            If instead we used copy-paste, there would be no inheritance hierarchy but just flattened code, if that makes sense, and you could modify it without affecting other developers. If we want to add new functionality to an existing class we should use something like plugins/delegation/mixins but never modify the base class … but absolutely no one uses or understands this pattern and everyone prefers to diddle with the base class.

                                                                                                                            In C such massive rewrites won’t happen, because everything is manually wired instead of automatically being inherited. You can always define new methods without worrying about whether you are breaking someone’s precious interface. You can always nest structs and cast them to reuse code written for the previous struct. Combined with judicious use of function pointers and vtables, you will never need to group data and code in classes.
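
                                                                                                                            As a rough sketch of the nesting-and-casting approach I mean (the names are made up, purely for illustration):

                                                                                                                            #include <stdio.h>
                                                                                                                            
                                                                                                                            /* "Base" struct: behaviour is a function pointer, not a method. */
                                                                                                                            struct shape {
                                                                                                                                double (*area)(const struct shape *self);
                                                                                                                            };
                                                                                                                            
                                                                                                                            /* "Derived" struct: embeds the base as its first member, so a pointer
                                                                                                                               to a circle can be safely used as a pointer to a shape. */
                                                                                                                            struct circle {
                                                                                                                                struct shape base;
                                                                                                                                double radius;
                                                                                                                            };
                                                                                                                            
                                                                                                                            static double circle_area(const struct shape *self) {
                                                                                                                                const struct circle *c = (const struct circle *)self;
                                                                                                                                return 3.14159265 * c->radius * c->radius;
                                                                                                                            }
                                                                                                                            
                                                                                                                            int main(void) {
                                                                                                                                struct circle c = { { circle_area }, 2.0 };
                                                                                                                                struct shape *s = (struct shape *)&c;   /* reuse code written for shape */
                                                                                                                                printf("area: %f\n", s->area(s));
                                                                                                                                return 0;
                                                                                                                            }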

                                                                                                                            1. 6

                                                                                                                              Every time some maintainer changes a class in React, Angular, Ruby, Java, C++, Rust, Python frameworks and libraries everyone has to change their code.

                                                                                                                              That is simply not true. There are a lot of changes you can make to a class without requiring changes in subclasses. As a large-scale example, macOS and iOS frameworks (Objective-C and Swift) change in every single OS update, and the Apple engineers are very careful not to make changes that require client code to change, since end users expect that an OS update will not break their apps. This includes changes to classes that are ubiquitously subclassed by apps, like NSView, NSDocument, UIViewController, etc. I could say exactly the same thing about .NET or other Windows system libraries that use OOP.

                                                                                                                              I’m sure that in many open source projects the maintainers are sloppy about preserving source compatibility (let alone binary), because their ‘customers’ are themselves developers, so it’s easy to say “it’s easier to change the signature of this method and tell people to update their code”. But that’s more laziness (or “move fast and break stuff”) than a defining feature of inheritance.

                                                                                                                              In C such massive rewrites won’t happen

                                                                                                                              Yes, because everyone’s terrified of touching the code for fear of breaking stuff. I’ve used code like that.

                                                                                                                              1. 1

                                                                                                                                That is simply not true.

                                                                                                                                How ?

                                                                                                                                In C you would just create a new function; rightfully, touching working code except for bug fixes is taboo. I can probably point to kernel drivers that use C vtables that haven’t been touched in 10 years. If you want to create an extensible function, use a function pointer. How many times has the sort function been reused?
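
                                                                                                                                For the sort example, the reuse comes from the caller supplying the comparison as a function pointer (plain standard-library qsort, nothing project-specific):

                                                                                                                                #include <stdio.h>
                                                                                                                                #include <stdlib.h>
                                                                                                                                
                                                                                                                                /* The comparison is supplied by the caller as a function pointer,
                                                                                                                                   so qsort itself never needs to change for new element types. */
                                                                                                                                static int cmp_int(const void *a, const void *b) {
                                                                                                                                    int x = *(const int *)a, y = *(const int *)b;
                                                                                                                                    return (x > y) - (x < y);
                                                                                                                                }
                                                                                                                                
                                                                                                                                int main(void) {
                                                                                                                                    int v[] = { 4, 1, 3, 2 };
                                                                                                                                    qsort(v, 4, sizeof v[0], cmp_int);
                                                                                                                                    for (int i = 0; i < 4; i++)
                                                                                                                                        printf("%d ", v[i]);   /* 1 2 3 4 */
                                                                                                                                    printf("\n");
                                                                                                                                    return 0;
                                                                                                                                }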

                                                                                                                                OO programmers claim that the average Joe can write reusable code by simply using classes. If even the most well-paid, professional programmers can’t write reusable code, and writing OO code requires extensive training, then we shouldn’t lie about OO being for the average programmer. Even if you hire highly trained programmers, code reuse is fragile, requiring constant vigilance over the base classes and interfaces. Why bother with fragile base classes at all?

                                                                                                                                Technically you can avoid this problem by never touching the base class and always adding new classes and interfaces. I think classes should have a version suffix, but I don’t think it will be a popular idea and it requires too much discipline. OO programmers on average prefer adding a fly method to a fish class as a quick fix instead of creating a bird class, and that’s just a disaster waiting to happen.

                                                                                                                                1. 5

                                                                                                                                  I don’t understand why you posted that link. Apple release notes describe new features, and sometimes deprecations of APIs that they plan to remove in a year or two. They apply only to developers, of course; compiled apps continue to work unchanged.

                                                                                                                                  OO is not trivial, but it’s much better than resorting to flat procedural APIs. Zillions of developers use it on iOS, Mac, .NET, and other platforms.

                                                                                                                                  1. 1

                                                                                                                                    My conclusion - OO is fragile and needs constant rewrites by the developers who use OO code, while procedural APIs are resilient.

                                                                                                                                    1. 9

                                                                                                                                      Your conclusion is not supported by evidence. Look at a big, widely used, C library, such as ICU or libavcodec. You will have API deprecations and removals. Both of these projects do it nicely so you have foo2(), foo3() and so on. In OO APIs, the same thing happens, you add new methods and deprecate the old ones over time. For things like glib or gtk, the churn is even more pronounced.

                                                                                                                                      OO covers a variety of different implementation strategies. C++ is a thin wrapper around C: with the exception of exceptions, everything in C++ can be translated to C (in the case of templates, a lot more C) and so C++ objects are exactly like C structs. If a C/C++ struct is exposed in a header then you can’t add or remove fields without affecting consumers because in both languages a struct can be embedded in another and the size and offsets are compiled into the binary.

                                                                                                                                      In C, you use the opaque pointers idiom to avoid this. In C++ you use the pImpl pattern, where you have a public class and a pointer to an implementation. Both of these require an extra indirection. You can also avoid this in C++ by making the constructor for your class private and having factory methods. If you do this, then only removing fields modifies your ABI, because nothing outside of your library can allocate it. This lets you put fast paths in the header that directly access fields, without imposing an ABI / API contract that prevents adding fields.
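
                                                                                                                                      As a minimal sketch of the opaque pointer idiom (the names here are made up): the public header only declares the struct, so the library can add fields later without breaking consumers:

                                                                                                                                      /* In a real library this is split into widget.h (public) and widget.c (private). */
                                                                                                                                      #include <stdlib.h>
                                                                                                                                      #include <string.h>
                                                                                                                                      
                                                                                                                                      /* Public side: consumers only ever see a declaration, never the layout,
                                                                                                                                         so fields can be added later without breaking API or ABI. */
                                                                                                                                      struct widget;
                                                                                                                                      struct widget *widget_create(void);
                                                                                                                                      void widget_set_label(struct widget *w, const char *label);
                                                                                                                                      void widget_destroy(struct widget *w);
                                                                                                                                      
                                                                                                                                      /* Private side: the actual definition lives inside the library. */
                                                                                                                                      struct widget {
                                                                                                                                          char label[64];
                                                                                                                                          int width, height;   /* adding fields here is invisible to callers */
                                                                                                                                      };
                                                                                                                                      
                                                                                                                                      struct widget *widget_create(void) {
                                                                                                                                          return calloc(1, sizeof(struct widget));
                                                                                                                                      }
                                                                                                                                      
                                                                                                                                      void widget_set_label(struct widget *w, const char *label) {
                                                                                                                                          strncpy(w->label, label, sizeof w->label - 1);
                                                                                                                                      }
                                                                                                                                      
                                                                                                                                      void widget_destroy(struct widget *w) {
                                                                                                                                          free(w);
                                                                                                                                      }
                                                                                                                                      
                                                                                                                                      int main(void) {
                                                                                                                                          struct widget *w = widget_create();
                                                                                                                                          widget_set_label(w, "hello");
                                                                                                                                          widget_destroy(w);
                                                                                                                                          return 0;
                                                                                                                                      }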

                                                                                                                                      In C++, virtual methods are looked up by vtable offset, so you can’t remove virtual functions and you can’t add virtual functions if your class is subclassed. You also can’t change the signature of any existing virtual methods. You can, however, add non-virtual methods, because these do not take part in dynamic dispatch and so are exactly the same as C functions that take the object pointer as the first parameter.

                                                                                                                                      In a more rigid discipline, such as COM, the object model doesn’t allow directly exposing fields and freezes interfaces after creation. This is how most OO APIs are exposed on Windows and we (Microsoft) have been able to maintain source and binary compatibility with programs using these APIs for almost three decades.

                                                                                                                                      In Objective-C, fields (instance variables) are looked up via an indirection layer. Roughly speaking, for each field there’s a global variable that tells you its offset. If you declare a field as having package visibility then the offset variable is not exposed from your library and so can’t be named. Methods are looked up via a dynamic dispatch mechanism that doesn’t use fixed vtable offsets and so you are able to add both fields and methods without changing your downstream ABI. This is also true for anything that uses JIT or install-time compilation (Java, .NET).

                                                                                                                                      You raise the problem of behaviour being automatically inherited, but this is an issue related to the underlying problem, not with the OO framing. If you are just consuming types from a library then this isn’t an issue. If you are providing types to a library (a way of representing a string that’s efficient for your use, or a new kind of control in a GUI, for example), then the library will need to perform operations on that type. A new version of the library may need to perform more operations on that type. If your code doesn’t provide them, then it needs to provide some kind of default. In C, you’d do this with a struct containing callback function pointers that carried its size (or a version cookie) in the first field, so that you could dispatch to some generic code in your functions if the library consumer didn’t provide an implementation. If you’re writing in an OO language then you’ll just provide a default implementation in the superclass.
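
                                                                                                                                      A minimal sketch of that C pattern (made-up names, not from any particular library): the consumer’s callback table carries its own size, so a newer library can fall back to generic code for operations the consumer doesn’t know about:

                                                                                                                                      #include <stddef.h>
                                                                                                                                      #include <stdio.h>
                                                                                                                                      
                                                                                                                                      /* Callback table provided by the library consumer.  The first field is the
                                                                                                                                         size of the struct the consumer was compiled against, so a newer library
                                                                                                                                         can tell which of the later fields are actually present. */
                                                                                                                                      struct string_ops {
                                                                                                                                          size_t size;
                                                                                                                                          size_t (*length)(const char *s);
                                                                                                                                          int    (*compare)(const char *a, const char *b);   /* added in "v2" */
                                                                                                                                      };
                                                                                                                                      
                                                                                                                                      static int default_compare(const char *a, const char *b) {
                                                                                                                                          /* generic fallback used when the consumer predates this operation */
                                                                                                                                          while (*a && *a == *b) { a++; b++; }
                                                                                                                                          return (unsigned char)*a - (unsigned char)*b;
                                                                                                                                      }
                                                                                                                                      
                                                                                                                                      static int lib_compare(const struct string_ops *ops, const char *a, const char *b) {
                                                                                                                                          /* only call the consumer's compare if their struct is new enough */
                                                                                                                                          if (ops->size > offsetof(struct string_ops, compare) && ops->compare)
                                                                                                                                              return ops->compare(a, b);
                                                                                                                                          return default_compare(a, b);
                                                                                                                                      }
                                                                                                                                      
                                                                                                                                      static size_t my_length(const char *s) { size_t n = 0; while (s[n]) n++; return n; }
                                                                                                                                      
                                                                                                                                      int main(void) {
                                                                                                                                          /* Simulate an old consumer: its struct ended right before `compare`. */
                                                                                                                                          struct string_ops ops = { offsetof(struct string_ops, compare), my_length, NULL };
                                                                                                                                          printf("%d\n", lib_compare(&ops, "abc", "abd"));   /* library falls back to default_compare */
                                                                                                                                          return 0;
                                                                                                                                      }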

                                                                                                                                      Oh, and you don’t say what kernel you’re referring to. I can point to code in Linux that’s needed to be rewritten between minor revisions of the kernel because a C API changed. I can point to C++ code in the XNU kernel that hasn’t changed since the first macOS release when it was rewritten from Objective-C to C++. Good software engineering is hard. OO is not a magic bullet but going back to ‘70s-style designs doesn’t avoid the problems unless you’re also willing to avoid writing anything beyond the complexity of things that were possible in the ’70s. Software is now a lot more complex than it was back then. The Version 6 UNIX release was only about 83KLoC: individual components of clang are larger than that today.

                                                                                                                                      1. 0

                                                                                                                                        Your conclusion is not supported by evidence.

                                                                                                                                        It absolutely is. Please reuse code from an earlier version of any framework released in the last 50 years. OO was sold as the magic bullet that will solve all reuse and software engineering problems.

                                                                                                                                        Do you think homeopathy is medicine just because people dress up and play the role of doctors doing science ?

                                                                                                                                        How many times has the sort function been reused by using function pointers ? Washing machines don’t make clothes dirtier than the clothes you put in.

                                                                                                                                        Both of these projects do it nicely so you have foo2(), foo3() and so on.

                                                                                                                                        If they are doing it that way, then that’s the way to go. Function signatures are the only stable interface you need. Don’t use fragile interfaces, classes and force developers to rewrite every time a new framework is released because someone renamed a method.

                                                                                                                                        For the rest of your arguments, why even bother with someone else’s vtables when you can build your own, trivially.

                                                                                                                                        My point is simply this - How is rewriting code, code reuse ?

                                                                                                                                        1. 5

                                                                                                                                          It absolutely is. Please reuse code from an earlier version of any framework released in the last 50 years.

                                                                                                                                          This is what Windows and Mac OS programmers do every day. My experience with COM is the Windows APIs built on it have great API/ABI stability.

                                                                                                                                          1. 1

                                                                                                                                            I don’t know much about COM, but if it provides API/ABI stability then that’s great - the lack of that is exactly what I am complaining about here. It seems to be an IPC mechanism of sorts; how would it compare to REST, which can be implemented on top of basic functions?

                                                                                                                                            1. 5

                                                                                                                                              COM is a language-agnostic ABI for exposing object-oriented interfaces. It has been used to provide stable ABIs for object-oriented Windows APIs for around 30 years. It is not an IPC mechanism, it is a binary representation. It is a strong counterexample to your claim that OO APIs cannot be made stable (and one that I mentioned already in the other thread).

                                                                                                                                              1. 4

                                                                                                                                                I’m not sure about the IPC parts (there is a degree of “hosting”); however, DCOM provides RPC with COM.

                                                                                                                                            2. 6

                                                                                                                                              It absolutely is. Please reuse code from an earlier version of any framework released in the last 50 years. OO was sold as the magic bullet that will solve all reuse and software engineering problems.

                                                                                                                                              I’ve reused code written in C, C++, and Objective-C over multiple decades. Of these, Objective-C is by a very large margin the one that caused the fewest problems. Your argument is ‘OO was oversold, so let’s use the approach that was used back when people found the problems that motivated the introduction of OO’.

                                                                                                                                              How many times has the sort function been reused by using function pointers ? Washing machines don’t make clothes dirtier than the clothes you put in.

                                                                                                                                              I don’t know what this means. Are you trying to claim that C standard library qsort is the pinnacle of API design? It takes a compare function, but not a swap function, so if your structures require any kind of copying beyond a byte-by-byte copy then it’s a problem. How do you reuse C’s qsort with a data type that isn’t a contiguous buffer? With C++’s std::sort (which doesn’t use function pointers), you can sort any data structure that supports random-access iteration.

                                                                                                                                              If they are doing it that way, then that’s the way to go. Function signatures are the only stable interface you need.

                                                                                                                                              That’s true, if your library is producing types but not consuming them. If code in your library needs to call into code provided by library consumers, then this is not the case. Purely procedural C interfaces are easy to keep backwards compatible if they are not doing very much. The zlib interface, for example, is pretty trivial: consume a buffer, produce a buffer. The more complex a library is, the harder it is to maintain a stable API. OO gives you some tools that help, but it doesn’t solve the problem magically.

                                                                                                                                              Don’t use fragile interfaces, classes and force developers to rewrite every time a new framework is released because someone renamed a method.

                                                                                                                                              Absolutely none of that is intrinsic to OO. If you rename a C struct field or a function, people will need to rewrite their code. The set of things that you can break without breaking compatibility is strictly larger in an OO language than in a purely procedural language.

                                                                                                                                              For the rest of your arguments, why even bother with someone else’s vtables when you can build your own, trivially.

                                                                                                                                              Why use any language feature when you can just roll your own in macro assembly?

                                                                                                                                              • Compilers are aware of the semantics and so can perform better optimisations.
                                                                                                                                              • Compilers are aware of the semantics and so can give better error messages.
                                                                                                                                              • Compilers are aware of the semantics and so can do better type checking.
                                                                                                                                              • Consistency across implementations: C library X and C library Y use different idioms for vtables (e.g. compare ICU and glib: two completely different vtable models). Library users need to learn each one, increasing their cognitive burden. Any two libraries in the same OO language will use the same dispatch mechanism.

                                                                                                                                              My point is simply this - How is rewriting code, code reuse ?

                                                                                                                                              Far better in OO languages (and far better in hybrid languages that provide OO and generic abstractions) than in purely procedural ones. This isn’t the ’80s anymore. No one is claiming that OO is a magic bullet that solves all of your problems.

                                                                                                                                              1. 1

                                                                                                                                                Are you trying to claim that C standard library qsort is the pinnacle of API design?

                                                                                                                                                Personal attacks are not welcomed in this forum or any forum. If you can’t use technical arguments to debate you are never going to win.

                                                                                                                                                It is an example of code reuse that absolutely doesn’t break.

                                                                                                                                                Absolutely none of that is intrinsic to OO. If you rename a C struct field or a function, people will need to rewrite their code.

                                                                                                                                                It is absolutely intrinsic to OO because interfaces and classes are multiple levels deep. It is a fractal of bad design. Change one thing and everything breaks.

                                                                                                                                                There is a strong culture of not breaking interfaces in C and using versioning but the opposite is true for OO where changing the base class and interface happens for every release. Do you actually have fun rewriting code between every new release of an MVC framework ?

                                                                                                                                                Why use any language feature when you can just roll your own in macro assembly?

                                                                                                                                                Again, personal attacks are not welcomed in this forum or any forum.

                                                                                                                                                Vtables are trivial. They are not a new feature. All your optimizations can equally apply to vtables.

                                                                                                                                                This isn’t the ’80s anymore.

                                                                                                                                                Lies don’t become truths just because time has passed.

                                                                                                                                                If code in your library needs to call into code provided by library consumers, then this is not the case.

                                                                                                                                                Use function pointers to provide hooks, or am I missing something?

                                                                                                                                                OO is fragile. Procedural code is resilient.

                                                                                                                                                1. 6

                                                                                                                                                  Are you trying to claim that C standard library qsort is the pinnacle of API design?

                                                                                                                                                  Personal attacks are not welcomed in this forum or any forum. If you can’t use technical arguments to debate you are never going to win.

                                                                                                                                                  That was not an ad hominem, that was an attempt to clarify your claims. It was unclear what you were claiming with references to a sort function. An ad hominem attack looks more like this:

                                                                                                                                                  Do you think homeopathy is medicine just because people dress up and play the role of doctors doing science ?

                                                                                                                                                  This is an ad hominem attack and one that I ignored when you made it, because I’m attempting to have a discussion on technical aspects.

                                                                                                                                                  It is an example of code reuse that absolutely doesn’t break.

                                                                                                                                                  It’s also an example of an interface with trivial semantics (it’s covered in the first term of most undergraduate computer science courses) and whose requirements have been stable for longer than C has been around. The C++ std::sort template is also stable and defaults to using OO interfaces for defining the comparison (overloads of the compare operators). The Objective-C -sort family of methods on the standard collection classes are also unchanged since they were standardised in 1992. The Smalltalk equivalents have remained stable since 1980.

                                                                                                                                                  You have successfully demonstrated that it’s possible to write stable APIs in situations where the requirements are stable. That’s orthogonal to OO vs procedural. If you want to produce a compelling example, please present something where a C library has changed the semantics of how it interacts with a type provided by the library consumer (for example a plug-in filter to a video processing library, a custom view in a GUI, or similar) and an OO library making the same change has required more code modification.

                                                                                                                                                  Absolutely none of that is intrinsic to OO. If you rename a C struct field or a function, people will need to rewrite their code.

                                                                                                                                            It is absolutely intrinsic to OO because interfaces and classes are multiple levels deep. It is a fractal of bad design. Change one thing and everything breaks.

                                                                                                                                                  This is an assertion, but it is not supported by evidence. I have provided examples of the same kinds of breaking changes being required in widely used C libraries that do non-trivial things. You have made a few claims here:

                                                                                                                                                  • Something about interfaces. I’m not sure what this is, but COM objects are defined in terms of interfaces and Microsoft is still able to support the same interfaces in 2021 that we were shipping for Windows 3.1 (though since we no longer support 16-bit binaries these required a recompile at some point between 1995 and now).
                                                                                                                                            • Classes are multiple levels deep. This is something that OO enables, but not something that it requires. The original GoF design patterns book recommended favouring composition over inheritance, and some OO languages don’t even support inheritance. Most modern C++ style guides favour composition with templates over inheritance. Inheritance is useful when you want to define a subtype relationship with code reuse (a minimal sketch of the composition alternative follows this list).
                                                                                                                                                  • Something (OO in general? A specific set of OO patterns? Some OO library that you don’t like?) is a fractal of bad design. This is an emotive and subjective claim, not one that you have supported. Compare your posts with the article that I believe coined that phrase: It contains dozens of examples of features in PHP that compose poorly.
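
                                                                                                                                            Since “favour composition over inheritance” is doing a lot of work in that second point, here is a minimal sketch of what it looks like in practice (the names are invented purely for illustration):

                                                                                                                                            // Hypothetical example: Widget reuses Logger's behaviour by holding one (has-a)
                                                                                                                                            // rather than deriving from it (is-a), so later changes to Logger's hierarchy
                                                                                                                                            // don't ripple into Widget's type.
                                                                                                                                            #include <iostream>
                                                                                                                                            #include <string>
                                                                                                                                            
                                                                                                                                            class Logger {
                                                                                                                                            public:
                                                                                                                                                void log(const std::string &msg) { std::cout << msg << '\n'; }
                                                                                                                                            };
                                                                                                                                            
                                                                                                                                            class Widget {
                                                                                                                                                Logger logger;   // composition: a collaborator, not a base class
                                                                                                                                            public:
                                                                                                                                                void click() { logger.log("clicked"); }
                                                                                                                                            };
                                                                                                                                            
                                                                                                                                            int main() {
                                                                                                                                                Widget w;
                                                                                                                                                w.click();   // prints "clicked"
                                                                                                                                            }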

                                                                                                                                            There is a strong culture of not breaking interfaces in C and of using versioning, but the opposite is true for OO, where changing the base class and interface happens with every release. Do you actually have fun rewriting code between every new release of an MVC framework?

                                                                                                                                                  You’re comparing culture, not language features. You can write code today against the OpenStep specification from 1992 that will compile and run fine on modern macOS with Cocoa (I know of some code that has been through this process). That’s an OO MVC API that’s retained source compatibility for almost 30 years. The only breaking changes were the switch from int to NSInteger for better support for 64/32-bit compatibility and these changes also affected the purely procedural APIs. They were not breaking changes for code targeting 32-bit platforms. The changes over the ’90s in the Classic MacOS Toolbox (C APIs) were far more invasive.

                                                                                                                                                  A lot of JavaScript frameworks and pretty much everything from Google make breaking API changes every few months but that’s an issue of developer culture, not one of the language abstractions.

                                                                                                                                                  Why use any language feature when you can just roll your own in macro assembly?

                                                                                                                                            Again, personal attacks are not welcome in this forum or any forum.

                                                                                                                                                  This is not a personal attack. It is your point. You are saying that you should not use a feature of a language because you can implement it in a lower-level language. Why stop at vtables?

                                                                                                                                            Vtables are trivial. They are not a new feature. All your optimizations can equally apply to vtables.

                                                                                                                                            No they can’t. It is undefined behaviour to write to the vtable pointer of a C++ object during the object’s lifetime. Modern C++ compilers rely on this guarantee for devirtualisation. If the concrete type of a C++ object is known at compile time (after inlining) then calls to virtual functions can be replaced with direct calls.

                                                                                                                                            Here is a reduced example. The C version with custom vtables is called in the function can_not_inline; the C++ version using C++ vtables is called in the function can_inline. In both cases, the object is passed to a function that the compiler can’t see before the call. In the C case, the language semantics allow this to modify the vtable pointer; in the C++ case they do not. This means that the C++ version knows that the foo call has a specific target, while the C version must be conservative. The C++ version can then inline the call, which does nothing in this trivial example, and so the call is elided completely.
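
                                                                                                                                            Roughly, the reduced example looks like this (a sketch along the lines described; the function names follow the description above, and the exact code generation depends on the compiler and optimisation level):

                                                                                                                                            // C-style object with a hand-rolled vtable.
                                                                                                                                            struct CVTable { void (*foo)(void *self); };
                                                                                                                                            struct CObject { const CVTable *vtable; };
                                                                                                                                            
                                                                                                                                            static void foo_impl(void *) {}                  // does nothing, like the C++ foo below
                                                                                                                                            static const CVTable someVTable = { foo_impl };
                                                                                                                                            
                                                                                                                                            // C++ object using the language's vtables.
                                                                                                                                            struct CxxObject { virtual void foo() {} };
                                                                                                                                            
                                                                                                                                            // Defined in another translation unit: the optimiser cannot see their bodies.
                                                                                                                                            void opaque(CObject *);
                                                                                                                                            void opaque(CxxObject *);
                                                                                                                                            
                                                                                                                                            void can_not_inline() {
                                                                                                                                                CObject o{ &someVTable };
                                                                                                                                                opaque(&o);          // C-style semantics: this call may overwrite o.vtable...
                                                                                                                                                o.vtable->foo(&o);   // ...so the compiler must keep an indirect call here
                                                                                                                                            }
                                                                                                                                            
                                                                                                                                            void can_inline() {
                                                                                                                                                CxxObject o;
                                                                                                                                                CxxObject *p = &o;
                                                                                                                                                opaque(p);           // C++ semantics: the dynamic type of *p cannot legally change...
                                                                                                                                                p->foo();            // ...so the call can be devirtualised, inlined, and dropped entirely
                                                                                                                                            }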

                                                                                                                                                  This isn’t the ’80s anymore.

                                                                                                                                                  Lies don’t become truths just because time has passed.

                                                                                                                                                  No, but claims that were believed to be true and were debunked are no longer claimed. In the ’80s, OO was claimed to be a panacea that solved all problems. That turned out to be untrue. Like many other things in programming, it is a set of useful tools that can be applied to make things better or worse.

                                                                                                                                                  If code in your library needs to call into code provided by library consumers, then this is not the case.

                                                                                                                                            Use function pointers to provide hooks, or I am missing something.

                                                                                                                                                  You are missing a lot of detail. Yes, you can provide function pointers as hooks. Now what happens when a new version of your library needs to add a new hook? What happens when that hook interacts in subtle ways with the others? These are the kinds of problems that make OO APIs fragile, but they also make procedural APIs fragile.
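
                                                                                                                                            As a hypothetical illustration (an invented API, not any real library), a function-pointer hook table tends to evolve like this:

                                                                                                                                            // Version 1 of a plug-in API: the consumer fills in a table of hooks.
                                                                                                                                            struct FilterHooksV1 {
                                                                                                                                                void (*on_frame)(void *ctx, const unsigned char *data, int len);
                                                                                                                                            };
                                                                                                                                            
                                                                                                                                            // Version 2 needs another hook. Old consumers never set it, so the library now
                                                                                                                                            // carries a size/version field that it must check on every call, the new hook
                                                                                                                                            // may be null, and the ordering contract between the hooks lives only in prose.
                                                                                                                                            struct FilterHooksV2 {
                                                                                                                                                unsigned struct_size;                        // consumers set this to sizeof their struct
                                                                                                                                                void (*on_frame)(void *ctx, const unsigned char *data, int len);
                                                                                                                                                void (*on_seek)(void *ctx, long offset);     // added in v2, may be null
                                                                                                                                            };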

                                                                                                                                                  OO is fragile. Procedural code is resilient.

                                                                                                                                                  Assertions are not evidence. Assertions that contradict the experience of folks who have been working with these APIs for decades need strong evidence.

                                                                                                                                                  1. 0

                                                                                                                                                    The only breaking changes were the switch from int to NSInteger for better support for 64/32-bit compatibility and these changes also affected the purely procedural APIs.

                                                                                                                                            And that doesn’t count as evidence. Please read what I wrote. OO programmers constantly rename things to break backwards compatibility for no good reason at all. Code rewrite is not code reuse, by definition. Do C programmers do this?

                                                                                                                                            We are discussing how C does things and maintains backwards compatibility, not COM. You say COM and I say POSIX / libc, which is older. The fact that you cite COM is in itself proof that objects are insufficient.

                                                                                                                                            In Python 3 … print was made into a function and almost overnight 100% of code was made useless. This is the daily life of OO programmers for the release of every major version of a framework.

                                                                                                                                            In a database, how many times do you change the schema? Well, structs and classes are like a schema. Inheritance changes the schema. Interface renames change the schema. Changing method names is like changing the column name. Just like in database design, you should not change the schema but use foreign keys to extend the tables with additional data. Perhaps OO needs a new “View” layer like SQL.

                                                                                                                                            No, but claims that were believed to be true and were debunked are no longer claimed … Like many other things in programming, it is a set of useful tools that can be applied to make things better or worse.

                                                                                                                                                    The keyword is “debunked” like snake oil.

                                                                                                                                            I propose a mandatory version suffix for all classes to avoid this. The compiler creates a new class for every change made to a class, no matter how small. If you are changing the class substantially, create a completely new name; don’t ship it by the same name and break all code. For ABI, do something like COM, if that worked.

                                                                                                                                                    These are the kinds of problems that make OO APIs fragile, but they also make procedural APIs fragile.

                                                                                                                                            You are right. They make procedural APIs using vtables fragile, not to mention slow. So use it sparingly? 99% of code should be procedural. I only see vtables being useful in creating bags of event handlers.

                                                                                                                                                    1. 7

                                                                                                                                                      The only breaking changes were the switch from int to NSInteger for better support for 64/32-bit compatibility and these changes also affected the purely procedural APIs.

                                                                                                                                            And that doesn’t count as evidence. Please read what I wrote. OO programmers constantly rename things to break backwards compatibility for no good reason at all. Code rewrite is not code reuse, by definition. Do C programmers do this?

                                                                                                                                            You’ve now changed your argument. You were saying that OO is fragile; now you’re saying that OO programmers (which OO programmers?) rename things and that breaks things. Okay, but if procedural programmers rename things, that also breaks things. So now you’re not talking about OO in general, you’re talking about some specific examples of OO (but you’re not naming them). You’ve been given examples of widely used rich OO APIs that have retained huge degrees of backwards compatibility, so your argument now seems to have nothing to do with OO in general but to be an attack on some unspecified people that you don’t like who write bad code.

                                                                                                                                            We are discussing how C does things and maintains backwards compatibility, not COM. You say COM and I say POSIX / libc, which is older. The fact that you cite COM is in itself proof that objects are insufficient.

                                                                                                                                                      Huh? COM is a standard for representing objects that can be shared across different languages. I also cited OpenStep / Cocoa (the latter is an implementation of the former), which uses the Objective-C object model.

                                                                                                                                                      POSIX provides a much simpler set of abstractions than either of these. If you want to compare something equivalent, how about GTK? It’s a C library that’s a bit newer than POSIX but that lets you do roughly the same set of things as OpenStep. How many GTK applications from even 10 years ago work with a modern version of GTK without modification? GTK 1 to GTK 2 and GTK 2 to GTK 3 both introduced significant backwards compatibility breaks.

                                                                                                                                            In Python 3 … print was made into a function and almost overnight 100% of code was made useless. This is the daily life of OO programmers for the release of every major version of a framework.

                                                                                                                                                      Wait, so your argument is that a procedural API, in a multi-paradigm language changed, which broke everything, and that’s a reason why OO is bad?

                                                                                                                                            In a database, how many times do you change the schema? Well, structs and classes are like a schema. Inheritance changes the schema. Interface renames change the schema. Changing method names is like changing the column name. Just like in database design, you should not change the schema but use foreign keys to extend the tables with additional data. Perhaps OO needs a new “View” layer like SQL.

                                                                                                                                            I don’t even know where to go with that. OO provides a way of expressing the schema. The schema doesn’t change because of OO; the schema changes because the requirements change. OO provides mechanisms for constraining the impact of that change.

                                                                                                                                                      Again, your argument seems to be:

                                                                                                                                                      1. There exists a set of things in OO that, if modified, break backwards compatibility.
                                                                                                                                            2. People who write OO code will change these things.
                                                                                                                                                      3. OO is bad.

                                                                                                                                                      But it’s also possible to say the same thing with OO replaced with procedural, functional, generic, or any other style of programming. If you want to make this point convincingly then you need to demonstrate that the set of things that break backwards compatibility in OO are more likely to be changed than in another style. So far, you have made a lot of assertions, but where I have presented examples of OO APIs with a long history of backwards compatibility and procedural APIs performing equivalent things with weaker guarantees, you have failed to present any examples.

                                                                                                                                            I propose a mandatory version suffix for all classes to avoid this.

                                                                                                                                                      So, like COM?

                                                                                                                                            The compiler creates a new class for every change made to a class, no matter how small. If you are changing the class substantially, create a completely new name; don’t ship it by the same name and break all code.

                                                                                                                                                      So, like COM?

                                                                                                                                            For ABI, do something like COM, if that worked.

                                                                                                                                                      So, you want COM? But you want COM without OO? In spite of the fact that COM is an OO standard?
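
                                                                                                                                            For reference, here is a much-simplified sketch of the COM pattern (not the real Windows headers): a published interface is frozen forever, and new methods go into a new interface that clients discover at run time. Note that these are exactly the vtable-based OO interfaces being objected to.

                                                                                                                                            // Simplified sketch of the pattern, not actual COM declarations.
                                                                                                                                            struct IUnknownish {
                                                                                                                                                virtual void *query_interface(const char *iid) = 0;   // stand-in for QueryInterface
                                                                                                                                                virtual ~IUnknownish() = default;
                                                                                                                                            };
                                                                                                                                            
                                                                                                                                            struct IFilter : IUnknownish {           // published once, then never changed
                                                                                                                                                virtual void process(const char *frame) = 0;
                                                                                                                                            };
                                                                                                                                            
                                                                                                                                            struct IFilter2 : IFilter {              // the next version is a new interface with a new name
                                                                                                                                                virtual void seek(long offset) = 0;
                                                                                                                                            };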

                                                                                                                                                      These are the kinds of problems that make OO APIs fragile, but they also make procedural APIs fragile.

                                                                                                                                            You are right. They make procedural APIs using vtables fragile, not to mention slow. So use it sparingly? 99% of code should be procedural. I only see vtables being useful in creating bags of event handlers.

                                                                                                                                                      It’s not just about vtables, it’s about any kind of rich abstraction that introduces coupling between the producers and consumers of an interface.

                                                                                                                                            Let’s go back to the C sort function that you liked. There’s a C standard qsort. Let’s say you want to sort an array of strings by their locale-aware order. It has a callback, so you can define a comparison function. Now you want to sort an array that has an external indexing structure for quickly finding the first entry with a particular prefix. Oops, qsort doesn’t have any kind of hook for defining how to do the move or for receiving a notification when things are moved, so you can’t keep the data structure up to date; you need to recalculate it after the sort. After a while, you realise that resizing the array is expensive and so you replace it with a skip list. Oh dear, qsort can’t sort anything other than an array, so you now have to implement your own sorting function.
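
                                                                                                                                            A minimal sketch of that first step, assuming the standard qsort and strcoll, looks like this; the comparison callback is the only customisation point the interface offers:

                                                                                                                                            #include <clocale>   // setlocale
                                                                                                                                            #include <cstdlib>   // qsort
                                                                                                                                            #include <cstring>   // strcoll
                                                                                                                                            
                                                                                                                                            // Locale-aware comparison of two elements of a char* array.
                                                                                                                                            static int compare_locale(const void *a, const void *b) {
                                                                                                                                                const char *lhs = *static_cast<const char *const *>(a);
                                                                                                                                                const char *rhs = *static_cast<const char *const *>(b);
                                                                                                                                                return std::strcoll(lhs, rhs);
                                                                                                                                            }
                                                                                                                                            
                                                                                                                                            int main() {
                                                                                                                                                std::setlocale(LC_COLLATE, "");   // use the environment's collation order
                                                                                                                                                const char *words[] = { "pear", "Apple", "orange" };
                                                                                                                                                std::qsort(words, sizeof(words) / sizeof(words[0]), sizeof(words[0]), compare_locale);
                                                                                                                                            }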

                                                                                                                                            Compare that to C++’s std::sort. It is given two random-access iterators. These are objects that define how to access the start and end of some collection. If I need to update some other data structure when entries in the list move, then I overload their copy or move constructors to do this. The iterators know how to move through the collection, so when I move to a skip list I don’t even have to modify the call to std::sort; I just modify the begin() and end() methods on my data structure.
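
                                                                                                                                            As a rough sketch (illustrative names, not production code), the element type can observe its own moves, so the sort call itself needs no extra hooks:

                                                                                                                                            #include <algorithm>
                                                                                                                                            #include <string>
                                                                                                                                            #include <vector>
                                                                                                                                            
                                                                                                                                            int relocations = 0;   // stand-in for updating an external index structure
                                                                                                                                            
                                                                                                                                            struct IndexedString {
                                                                                                                                                std::string value;
                                                                                                                                                IndexedString(std::string v) : value(std::move(v)) {}
                                                                                                                                                IndexedString(IndexedString &&other) noexcept : value(std::move(other.value)) { ++relocations; }
                                                                                                                                                IndexedString &operator=(IndexedString &&other) noexcept {
                                                                                                                                                    value = std::move(other.value);
                                                                                                                                                    ++relocations;   // every move the sort performs is observed here
                                                                                                                                                    return *this;
                                                                                                                                                }
                                                                                                                                                bool operator<(const IndexedString &other) const { return value < other.value; }
                                                                                                                                            };
                                                                                                                                            
                                                                                                                                            int main() {
                                                                                                                                                std::vector<IndexedString> words;
                                                                                                                                                words.emplace_back("pear");
                                                                                                                                                words.emplace_back("apple");
                                                                                                                                                words.emplace_back("orange");
                                                                                                                                                // Switching to a different random-access container only changes what
                                                                                                                                                // begin() and end() return; the call to std::sort stays the same.
                                                                                                                                                std::sort(words.begin(), words.end());
                                                                                                                                            }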

                                                                                                                                                      I am lazy. I regularly work on projects with millions of lines of code. I want to write the smallest amount of code possible to achieve my goal and I want to have to modify the smallest amount of code when the requirements change. Object orientation gives me some great tools for this. So does generic programming. Pure procedural programming would make my life much harder and I don’t like inflicting pain on myself, so I avoid it where possible.

                                                                                                                                                      1. 5

                                                                                                                                                        You have the patience of a saint to continue arguing with this person as they continue to disregard your experience. I certainly don’t have the stamina for it, but despite the bizarreness of the slapfight, your replies are really insightful when it comes to API design.

                                                                                                                                                        1. 3

                                                                                                                                            I had a lot of the same misconceptions (and complete conviction that I was right) in my early 20s, and I am very grateful to the folks who had the patience to educate me. In hindsight, I’m astonished that they put up with me.

                                                                                                                                                        2. 1

                                                                                                                                            This page lists all the changes in Objective-C over the last 10 years. Plenty of renames.

                                                                                                                                                          I think more languages could benefit from COM’s techniques but I don’t think it is a part of the C++ core. I would use a minimal and flexible version of it but it seems to be doing way too many Win32 specific things.

                                                                                                                                                          1. 4

                                                                                                                                            As @david_chisnall has pointed out many times already, this has nothing to do with OO. GTK has exhibited the exact same thing. GCC has done something similar with its internals. Renaming things such that code relying on the API has to change has nothing at all to do with any specific programming paradigm.

                                                                                                                                                            Please stop your screed on this topic. It’s pretty clear from the discussion you are not grasping what is being said. I urge you to spend some time and study the replies above.

                                                                                                                                                            1. 1

                                                                                                                                                              Fine. I would compare GUI development with Tk which is more idiomatic in C.

                                                                                                                                            As I have pointed out, if people used versioning for interfaces, things wouldn’t break every time an architecture astronaut or an undisciplined programmer changes a name, amplifying code rewrites. It is clear that the problem applies to vtables as well, and to naming in general, and is not solved within OO, which exacerbates the effects of simple changes.

                                                                                                                                            3. 6

                                                                                                                                              You can conclude whatever you like, but after taking a look at your blog, I’m going to back away slowly from this discussion and find a better use for my time. Best of luck with your jihad.

                                                                                                                                              1. 2

                                                                                                                                            Glad you discovered my blog. I’d recommend you start with Simula the Misunderstood. The language is a bit coarse, though. The entire discussion has, however, inspired me to write Interfaces: a fractal of bad design. I see myself more like James Randi exposing homeopathy, superstitions, faith healers and fortune telling.

                                                                                                                                1. 7

                                                                                                                                  One thing that really bugs me is how “dreaded” is defined in the survey: “languages that had a high percentage of developers who are currently using them, but have no interest in continuing to do so”.

                                                                                                                                Those two are not the same thing at all. For example, Apple has made clear that Swift is the future and that Objective-C is on the way out, so developers are somewhat obviously interested in learning/using the new thing and anticipate no longer using the old thing. Does that mean they “dread” Objective-C? They might, but the answer to that question does not indicate either way.

                                                                                                                                  1. 4

                                                                                                                                    Apple have never stated anything like that. Apple’s internal use of ObjC remains extremely high, though some of that has to do with legacy factors and the fact that Swift didn’t become ABI stable until two years ago.

                                                                                                                                    I think Swift’s position as an outlier here somewhat disproves the author’s thesis that all languages must pass through a “honeymoon” phase only to inevitably end up in “dreaded” territory.

                                                                                                                                  1. 4

                                                                                                                                    Ranged numeric types, especially combined with linear types (as per my understanding; I’m not a PLT expert), are a pretty awesome feature; it’s encouraging to see the idea being put into practical use.

                                                                                                                                    Unfortunately, the language seems to target an extremely narrow niche. I get that it’s a Google-internal project, and that niche is probably worth a cool $100M+ for them so there’s no pressure to expand its scope.

                                                                                                                                    Things that look like they’re dealbreakers for just about anything I’d like to use this for:

                                                                                                                                    • It looks like you’re only supposed to implement “leaf” libraries with it. It doesn’t look like there’s any kind of FFI, so if what I want to implement would require internally calling down to a C API for another (system) library, I’d effectively have to extend the language or standard library.
                                                                                                                                    • Memory safety is primarily achieved by not supporting dynamic memory allocations. It’d be a different story if combined with something like Rust’s borrow checker. I mean the support for linear types is already there to some extent…

                                                                                                                                    On the other hand, the ability to compile down to C is a bonus compared to Rust.

                                                                                                                                    I guess if you fix those 2 aspects I mentioned you probably end up with ATS anyway… and I have to admit I haven’t quite wrapped my head around that one yet.

                                                                                                                                    1. 1

                                                                                                                                      Ranged numeric types…it’s encouraging to see the idea being put into practical use.

                                                                                                                                      Pascal had subrange types in 1970.

                                                                                                                                      1. 1

                                                                                                                                        It’s been a few decades since I worked with Pascal, but I’m fairly sure Pascal’s support wasn’t as comprehensive as this is. Without linear types, they’re fairly cumbersome to use comprehensively. Specifically, I’m talking about:

                                                                                                                                        if (a < 200)
                                                                                                                                        {
                                                                                                                                           // inside here, a's type is automatically restricted with an upper bound of 200 exclusive
                                                                                                                                        }
                                                                                                                                        

                                                                                                                                        I don’t think Pascal supported this?